It took a few more years before I actually got around to learning it and I have to say I've never picked up a language so quickly. (Which makes sense, it's got the smallest language spec of any of them)
I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
I don't see it. Can you say what 80% you feel like you're getting?
The type system doesn't feel anything alike. I guess the syntax is alike in the sense that Go is a semicolon language, and Rust, though actually basically an ML, deliberately dresses as a semicolon language, but otherwise not really. They're both relatively modern, so you get decent tooling out of the box.
But this feels a bit like somebody telling me that this new pizza restaurant does a cheese pizza that's 80% similar to the Duck Ho Fun from that little place near the extremely tacky student bar. It's not that Duck Ho Fun has nothing in common with cheese pizza; they're both best (in my opinion) if cooked very quickly with high heat. But there's not a lot of commonality.
I read it as “80% of the way to Rust levels of reliability and performance.” That doesn’t mean that the type system or syntax is at all similar, but that you get some of the same benefits.
I might say that, “C gets you 80% of the way to assembly with 20% of the effort.” From context, you could make a reasonable guess that I’m talking about performance.
Rust beats Go in performance, but nothing like how far behind Java, C#, or scripting languages (Python, Ruby, TypeScript, etc.) are in all the work I've done with them. With Go I get most of the performance of Rust with very little effort, plus a fully contained stdlib, test suite, package manager, formatter, etc.
I can only think of two production bugs I've written in Rust this year. Minor bugs. And I write a lot of Rust.
The language has very intentional design around error handling: Result<T,E>, Option<T>, match, if let, functional predicates, mapping, `?`, etc.
Go, on the other hand, has nil and extremely exhausting boilerplate error checking.
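To make the contrast concrete, here's a minimal Go sketch of the explicit-checking pattern being described (parsePort is a hypothetical helper invented for this example, not something from the thread):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort shows the pattern: every fallible step gets its own
// explicit check and error-wrapping return.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, fmt.Errorf("port %d out of range", n)
	}
	return n, nil
}

func main() {
	if _, err := parsePort("not-a-number"); err != nil {
		fmt.Println("error:", err)
	}
}
```

In Rust the same propagation would typically collapse into a `?` after each call, which is the ergonomic difference the comments below go back and forth about.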
Honestly, outside of Python, Ruby, and JavaScript, Go has been one of my worst languages for introducing errors. It's a total pain in the ass to handle errors and exceptional behavior, and this leads to mistakes and stupid gotchas.
I'm so glad newer languages are picking up on and copying Rust's design choices from day one. It's a godsend to be done with null and exceptions.
I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust. I need it for my smaller tasks and scripting. Swift is kind of nice, but it's too Apple centric and hard to use outside of Apple platforms.
I'm honestly totally content to keep using Rust in a wide variety of problem domains. It's an S-tier language.
It could as well be Haskell :) Only partly a joke: https://zignar.net/2021/07/09/why-haskell-became-my-favorite...
> Go... extremely exhausting boilerplate error checking
This actually isn't correct. Go is the only language that makes you think about errors at every step. If you just ignored them and passed them up, like with exceptions or a bare `?`, you'd basically be exchanging handling errors for assuming the whole thing passes or fails.
If you write actual error checking like Go's in Rust (or Java, or any other language), then Go is often less noisy.
It's just two very different approaches to error handling that the dev community is split on. Here's a pretty good explanation from a rust dev: https://www.youtube.com/watch?v=YZhwOWvoR3I
Rust forces you to think about errors exactly as much, but in the common case of passing it on it’s more ergonomic.
OCaml is pretty much that, with a very direct relationship with Rust, so it will even feel familiar.
You can make almost anything faster if you provide more memory to store data in more optimized formats. That doesn't make the language itself faster.
Part of the problem is that Java in the real world requires an unreasonable number of classes and 3rd party libraries. Even for basic stuff like JSON marshaling. The Java stdlib is just not very useful.
Between these two points, all my production Java systems easily use 8x more memory and still barely match the performance of my Go systems.
As for the stdlib, Go's is certainly impressive, but come on, I wouldn't even say that in the general case Java's standard library is smaller. It just so happens that Go was developed with the web in mind almost exclusively, while Java has a wider scope. Nonetheless, the Java standard library is certainly among the best in richness.
It's a matter of preference. I prefer Java's standard library because at least it has a generic Set data structure in it, and C#'s standard library does have a JSON parser.
I don't think discussions about what is in the standard library really refute anything about Go being within the same performance profile, though.
[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
I know on HotSpot they’re planning to make G1 the default for every situation. Even where it would previously choose the serial GC.
The thing people tend to overvalue is the little syntax differences, like how Scala wanted to be a nicer Java, or even ObjC vs Swift before the latter got async/await.
I'm convinced no more than a handful of humans understand all of C# or C++, and inevitably you'll come across some obscure thing and have to context switch out of reading code to learn whatever the fuck a "partial method" or "generic delegate" means, and then keep reading that codebase if you still have momentum left.
https://262.ecma-international.org/16.0/index.html
I don't agree. (And frankly don't like using JS without at least TypeScript.)
It has some strange or weirdly specified features (ASI? HTML-like comments?) and unusual features (prototype-based inheritance? a dynamically bound this?), but IMO it's a small language.
- Regular expressions - not just in the "standard library" but in the syntax.
- An entire module system with granular imports and exports
- Three different ways to declare variables, two of which create temporal dead zones
- Classes with inheritance, including private properties
- Dynamic properties (getters and setters)
- Exception handling
- Two different types of closures/first class functions, with different binding rules
- Async/await
- Variable length "bigint" integers
- Template strings
- Tagged template literals
- Sparse arrays
- for in/for of/iterators
- for await/async iterators
- The with statement
- Runtime reflection
- Labeled statements
- A lot of operators, including bitwise operators and two sets of equality operators with different semantics
- Runtime code evaluation with eval/Function constructor
And honestly it's only scratching the surface, especially of modern ECMAScript.
A language spec is necessarily long. The JS language spec, though, is so catastrophically long that it is a bit hard to load on a low end machine or a mobile web browser. It's on another planet.
By the time you understand all of typescript, your templating environment of choice, and especially the increasingly arcane build complexity of the npm world, you've put in hours comparable to what you'd have spent learning C# or Java for sure (probably more). Still easier than C++ or Rust though.
Modules were added in, like, 2016.
How would the proportion of humans that understand all of Rust compare?
C# is actually fairly complex. I'm not sure if it's quite at the same level as Rust, but I wouldn't say it's that far behind in difficulty for complete understanding.
[1] https://rustfoundation.org/media/ferrous-systems-donates-fer...
So while it has quite a bit of essential complexity (inherent in the design space it operates in: a zero-overhead low-level language with memory safety), I believe it fares better overall.
Like no matter the design, a language wouldn't need 10 different kinds of initializer syntaxes, yet C++ has at least that many.
In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.
The packaging story is better than C++'s or Python's, but that's not saying much; the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory and modules were an afterthought sure speaks volumes about the critical thinking (or lack thereof) that went into the design.
Also I miss being able to use exceptions.
I'm not saying it's awful, it's just a pretty mid language, is all.
But mid is not all that bad and Go has a compelling developer experience that's hard to beat. They just made some unfortunate choices at the beginning that will always hold it back.
The idea that it's natural and accepted that we just have python v3.11, 3.12, 3.13 etc all coexisting, each with their own incompatible package ecosystems, and in use on an ad-hoc, per-directory basis just seems fundamentally insane to me.
Alas there are plenty of people who do[0] - for some reason Go takes architecture astronaut brain and whacks it up to 11, and god help you if you have one or more of those on your team.
[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.
Yes, not even for testing. Use monkey-patching instead.
They do make some sense for swappable doodahs - like buffers / strings / filehandles you can write to - but those tend to be in the lower levels (libraries) rather than application code.
This always feels like one of those “taste” things that some programmers tend to like on a personal level but has almost no evidence that it leads to more real-world success vs any other language.
Like, people get real work done every day at scale with C# and C++. And Java, and Ruby, and Rust, and JavaScript. And every other language that programmers castigate as being huge and bloated.
I'm not saying it's wrong to have a preference for smaller languages, I just haven't seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or fewer bugs.
As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I'm in academia doing ML research where, for all intents and purposes, we work exclusively in Python. We had a massive CSV dataset which required sorting, filtering, and other data transformations. Without getting into details, we had to rerun the entire process when new data came in roughly every week. Even using every trick to speed up the Python code, it took around 3 days.
I got so annoyed by it that I decided to rewrite it in a compiled language. Since it had been a few years since I've written any C/C++, which was only for a single class in undergrad and I remember very little of, I decided to give Go a try.
I was able to learn enough of the language and write up a simple program to do the data processing in less than a few hours, which reduced the time it took from 3+ days to less than 2 hours.
I unfortunately haven't had a chance or a need to write any more Go since then. I'm sure other compiled, GC languages (e.g., Nim) would've been just as productive or performant, but I know that C/C++ would've taken me much longer to figure out and would've been much harder to read/understand for the others who work with me, who pretty much only know Python. I'm fairly certain that if any of them needed to add to the program, they'd be able to do so without wasting more than a day.
I can imagine myself grappling with a language feature unobvious to me and eventually getting distracted. Sure, there are a lot of things unobvious to me, but Go is not one of them, and that influenced the whole environment.
Or, when choosing the right language feature, I could end up with weighing up excessively many choices and still failing to get it right, from the language correctness perspective (to make code scalable, look nice, uniform, play well with other features, etc).
An example not related to Go: bash and rc [1]. Understanding 16 pages of Duff’s rc manual was enough for me to start writing scripts faster than I did in bash. It did push me to ease my concerns about program correctness, though, which I welcomed. The whole process became more enjoyable without bashisms getting in the way.
Maybe it’s hard to measure the exact benefit but it should exist.
> As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I think those problems are related. The more features you have, the more difficult it becomes to avoid strange, surprising interactions. It’s like a pharmacist working with a patient who is taking a whole cocktail of prescriptions; it becomes a combinatorial problem to avoid harmful reactions.
That would be me. I _like_ C#, but there are elements to that language that I _never_ work with on a daily basis, it's just way too large of a language.
Go is refreshing in its simplicity.
C++ is a basket case, it's not really a fair comparison.
Rust is great. One of the stupidest things in modern programming practice is the slapfight between these two language communities.
To add to the above comment, a lot of what Go does encourages readability... Yes, it feels pedantic at moments (error handling), but those cultural and stylistic elements that seem painful to write make reading better.
Portable binaries are a blessing, fast compile times, and the choices made around 3rd party libraries and vendoring are all just icing on the cake.
That 80 percent feeling is more than just the language as written; it's all the things that come along with it...
I keep using the analogy that the tools are just nail guns for office workers, but some people remain sticks in the mud.
This Go community that you speak of isn't bothered by writing the boilerplate themselves in the first place, though. For everyone else the LLMs provide.
For non-trivial tasks, AI is neither of those. Anything you do with AI needs to be carefully reviewed to correct hallucinations and incorporate it into your mental model of the codebase. You point, you shoot, and that's just the first 10-20% of the effort you need to move past this piece of code. Some people like this tradeoff, and fair enough, but that's nothing like a nailgun.
For trivial tasks, AI is barely worth the effort of prompting. If I really hated typing `if err != nil { return nil, fmt.Errorf("doing x: %w", err) }` so much, I'd make it an editor snippet or macro.
You missed it.
If I give a random person off the street a nail gun, circular saw and a stack of wood are they going to do a better job building something than a carpenter with a hammer and hand saw?
> Anything you do with AI needs to be carefully reviewed
Yes, and so does a JR engineer, so do your peers, so do you. Are you not doing code reviews?
If this is meant to be an analogy for AI, it doesn't make sense. We've seen what happens when random people off the street try to vibe-code applications. They consistently get hacked.
> Yes, and so does a JR engineer
Any junior dev who consistently wrote code like an AI model and did not improve with feedback would get fired.
From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile time error.
From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
https://go.dev/doc/faq#unused_variabl...
> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
I believe this was a mistake (one that sadly Zig also follows). In practice there are too many things that wouldn't make sense as compiler errors, so you need to run a linter anyway. When you need to comment out or remove some code temporarily, it won't even build, and then you have to remove a chain of unused vars/imports until it lets you; it's just annoying.
Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
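For illustration, here's the dance being described, as a minimal sketch (expensiveCall is a made-up placeholder): comment out the one line that uses a variable and the build fails until you delete the declaration or blank it out.

```go
package main

import "fmt"

// expensiveCall is a hypothetical placeholder for some computation.
func expensiveCall() int { return 42 }

func main() {
	result := expensiveCall()
	// fmt.Println("got", result) // temporarily commented out while debugging
	// Without the blank assignment below, the program does not compile:
	// "declared and not used: result".
	_ = result
	fmt.Println("done")
}
```

The `_ =` workaround is exactly the kind of noise a warning-based compiler would avoid.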
I believe the correct approach is to offer two build modes: release and debug.
Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
Release is the default, is strict and runs fast.
That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.
At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.
In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".
So it's kinda an orthogonal concern.
What's good for the junior can be good for the senior. I think PL values have leaned a little too hard toward complexity and abstract 'purity', while Go was a break from that which has proved successful but controversial.
I think my favourite bit of Go opinionatedness is the code formatting.
K&R or GTFO.
Oh you don't like your opening bracket on the same line? Tough shit, syntax error.
"This is Go. You write it this way. Not that way. Write it this way and everyone can understand it."
I wish I was better at writing Go, because I'm in the middle of writing a massive and complex project in Go with a lot of difficult network stuff. But you know what they say: if you want to eat a whole cow, you just have to pick an end and start eating.
I don't know, but for me a lot of attacks on Go come from non-Go developers, very often Rust devs. When I started Go, it was always Rust devs in /r/programming pushing their agenda of Rust being the next best thing, the whole "rewrite everything in Rust"...
About 10 years ago I learned Rust, and these days I can barely read the code anymore with the tons of new syntax that got added. It's like they forgot the lessons from C++...
I see it as a bit like Python and Perl. I used to use both but ended up mostly using Python. They're different languages, for sure, but they work in similar ways and have similar goals. One isn't "better" than the other. You hardly ever see Perl now, I guess in the same way there's a lot of technology that used to be everywhere but is now mostly gone.
I wanted to pick a not-C language to write a thing to deal with a complex but well-documented protocol (GD92, and we'll see how many people here know what that is) that only has proprietary software implementing it, and I asked if Go or Rust would be a good fit. Someone told me that Go is great for concurrent programming particularly to do with networks, and Rust is also great for concurrent processing and takes type safety very seriously. Well then, I guess I want to pick apart network packets where I need to play fast and loose with ints and strings a bit, so maybe I'll use Go and tread carefully. A year later, I have a functional prototype, maybe close to MVP, written in Go (and a bit of Lua, because why not).
The Go folks seem to be a lot more fun to be around than the Rust folks.
But at least they're nothing like the Ruby on Rails folks.
By 20% of the effort, do you mean learning curve or productivity?
I think go is fairly small, too, but “size of spec” is not always a good measure for that. Some specs are very tight, others fairly loose, and tightness makes specs larger (example: Swift’s language reference doesn’t even claim to define the full language. https://docs.swift.org/swift-book/documentation/the-swift-pr...: “The grammar described here is intended to help you understand the language in more detail, rather than to allow you to directly implement a parser or compiler.”)
(Also, browsing golang’s spec, I think I spotted an error in https://go.dev/ref/spec#Integer_literals. The grammar says:
decimal_lit = "0" | ( "1" … "9" ) [ [ "_" ] decimal_digits ] .
Given that, how can 0600 and 0_600 be valid integer literals in the examples?)
octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
No, the o/O is optional (hence the square brackets); only the leading zero is required. All of these are valid octal literals in Go:
0600 (zero six zero zero)
0_600 (zero underscore six zero zero)
0o600 (zero lower-case-letter-o six zero zero)
0O600 (zero upper-case-letter-o six zero zero)
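A quick check confirms this: all four spellings compile and denote the same value (384 decimal):

```go
package main

import "fmt"

func main() {
	// Leading-zero octal, with and without the digit separator
	// and the explicit o/O prefix; all equal per the spec grammar.
	fmt.Println(0600, 0_600, 0o600, 0O600) // 384 384 384 384
}
```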
octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
Are there other modern languages that still have that?
We shall not talk about compile time / resource usage differences ;)
I mean, Rust is nice, but compared to when I learned it like 10 years ago, it really looks these days like it took too much of a cue from C++.
While Go syntax is still the same as it was 10 years ago, with barely anything new. Which may anger people, but even so...
The only thing I'd love to see is reduced executable sizes, because pushing large executables over a dinky upload line to remote testing is not fun.
Writing microservices at $DAYJOB feels far easier and less guess-work, even if it requires more upfront code, because it’s clear what each piece does and why.
It really feels like a simpler language and ecosystem compared to Python. On top of that, it performs much better!
Recently I made the same assertions as to Go's advantage for LLM/AI orchestration.
https://news.ycombinator.com/item?id=45895897
It would not surprise me that Google (being the massive services company that it is) would have sent an internal memo instructing teams not to use the Python tool chain to produce production agents or tooling and use Golang.
At work we use Uber's NilAway, so that helps a bit. https://github.com/uber-go/nilaway Though actually having the type system handle it would be nicer.
If I had a magic wand, the only things I would add are better nullability checks, stack traces by default for errors, and exhaustiveness checks for sum types. Other than that, it does everything I want.
Linters such as https://golangci-lint.run will do this for you.
In development: https://github.com/uber-go/nilaway
https://en.wikipedia.org/wiki/Go!_(programming_language)#Con...
Go codebases all look alike. Not only does the language have very few primitives, but the code conventions enforced by the standard library, gofmt, and golangci-lint mean that codebases are structured very similarly.
Many language communities can't even agree on the build tooling.
Consider adding a pre-commit hook if you are allowed to.
I would not use Golang for a big codebase with lots of business logic. Golang has not made a dent in Java usage at big companies; no large company is going to try replacing their Java codebases with Golang because there's no benefit: Java is almost as fast as Golang, has classes, and actually has a richer set of concurrency primitives.
I think go needs some more functional aspects, like iterators and result type/pattern matching.
Go’s lack of inheritance is one of its bolder decisions and I think has been proven entirely correct in use.
Instead of the incidental complexity encouraged by pointless inheritance hierarchies, we go back to structures which bundle data and behaviour, and can compose them instead.
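As a rough illustration of what that composition looks like in Go (Logger and Server are invented names for this sketch), struct embedding gives reuse without an inheritance hierarchy:

```go
package main

import "fmt"

// Logger bundles data and behaviour.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) string { return l.prefix + msg }

// Server composes Logger by embedding it; Log is promoted onto
// Server without any subclass relationship existing.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger{"[srv] "}, ":8080"}
	fmt.Println(s.Log("listening on " + s.addr))
}
```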
Favouring composition over inheritance is not a new idea nor did it come from the authors of Go.
Also the author of Java (Gosling) disagrees with you.
https://www.infoworld.com/article/2160788/why-extends-is-evi...
Structs and interfaces replace classes just fine.
Reuse is really very easy and I use it for several monoliths currently. Have you tried any of the things you’re talking about with go?
There may be no honor amongst thieves but there is honor amongst langdevs, and when they did Go! dirty, Google made clear which one they are.
Status changed to Unfortunate
PL naming code is:
1. Whoever uses the name first, has claim to the name. Using the name first is measured by: when was the spec published, or when is the first repo commit.
2. A name can be reused IFF the author has abandoned the original project. Usually there's a grace period depending on how long the project was under development. But if a project is abandoned then there's nothing to stop someone from picking up the name.
3. Under no circumstances should a PL dev step on the name of a currently active PL project. If that happens, it's up to the most recently named project to change their name, not the older project even if the newer project has money behind it.
4. Language names with historical notoriety are essentially "retired" despite not being actively developed anymore.
All of this is reasonable, because the PL namespace is still largely unsaturated*. There are plenty of one syllable English words that are out there for grabs. All sorts of animals, verbs, nouns, and human names that are common for PLs are there for the taking. There's no reason to step on someone else's work just because there's some tie in with your company's branding.
So it's pretty bottom basement behavior for luminaries like Ken Thompson and Rob Pike to cosign Google throwing around their weight to step on another dev's work, and then say essentially "make me" when asked to stop.
* This of course does not apply to the single-letter langs, but even still, that namespace doesn't really have 25 langs under active development.
Moreover the author of Go! personally requested that Google not step on his life's work. The man had dedicated a decade and authored a book and several papers on the topic, so it wasn't a close call. Additionally C# built on C++ which built on C. Go had no relationship to Go! at all. Homage and extension are one thing, but Go was not that.
A policy of "do no evil" required Google to acquiesce. Instead they told him to pound sand.
(Ignore the "Last Update: 2013-09-06" on the project page - that's the date that SourceForge performed an automatic migration. Any real activity on the project seems to have petered out around 2002, with one final file released in 2003.)
Due diligence would have revealed the Go! project in a standard literature review.
I can and will fault the developers, because even if they had overlooked it, they were explicitly asked to stop and declined, having determined (themselves) that there would be no confusion. So it isn't that they overlooked it or ignored it: they were asked directly and responded "we don't care".
If the reasoning boils down to "We can do this because you are small and we are big" I cannot support that.
Instead of “int x”
You have “var x int”
Which obscures the type, making it harder to read the code. The only justification is that 16 years ago, some guy thought he was being clever. For 99.99% of code, it's a worse syntax. Nobody does eight levels of pointer indirection in typical everyday code.
var
    foo: char;
Go was developed by many of the minds behind C, and inertia would have led them to C-style declarations. I don't know if they've ever told anybody why they went with the Pascal style, but I would bet money on Pascal-style declarations simply being easier and faster for computers to parse. And it doesn't just help with compile speed; it also makes syntax highlighting far more reliable and speeds up tooling.
Sure, it's initially kind of annoying if you're used to the C style of type before identifier, but it's something you can quickly get to grips with. And as time has gone on, it seems like a style that a lot of modern languages have adopted. Off the top of my head, I think this style is in TypeScript, Python type hints, Go, Rust, Nim, Zig, and Odin. I asked Claude for a few more examples and apparently it's also used by Kotlin, Swift, and various flavors of ML and Haskell.
But hey, if you're still a fan of type before variable, PHP has your back.
class User {
public int $id;
public ?string $name;
public function __construct(int $id, ?string $name) {
$this->id = $id;
$this->name = $name;
}
}
I don't know if this is the reason, but Robert Griesemer, one of the three original guys, comes from a Pascal/Modula background.
And in the process makes it significantly harder for human eyes to find the boundary between identifier and type.
You can write
var x = 5
how would that work if the type had to be first? Languages that added inference later tend to have “auto” as the type which looks terrible.
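For reference, the three declaration forms under discussion, sketched in Go:

```go
package main

import "fmt"

func main() {
	var a int // explicit type; a gets the zero value 0
	var b = 5 // type inferred from the initializer, no "auto" needed
	c := 5    // short declaration, only allowed inside functions
	fmt.Println(a, b, c) // 0 5 5
}
```

Because the type comes after the name, dropping it for inference leaves the syntax intact, which is the point being made about "auto".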
I remember making a little web app and seeing the type errors pop up magically in all the right places where I missed things in my structs. It was a life-changing experience.
Proponents say it has nothing under the hood. I see under-the-hood-magic happen every time.
1) Slice append is one example. Try removing an element from a slice: you must rely on some magic and awkward syntax, and there's no clear explanation of what actually happens under the hood (all the docs show is that a slice is a pointer into an underlying array).
2) enums creation is just nonsense
3) To make matters worse, at work we have a linter that forbids merging a branch if you a) don't do if err != nil for every case b) have >20 for & if/else clauses. This makes you split functions in many pieces, turning your code into enterprise Java.
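For what it's worth, the remove-an-element idiom being complained about in 1) looks like this (since Go 1.21 the standard library also offers slices.Delete, which wraps the same operation):

```go
package main

import "fmt"

func main() {
	s := []int{10, 20, 30, 40}
	i := 1 // remove the element at index 1
	// Shift the tail left over the removed slot, then shrink the
	// length. The underlying array is shared and mutated in place,
	// which is the hidden behaviour the complaint above refers to.
	s = append(s[:i], s[i+1:]...)
	fmt.Println(s) // [10 30 40]
}
```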
It feels like implementing the same things in Go takes 2x as long as in Rust.
On the positive side,
* interfaces are simpler, without some of Rust's stricter limitations; the only problem with them is that in the using code, you can't tell one from a struct
* it's really fast to pick up; I needed just a couple of days to see examples and start coding stuff.
I think Go would have been great with
* proper enums (I'll be fine if they have no wrapped data)
* sensible arrays & slices, without any magic and awkward syntax
* iterators
* result unwrapping shorthands
It has enums (sum types), tuples, a built-in Set[T], and good iterator methods. It has very nice type-inferred lambda functions (heavily inspired by Swift syntax)... lots of good stuff!
It has proper enums. Granted, it lacks an enum keyword, which seems to trip up many.
Perhaps what you are actually looking for is sum types? Given that you mentioned Rust, which weirdly[1] uses the enum keyword for sum types, this seems likely. Go does indeed lack that. Sum types are not enums, though.
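To make "proper enums without an enum keyword" concrete, here is the conventional Go pattern: a named type plus iota (Color is an invented example for this sketch):

```go
package main

import "fmt"

type Color int

const (
	Red Color = iota // 0
	Green            // 1
	Blue             // 2
)

// String makes Color values print by name rather than by number.
func (c Color) String() string {
	switch c {
	case Red:
		return "Red"
	case Green:
		return "Green"
	case Blue:
		return "Blue"
	}
	return "unknown"
}

func main() {
	fmt.Println(Green) // prints "Green"
}
```

Nothing here carries payload data, which is exactly the enum/sum-type distinction being drawn: these are named constants, not variants that wrap values.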
> sensible arrays & slices, without any magic and awkward syntax
Its arrays and slices are exactly the same as what you would find in C. So it is true that this confuses many coming from languages that wrap them in incredible amounts of magic, but the issue you point to here is actually a lack of magic. Any improvements to help those who are accustomed to magic would require adding magic, not taking it away.
> iterators
Is there something about them that you find lacking? They don't seem really any different than iterators in other languages that I can see, although I'll grant you that the anonymous function pattern is a bit unconventional. It is fine, though.
> result unwrapping shorthands
Go wants to add this, and has been trying to for years, but nobody has explained how to do it sensibly. There are all kinds of surface solutions that get 50% of the way there, but nobody wants to tackle the other 50%. You can't force someone to roll up their sleeves, I guess.
[1] Rust uses enums to generate the sum type tag as an implementation detail, so it's not quite as weird as it originally seems, but it is still rather strange that it would name the feature after an effectively hidden implementation detail instead of naming it for what the user is actually trying to accomplish. Most likely it started with proper enums, then realized that sum types would be better, and never thought to change the keyword to go along with that change.
But then again Swift did the same thing, so who knows? To be fair, its "enums" can degrade to proper enums in order to be compatible with Objective-C, so while not a very good reason, at least you can maybe find some kind of understanding in their thinking in that case. Rust, though...
Well, then they look awkward and give the feeling of syntax abuse.
> Its arrays and slices are exactly the same as how you would do it in C. So while it is true that this trips up many coming from languages that wrap them in incredible amounts of magic, the issue you point to here is actually a lack of magic.
In Rust, I see exactly what I work with -- a proper vector, a material thing, or a slice, which is a view into a vector. Also, a slice in Rust is always contiguous: it starts at element a and finishes at element b. I can remove an arbitrary element from the middle of a vector, but a slice is just a borrowed view, so through it I simply can't. I can push (append) only to a vector. I can insert into the middle of a vector -- and the doc warns me that it will need to shift every element after it. There's just zero magic.
In Go instead, how do I insert an element in the middle of an array? I see suggestions like `myarray = append(myarray[:123], append([]MyType{myElement}, myarray[123:]...)...)`. (Removing is like `myarray = append(myarray[:123], myarray[124:]...)`.)
What am I dealing with in this code, and what do I get afterwards? Is this some sophisticated slice that keeps 3 views, 2 into myarray and 1 into the anonymous one?
The docs on the internet suggest that slices in Go are exactly like in Rust, a contiguous sequence of an array's elements. If so, in my example of inserting (as well as deleting), there must be a lot happening under the hood.
So nothing to worry about?
> how do I insert an element in the middle of an array?
Same as in C. If the array allocation is large enough, you shift the right-hand side over by one position and then write the new value into the gap.
Something like:
replaceWith := 3
replaceAt := 2
array := [5]int{1, 2, 4, 5}
size := 4
for i := size; i > replaceAt; i-- {
    array[i] = array[i-1]
}
array[replaceAt] = replaceWith
fmt.Println(array) // Output: [1 2 3 4 5]
If the array is not large enough, well, you are out of luck. Just like C, arrays must be allocated with a fixed size defined at compile time.
> The docs on the internet suggest that slices in go are exactly like in Rust, a contiguous sequence of array's elements.
They're exactly like how you'd implement a slice in C:
struct slice {
    void *ptr;
    size_t len;
    size_t cap;
};
The only thing Go really adds, aside from making slice a built-in type, that you wouldn't find in C is the [:] syntax.
Which isn't exactly the same as Rust. Technically, a Rust slice looks something like:
struct slice {
    void *ptr;
    size_t len;
};
There is some obvious overlap, of course. It still has to run on the same computer at the end of the day. But there is enough magic in Rust to hide the details that I think you lose the nuance in that description. Go, on the other hand, picks up the exact same patterns one uses in C. So if you understand how you'd do it in C, you understand how you'd do it in Go.
Of course, that does mean operating a bit lower-level than some developers are used to. Go favours making expensive operations obvious, so that is a tradeoff it is willing to make. Regardless, if it were to make things more familiar to developers coming from the land of magic, that would require more magic, not less.
(with the caveat that anything else sharing `a` will be mangled, obvs.)
I don't follow. Information isn't agreeable or disagreeable, it just is.
> And I was right that you can't just concatenate different slices
That's right. You would have to physically move the capacitors in your RAM around (while remaining powered!) in order to do that. Given the limits of our current understanding of science, that's impossible.
> hence Go has to do a lot of work under the hood to do that.
Do what? You can't actually do that. It cannot be done at the hardware level. There is nothing a programming language can do to enable it.
All a programming language can do is what we earlier demonstrated for arrays. Or, since slices allow dynamic allocation, if the original slice is not large enough you can also copy the smaller slices into a new slice using a technique similar to the for loop above.
Go does offer a copy function and an append function that do the same kind of thing as the for loop above so you do not have to write the loop yourself every time. I guess that's what you think is magic? If you are suggesting that calling a function is magic, well, uh... You're not going to like this whole modern programming thing. Even Rust has functions, I'm afraid.
The Go standard library also provides a function for inserting into the middle of a slice, but, again, that's just a plain old boring function that adds some conditional logic around the use of the append and copy functions. It is really no different to how you'd write the code yourself. So, unless functions are still deemed magic...
I’m guessing the go language design went too far into “simplicity” at the expense of reasonableness.
For example, we can make a “simpler” language by not supporting multiplication, just use addition and write your own!
It could have been a builtin function, I suppose, but why not place it in the standard library? It's not a foundational operation. If you look at the implementation, you'll notice it simply rolls up several foundation operations into one function. That is exactly the kind of thing you'd expect to find in a standard library.
>3) To make matters worse, at work we have a linter that forbids merging a branch if you a) don't do if err != nil for every case b) have >20 for & if/else clauses. This makes you split functions in many pieces, turning your code into enterprise Java.
That is not a problem with Go.
It has iterators - https://pkg.go.dev/iter.
> It lacks simple things like check if a key exists in a map.
What? `value, keyExists := myMap[someKey]`
> Try removing an element from an array - you must rely on some magic and awkward syntax, and there's no clear explanation what actually happens under the hood (all docs just show you that a slice is a pointer to a piece of vector).
First of all, if you're removing elements from the middle of an array, you're using the wrong data structure 99% of the time. If you're doing that in a loop, you're hitting degenerate performance.
Second, https://pkg.go.dev/slices#Delete
If I don't need the value, I have to do awkward tricks with this construct, like `if _, keyExists := myMap[key]; keyExists { ... }`.
Also, you can do `value := myMap[someKey]`, and it will just return a value or nil.
Also, if the map has arrays as elements, it will magically create one, like Python's defaultdict.
This construct (assigning from map subscript) is pure magic, despite all the claims, that there's none in Golang.
...And also: I guess the idea was to make the language minimal and easy to learn, hence primitives have no methods on them. But after all, OOP in a limited form is there in Go, exactly like in Rust. And I don't see why custom structs do get methods, which are easier to use, while basic types don't and you have to go import packages.
Not that it's wrong. But it's not easier at all; the learning curve just moves somewhere else.
Err, no Go doesn't do that. No insertion happens unless you explicitly assign to the key.
my_map := make(map[int32][]int64)
val := my_map[123]
val = append(val, 456)
my_map[123] = val
fmt.Println(my_map)
prints `map[123:[456]]`
I guess it's convenient compared to Rust's strict approach with the entry API. But also, what I found is that Go's subscript returns nil in one case: when the value type is a nested map.
my_map := make(map[int32]map[int32]int64)
val := my_map[123]
val[456] = 789
my_map[123] = val
fmt.Println(my_map)
output: panic: assignment to entry in nil map
nil is equivalent to an empty slice, which is why the rest of the earlier code works as it does.
val, ok := my_map[123]
if ok {
    ...
}
https://go.dev/blog/maps#working-with-maps
Hard disagree. Go has its sharp corners, but they don’t even approach the complexity of Rust’s borrow checker alone, let alone all of the other complexity of the Rust ecosystem.
It might if your map is a `map[typeA]*typeB` but it definitely won't return a `nil` if your map is anything like `map[typeA]typeC` (where `typeC` is non-nillable; i.e. int, float, string, bool, rune, byte, time.Time, etc.) - you'll get a compile error: "mismatched types typeC and untyped nil".
Sorry but that’s not categorically true. Rather it’s highly scale-dependent. 90% of the slices in a typical code base will be 10s of elements long, in which case the memory overhead and mutation overhead are comparable to, say, a map. Oh and also it’s ordered and (in the above case) can fit within a cache line.
Umm... in Java you won't have to split functions here. Maybe you should study some modern Java?
p.s. upvoted, because some mob came and downvoted those who replied to me.
Other random things I hate:
- the first field in a struct, if unnamed, acts like extending a struct;
- private/public visibility of fields and methods based on capitalisation (it makes JSON mapping to a struct involve so much boilerplate);
- the default json lib being so inept with collections: an empty slice is serialised as null/absent (an empty list is not the absence of a list, WTF, but the new json lib promises to fix that json crap);
- the error type being special, and not working well with channels;
- lambda syntax is verbose;
- panics (especially the ones in libs);
- using internal proxy in companies for packages download is very fiddly, and sucks.
But, the tooling is pretty good and fast, I won’t lie. The language won’t win beauty contests for sure, but it mostly does the job. Still weak at building http servers (limited http server libs with good default headers, very limited openapi spec support).
I’m not 100% sure what you’re referring to here. Struct embedding maybe? (FWIW struct embedding is not limited to the first field in a struct, hence my confusion)
> - error type being special, and not working well with chanels;
I don’t think the error type is special? Do you mean that it is the only interface implicitly defined in each package?
> - using internal proxy in companies for packages download is very fiddly, and sucks.
Yes this one is annoying. I ended up writing an entire go module proxy just so that it works with bearer tokens. It’s crazy that Go only supports http basic auth for proxy authentication in 2025.
what?
From what I’ve experienced, if you need any fine-grained control over your data or allocations, precision on the type level, expressing nontrivial algorithms, Go is just too clumsy.
The more I read about how people use Go today and what issues people still have, the more I’m happy I picked Rust for almost everything. I even find it much more productive to write scripts in Rust than in Python or shell script. You just get it right very quickly and you don’t need to care about the idiosyncrasies of ancient tech debt that would otherwise creep into your new projects. And of course the outcome is way more maintainable.
Not saying that Rust hadn’t had its own warts, but most of them are made explicit. This is perhaps what I appreciate the most.
Intuitively, however, I still notice myself creating a new Python or shell script file when I need something quick, but then something doesn’t really work well the moment the logic gets a bit more complex, and I need to backtrack and refactor. With Rust, this hasn’t been an issue in my experience.
And intuitively, I still tend to think in Java terms when designing. It’s funny how it sticks for so long. And when writing some Java, I miss Go’s use-site interfaces and TypeScript’s structural typing, while I miss nominal typing in TypeScript. It’s just maybe that you get used to workarounds and idiosyncrasies in some system and then carry them over to another paradigms.
I do like Go’s value propositions, and lots of its warts have been sorted out, but I’m just not as productive in it for my use cases as I am with Rust. It just checks way more boxes with me.
It is so weird that they still claim this after they made the semantic change to the 3-clause for-loop in Go 1.22.
When a Go module is upgraded from 1.21- to 1.22+, there are some potential breaking cases which are hard to detect in time. https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...
Go toolchain 1.22 broke compatibility for sure. Even the core team admit it. https://go101.org/bugs/go-build-directive-not-work.html
And when running go scripts without go.mod files, the v1.22 toolchain doesn't respect the "//go:build go1.xx" directives: https://go101.org/bugs/go-build-directive-not-work.html
And consider that some people run go scripts even without the "//go:build go1.xx" directives ... (Please don't refute me. The Go toolchain allows this and never warns on this.)
Maybe by 18, or 21, the maturity finally settles in.
> With gopls v0.18.0, we began exploring automatic code modernizers. As Go evolves, every release brings new capabilities and new idioms; new and better ways to do things that Go programmers have been finding other ways to do. Go stands by its compatibility promise—the old way will continue to work in perpetuity—but nevertheless this creates a bifurcation between old idioms and new idioms. Modernizers are static analysis tools that recognize old idioms and suggest faster, more readable, more secure, more modern replacements, and do so with push-button reliability. What gofmt did for stylistic consistency, we hope modernizers can do for idiomatic consistency.
Modernizers seem like a way to make Large-Scale Changes (LSCs) more available to the masses. Google has internal tooling to support them [1], but now Go users get a limited form of opt-in LSC support whenever modernizers make a suggestion.
This combined with the ease of building CLI programs has been an absolute godsend in the past when I've had to quickly spin up CLI tools which use business logic code to fix things.
The problem is that many projects still pander to inferior 1980s-era `make` implementations, and as such rely heavily on the abominations that are autotools and cmake.
If you are distributing source, you distribute everything. Then, it only needs a compiler and libc. That vendored package is tested, and it works on your platform, so there's no guesswork.
Even in that specific niche I find using a programmatically generated ninja file to be a far superior experience to GNU make.
Let’s be real. C/C++ has nothing even approaching a sane way to do builds. It is all just degrees from slightly annoying to full-on dumpster fire.
As a counter-example, it seems like e.g. C++ is mostly concerned with making things possible and rarely with making them easy.
But yes, I worked - mainly Java (back then) with GWT, some Python, Sawzall, R, some other internal langs.
Go sees itself more as a total dev environment than just a language. There's integrated build tooling, package management, toolchain management, mono repo tools, testing, fuzzing, coverage, documentation, formatting, code analysis tools, performance tools...everything integrated in a single binary and it doesn't feel bloated at all.
You see a lot of modern runtimes and languages have learned from Go. Deno, Bun and even Rust took a lot of their cues from Go. It's understood now that you need a lot more than just a compiler/runtime to be useful. In languages that don't have that kind of tooling the community is trying to make them more Go-like, for example `uv` for Python.
Getting started in a Go project is ridiculously easy because of that integrated approach.
The second part "Running go install at the root ./.." is actually terrible and risky but, still, trivial with make (a - literally - 50 year old program) or shell or just whatever.
I get that the feelz are nice and all (just go $subcmd) but.. come on.
I mean, not exactly. Rust (or rather Cargo) requires you to declare binaries in your Cargo.toml, for example. It also, AIUI, requires a specific source layout - binaries need to be `src/main.rs` or live in `src/bin`. It's a lot more ceremony, and it has actively annoyed me whenever I tried out Rust.
> The second part "Running go install at the root ./.." is actually terrible and risky but, still, trivial with make (a - literally - 50 year old program) or shell or just whatever.
Again, no, it is not trivial. Using make requires you to write a Makefile. Using shell requires you to write a shell script.
I'm not saying any of this is prohibitive - or even that they should convince anyone to use Go - but it is just not true to say that other languages make this just as easy as Go.
It does not. Those are the defaults. You can configure something else if you wish. Most people just don’t bother, because there’s not really advantages to breaking with default locations 99% of the time.
You can't do that with python for instance. First, you need a python interpreter on the target machine, and on top of that you need the correct version of the interpreter. If yours is too old or not old enough, things might break. And then, you need to install all the dependencies. The correct version of each, as well. And they might not exist on your system, or conflict with some other lib you have on your target machine.
Same problem with any other language that needs a runtime, including Java and C# obviously.
C/C++ dependency management is a nightmare too.
Rust is slightly better, but there was no production-ready rust 16 years ago (or even 10 years ago).
I agree that static linking is great and that Python sucks, but I was trying to say I can, very easily, create new-py-program/app.py and stick a __main__ in it, or new-perl-program/app.pl, or my-new-c-file/main.c, etc.
For 2/3 of the above I can even make easy/single executable files go-style.
I don't understand your comment on magic comments. You don't need them to cross-compile a program. I was already doing that routinely 10 years ago. All I needed was `GOOS=linux GOARCH=386 go build myprog && scp myprog myserver:`
"Man I love Go, it's so simple, plenty fast, really easy to pick up, read, and write. I really love that it doesn't have dozens of esoteric features for my colleagues to big brain into the codebase"
"Oh yeah? Well Go sucks, it doesn't have dozens of esoteric features for me to big brain into the codebase"
Repeat
And I am wondering if Rust would be a good addition. Or rather go with Typescript to complement Python and Go.
Go is amazing. Switched from Python to Go 7 years ago. It's the reason our startup did well.
https://stream-wiki.notion.site/Stream-Go-10-Week-Backend-En...
nowadays devs are less inclined to pump out crappy code that ends up with some ops guy having to wake up in the middle of the night