Are there any real packages out there using these techniques?
I was unclear, I'm afraid. You can reorder the type parameters, it just changes which of them you need to specify: https://go.dev/play/p/oDIFl3fZiPl
The point is that you can only leave off elements from the end of the list, to have them automatically inferred.
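A hedged sketch of what this means in practice (function names made up): type arguments can only be omitted from the end of the list, so a parameter that can't be inferred from the arguments should come first.

```go
package main

import "fmt"

type Number interface{ ~int | ~int64 | ~float64 }

// To can't be inferred from the arguments, From can, so To comes first:
// callers write Cast[int](x) and leave From to inference.
func Cast[To, From Number](v From) To { return To(v) }

// With the order flipped, omitting the trailing To would also require
// omitting From, so callers must now spell out both type arguments.
func CastRev[From, To Number](v From) To { return To(v) }

func main() {
	fmt.Println(Cast[int](3.7))             // From=float64 inferred, prints 3
	fmt.Println(CastRev[float64, int](3.7)) // both must be written, prints 3
}
```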
> Are there any real packages out there using these techniques?
I think so far, the usage of generics for containers in Go is still relatively sparse, in public code. I think in part that is because the documentation of how to do that is relatively sparse. That is part of the motivation for the post, to have a bit of somewhat official documentation for these things, so they become more widely known.
The standard library is just starting to add generic containers: https://github.com/golang/go/issues/69559 And part of that is discussing how we want to do things like this: https://github.com/golang/go/issues/70471
That being said, I have used the pointer receiver thing in my dayjob. One example is protobuf. We have a generic helper to set a protobuf enum from the environment. Because of how the API was designed, that required a pointer receiver constraint.
The article mentions using the function version to implement all others, but also that the method version would be optimized better.
Would the compiler be able to inline MethodTree's compare even though it's passed in as a function variable to node.insert?
My larger point, though, is that with the `func` field the compiler can't optimize things even in principle. A user could always reassign this field (if nothing else, using `*t = *new(FuncTree)`). So the compiler has to treat it as a dynamic call categorically. If the `func` is passed as a function argument, then at least in principle the compiler can prove that this function can't get modified during the call, so it can make optimization decisions based on what is being passed. For example, even without inlining, a future compiler might decide to compile two versions of `node.insert`: one with general dynamic calls and one specialized for a particular static function.
My philosophy when it comes to API decisions that impact performance is not to make them too dependent on what the compiler is doing today, but just to take care that there is enough information that the compiler can do the optimization in principle - which means it either will do it today, or we can make it smarter in the future, if it becomes a problem.
Coming from C#, whose generics are first class, I struggled to obtain any real value from Go's generics. It's not possible to execute on ideas that fit nicely in your head, and you instead end up fighting tooth and nail to wrangle what feels like an afterthought into something concrete that fits in your head.
Generics work well as a replacement for liberally using interface{} everywhere, making programs more readable, but at the class and interface level I tend to avoid them, as I find I don't really understand what is going on. I just needed things to work so I could move on.
This kind of argument comes up every time a new C# language version rolls out - as if it's a breaking change and now everyone is going to be forced to refactor for it.
The only other way I can read this is in terms of wishing others would use tools in the way you prefer, which is clearly a waste of energy.
With Go, at least initially, it was an addition, not a core aspect of it - any code written in Go before generics will still work. Granted, I only have one real project but I never had a use case for generics - the built-in generic structures (map and arrays/slices) were enough for me. Maybe when you have code that works with the `interface{}` a lot (e.g. unknown JSON data) you'll have a use case for it.
I think in those cases, generics are specifically kind of pointless. Because you will inherently need to use `reflect` anyways. Generics are only helpful if you do know things about your types.
Generics are most useful for people who write special-purpose data structures - and hence for people who need such special-purpose data structures but don't want to implement them themselves. The prototypical example is a lock-free map, which you only need if you really need to solve performance problems, and which specific kind of lock-free map you need depends very heavily on your workload. `sync.Map` is famously only really useful for mostly write-once caches, because that's what it's optimized for.
The vast majority of people don't need such special-purpose data structures and can get by just fine with a `map` and a mutex. But Go has reached the level of adoption where it can only really grow further if it can also address the kinds of use cases which do need something more specific.
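The "map and a mutex" baseline mentioned above, as a tiny generic wrapper (a sketch, not a real library type):

```go
package main

import (
	"fmt"
	"sync"
)

// LockedMap is the simple alternative to a special-purpose concurrent
// map: a plain map guarded by a mutex.
type LockedMap[K comparable, V any] struct {
	mu sync.Mutex
	m  map[K]V
}

func (l *LockedMap[K, V]) Store(k K, v V) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.m == nil {
		l.m = make(map[K]V)
	}
	l.m[k] = v
}

func (l *LockedMap[K, V]) Load(k K) (V, bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	v, ok := l.m[k]
	return v, ok
}

func main() {
	var m LockedMap[string, int]
	m.Store("answer", 42)
	v, ok := m.Load("answer")
	fmt.Println(v, ok) // 42 true
}
```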
Funnily enough, Java didn't use to have generics. I wonder whether it didn't feel like Java back then?
Welcome to civilization, golang. Were there ever any language developers with more hubris?
It is a technical solution for a people problem. It is better to guide and to mentor people in designing the right abstractions. What we should learn from this experiment is that this is the wrong approach.
Nah, it was just the wrong solution.
People problems are basically intractable in the grand scheme of things. Whenever you can turn a people problem into a technical problem, that's an opportunity for progress.
Imagine telling everyone to be professional and to be careful not to break the program when they edit the code. Sounds like a big people problem!
Instead, we give everyone their own copy to muck around with (instead of a shared folder), and we only allow changes to be integrated into the 'master copy' if they pass automated tests.
A good manager and really motivated and professional workers can help cope with people problems. But there's a limit to their ability. So the more we can offload to technological solutions, the more 'professionalism' (for lack of a better word) we can spare for other tasks that aren't feasible to solve via technology, yet.
And I agree that not all technical solutions work! You need to experiment, and make judgement calls.
People keep fixing the unfix-able rather than moving on. I see the same happening with Python.
One of the key benefits of Go, at least for me, was not having to think about any of this at all ever.
Whenever I touch generics, I find myself engrossed in the possibility of cleverly implementing something. Hours will pass as I try to solve the fun puzzle of how to do the thing using generics, rather than just solve the problem at hand.
I imagine that people who prefer code-generation just like the idea of it having a higher skill/investment floor to add it to a project so most projects instinctively avoid it.
While people who prefer generics jump at it even when it is not necessary or doesn't bring a lot of benefits.
But those are human problems, not so much shortcomings of those two techniques themselves.
Go's generics are a crippled implementation - they don't really deserve the feature title of 'generics'. (It's like saying you support regex, but don't support groups and repeat operators, and you can only match against special types of strings.)
The main difference between Go's generics and C++ templates (and where some of the restrictions come from) is that Go insists that you can type-check both the body of a generic function and the call to it, without one having to know about the other. My understanding is that with C++ templates (even including concepts), the type checking can only happen at the call site, because you need to know the actual type arguments used, regardless of what the constraints might say.
And this decision leads to most of the complaints I've heard about C++ generics. The long compile times, the verbose error messages and the hard to debug type-errors.
So, if you prefer C++ templates, that's fair enough. But the limitations are there to address complaints many other people had about C++ templates specifically. And that seems a reasonable decision to me, as well.
Look at Haskell type classes or Rust's traits for some classic examples of how to 'type' your generics. (And compare to what Go and C++ are doing.)
"Compile time" is not the right distinction. This is about "instantiation time". Go's implementation specifically allows type-checking the body and the call separately. That is, if you import a third-party package and call a generic function, all the type checker needs to look at to prove correctness is the signature of the function. It can ignore the body.
This is especially relevant, if you call a generic function from a generic function. For C++, proving that such a call is correct is, in general, NP-complete (it directly maps to the SAT problem, you need to prove that every solution to one arbitrary boolean formula satisfies a different boolean formula). So the designers made the conscious decision to just not do that, instead delaying that check to the point at which the concrete type used to instantiate the generic function is known (because checking that a specific assignment satisfies a boolean formula is trivial). But that also means that you have to (recursively) type-check a generic function again and again for every type argument provided, which can drive up compilation time.
A demonstration is this program, which makes gcc consume a functionally infinite amount of memory and time: https://godbolt.org/z/crK89TW9G (clang is a little bit more clever, but can ultimately be defeated using a similar mechanism).
Avoiding these problems is a specific cause for a lot of the limitations with Go's generics.
> Look at Haskell type classes or Rust's traits for some classic examples of how to 'type' your generics. (And compare to what Go and C++ are doing.)
Yes, those are different beasts altogether, and the differences between what Go is doing and what Haskell and Rust are doing require different explanations.
Though it's illustrative, because it turns out Rust also intentionally limited its generics implementation, to solve the kinds of performance problems Go is worried about. Specifically, Rust has the concept of "dyn compatibility" (formerly "object safety"), which exists because otherwise Rust's goal of zero-cost abstractions would be broken. Haskell doesn't have this problem and will happily allow you to use the less efficient but more powerful types.
(All of this should have the caveat that I'm not an expert in or even a user of any of these languages. It's half-knowledge and I might be wrong or things might have changed since I last looked)
https://groups.google.com/g/golang-nuts/c/hJHCAaiL0so/m/kG3B...
>Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods (http://en.wikipedia.org/wiki/Cuisenaire_rods). I grew up and today I use monochromatic numerals.
The language creator really hates it (and most modern editor tooling).
Paraphrasing, but if you need syntax highlighting to comprehend code, maybe your code is too complicated.
Not choosing to use syntax highlighting is just wrong on every level. It has exactly zero drawbacks.
But this is entirely up to the person reading. Highlighting may make things easier for you, but for someone else it may not.
Syntax highlighting studies usually don't report on whether some subjects perform worse with syntax highlighting - usually only that they as a group perform better. But even with that evidence, it should be obvious that syntax highlighting should be either on for everyone, or on initially and off as an option for the rare individual.
On a more serious note: somehow nature chose to let us see colors, and this sense has been immensely useful to our existence and pleasure. Maybe Go could learn a thing or two from nature?
Might release an extension just to spite him
Ultimately he's fine with _some_ syntax highlighting, especially the kind that uses whitespace to highlight parts of the syntax, as evidenced by the existence of `go fmt`. He just hasn't taken into consideration that colour is just one typographical tool among many, including the use of whitespace, as well as italics, bold, size, typeface, etc. Switching inks has been somewhat tedious in printing, but these days most publications seem to support it just fine, and obsessive note-takers also use various pens and highlighters in different colours. For the rest of us it's mostly about the toil of switching pens that's holding us back I think, rather than some real preference for monochromatic notes. We generally have eyes that can discern colours and brains that can process that signal in parallel to other stuff, which along with our innate selective attention means we can filter out the background or have our attention drawn to stuff like red lights. Intentionally not using that built-in hardware feature is ultimately just making stuff harder on oneself with no particular benefit.
There's also some google groups quote from him about iterators which is also pretty funny given how modern Go uses them, but I don't have the link at hand. Several google groups quotes from the original language creators (not just Pike) tell an unfortunate story about how the language came to be the way it is.
It is a pity that the Go generics designers never expressed an intention to unify custom generics and the built-in generics.
If they're going to be adding features to the language, albeit at a slower pace than Java/C#, what's the point really? On a long enough timeline Go is going to be indistinguishable from these more feature-rich languages.
C is a C-like language with a mostly frozen feature set. (If you want something less insane than C, there's also Pascal.)
C11 added generics (`_Generic`), multi-threading, Unicode support, and static assertions. It broke compatibility with earlier versions by removing the `gets` function.
C23 added `nullptr` (a fairly fundamental change), the `typeof` operator, and the `auto` keyword for type inference - with plenty of breakage from the new keywords. Another breaking change: empty parentheses `()` now mean a function taking no arguments.
So lots of new features and breaking changes with every new iteration. Thankfully, compilers support the sane standards, so you can just use `-ansi` and live a happy life, I guess...
What a shitshow. Seems like Go's designers didn't know about interfaces, generics, and iterators when they decided to make a language...
Even worse, it was also promoted as taking backward compatibility seriously. But Go 1.22 broke backward compatibility badly ([3] [1]). Despite this, the Go 1.22 release notes still claim "As always, the release maintains the Go 1 promise of compatibility".
[1]: https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...
[2]: https://go101.org/blog/2025-03-15-some-facts-about-iterators...
[3]: https://go101.org/bugs/go-build-directive-not-work.html
And the change makers have shown no interest in fixing the problems caused by the changes:
* https://github.com/golang/go/issues/66070#issuecomment-19816...
* https://github.com/golang/go/issues/71830
* https://github.com/spq/pkappa2/issues/238
Especially in the era of AI assistants, the downside of writing out explicit types and repetition matters very little, while the upside of avoiding all this complexity is unmeasurable.
This is an extreme example and I hardly think anyone writing Go code on a daily basis will need anything close to this. I haven't, and I have not seen any lib that does anything remotely similar to that. To be honest, hardly anything beyond the stdlib will need to handle generics. They aren't widely used but are quite useful when needed, which I think is the sweet spot for generics.
I don't share the same animosity against generics. I like the recent language addition to the stdlib and am also waiting for them to add some sugar to reduce the boilerplate in error handling.
> Especially in the era of AI assistants, the downside of writing out explicit types and repetition matters very little
Yeah, let's design languages based on the capabilities of code assistance /s
> Yeah, let's design languages based on the capabilities of code assistance /s
I mean, that _is_ essentially the Go team's take these days, c.f. their previous blog post about error handling: https://go.dev/blog/error-syntax
> Writing repeated error checks can be tedious, but today’s IDEs provide powerful, even LLM-assisted code completion. Writing basic error checks is straightforward for these tools. The verbosity is most obvious when reading code, but tools might help here as well; for instance an IDE with a Go language setting could provide a toggle switch to hide error handling code.
Personally I expect that getting an LLM to write error handling and then have the IDE hide it sounds like a recipe for surprises, but I guess things work out differently if the goal is to have hordes of the cheapest possible juniors kitted out with tools that let them produce the most amount of code per dollar.
- an LLM can help you write a boilerplate `if err != nil { return fmt.Errorf(...) }` that actually matches the conventions for the code base you're in;
- your IDE can "hide" those additional lines of code to reduce cognitive load while reading code;
- it's actually useful that those "hidden" lines are there when you're debugging and want a place to add a breakpoint, or some additional logging, etc.
This is very different from saying you should have an LLM auto-generate half a dozen identical copies of sync.Map, container.List, my.Set or whatever.Tree based on the types you want to put in your container.
I'm actually fine with an LLM as a more powerful auto complete, that generates half a dozen lines of code at a time (or slightly tweaks code I paste) based on context.
I would have a problem with a LLM generating thousands of lines of code based on a prompt "this, but for ints" and then it's a fork of the original, with god knows how many subtle details lost, and a duplicated maintenance burden going forward.
It is not "essentially their take". It is one of their points (a weak one, for what my opinion is worth), but far from their main point. Their main point in the text is the same one they always make in these cases:
> Coming up with a new syntax idea for error handling is cheap; hence the proliferation of a multitude of proposals from the community. Coming up with a good solution that holds up to scrutiny: not so much.
> the goal is to have hordes of the cheapest possible juniors kitted out with tools that let them produce the most amount of code per dollar
I share the same concern here. I don't have a solid opinion on how that will turn out but I'm not too optimistic.
No, this is not true for Go, at least for the current Go generics.
At runtime, Go generics can't be faster than generated repetitive code, and generic code is often a little slower, because the Go compiler sometimes treats values of type-parameter types like interface values, even when they are not.
> Handling interface{} was just painful.
Go generics are often helpless here. Most use cases of interface{} are for reflection purposes and can't be re-implemented with Go generics. Some non-reflection use cases also can't be re-implemented with Go generics, because Go doesn't support type unions outside of constraints.
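The union restriction in question: an interface with union terms is legal as a constraint, but not as an ordinary value type (a minimal sketch):

```go
package main

import "fmt"

// IntOrString can constrain a type parameter...
type IntOrString interface{ int | string }

func describe[T IntOrString](v T) string { return fmt.Sprintf("%T", v) }

// ...but it cannot be used as a variable's type; this would not compile:
//
//	var x IntOrString // error: interface contains type constraints
func main() {
	fmt.Println(describe(42), describe("hi")) // int string
}
```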
That’s not the main reason. You can have a library by author X that provides a container type Heap[T] and you can use it with your type T which is unknown by X and requires no coordination. If the proto-generic maps and slices did not exist in Go it would not be a useful language at all.
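A minimal sketch of such a coordination-free container (a hypothetical library type, not an actual package): the author of Heap[T] needs nothing from the caller except a comparison function.

```go
package main

import "fmt"

// Heap is a binary min-heap over any element type; the ordering comes
// from the less function supplied by the caller.
type Heap[T any] struct {
	less  func(a, b T) bool
	items []T
}

func NewHeap[T any](less func(a, b T) bool) *Heap[T] {
	return &Heap[T]{less: less}
}

func (h *Heap[T]) Len() int { return len(h.items) }

func (h *Heap[T]) Push(v T) {
	h.items = append(h.items, v)
	for i := len(h.items) - 1; i > 0; { // sift up
		parent := (i - 1) / 2
		if !h.less(h.items[i], h.items[parent]) {
			break
		}
		h.items[i], h.items[parent] = h.items[parent], h.items[i]
		i = parent
	}
}

func (h *Heap[T]) Pop() T {
	top := h.items[0]
	last := len(h.items) - 1
	h.items[0] = h.items[last]
	h.items = h.items[:last]
	for i := 0; ; { // sift down
		smallest, l, r := i, 2*i+1, 2*i+2
		if l < len(h.items) && h.less(h.items[l], h.items[smallest]) {
			smallest = l
		}
		if r < len(h.items) && h.less(h.items[r], h.items[smallest]) {
			smallest = r
		}
		if smallest == i {
			break
		}
		h.items[i], h.items[smallest] = h.items[smallest], h.items[i]
		i = smallest
	}
	return top
}

func main() {
	h := NewHeap(func(a, b int) bool { return a < b })
	for _, v := range []int{3, 1, 2} {
		h.Push(v)
	}
	for h.Len() > 0 {
		fmt.Print(h.Pop(), " ") // 1 2 3
	}
	fmt.Println()
}
```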
This pain point was glaring in Sort and Heap in the standard library. The argument was whether the complexity was worth it and whether compile times could remain so fast. Even the improved expressivity isn’t obviously good (famously, the removal of goto was good because it reduced expressivity).
Just stating the arguments, I still haven’t made up my mind whether these limited generics was the right call. Leaning yes, but it’s important to be humble. It takes a lot of time to evaluate second order effects.
> Especially in the era of AI assistants
As an aside, I really don’t appreciate this argument without extremely strong merits, which we can’t possibly have. Not everyone is using AI assistants, nor do people use it in the same way. But most importantly it changes very little since code is not bottlenecked by writing anyway. Code is read more often than written, and still needs to be reviewed, understood and maintained.
> Code is read more often than written, and still needs to be reviewed, understood and maintained.
Which takes us back to the points above. AI is really good at generating repetitive patterns, like plain types, or code that implements a certain interface. If you reduce the cost of creating the verbose code [at write time] we can all enjoy the benefit of reduced complexity [at read time] without resorting to generics.
Also not saying this as an absolute truth; it is more nuanced than that, for sure. But in the big picture, generics reduce the amount of code you have to write, at the cost of increased layers of abstraction, steering away from the simplicity that made Go popular in the first place. Overall I'm not convinced it was a net positive, yet.
Not obvious???? Go language designers and programmers are living in another world
Disagree. IMHO, this idea is the root cause of why Go generics are so complicated yet so restrictive at the same time. And it introduces significant challenges in implementation and design: https://go101.org/generics/888-the-status-quo-of-go-custom-g...