Which can be out of date and are often missing. Might as well use type hints that can be statically checked.
In C++, a variable might be defined in a header or in a parent class somewhere else, and there's no indication of where it came from.
Doing otherwise is just asking for prod incidents.
I'd much rather just work in a statically typed language from the start.
To be clear, I'm not opposed to type hints. I use them everywhere, especially in function signatures. But the primary advantage of Python is speed (or at least perceived speed, but that's a separate conversation). It is so popular specifically because you don't have to worry about type checking and can just move. Which is one of the many reasons it's great for prototypes and fucking terrible in production. You turn on strict type checking in a linter and all that goes away.
Worse, Python was not built with this workflow in mind. So with strict typing on, when types start to get complicated, you have to jump through all kinds of weird hoops to make the checker happy. When I'm writing code just to make a linter shut up, something is seriously wrong.
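A small sketch of what that hoop-jumping often looks like (the `first_name` helper is invented for illustration): the value is known to be a string, but the checker can't prove it, so a `cast` gets written purely to satisfy the tool.

```python
from typing import cast

def first_name(record: dict[str, object]) -> str:
    raw = record.get("name", "")
    # We "know" raw is a str here, but the checker only sees `object`,
    # so this cast exists solely to make the linter happy.
    return cast(str, raw)
```

`cast` does nothing at runtime; it only silences the checker, which is exactly the complaint above.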
Trying to add typing to a dynamic language is, in my opinion, almost always a bad idea. Either do what TypeScript did and write a language that compiles down to the dynamic one, or just leave it dynamic.
And if you want types just use a typed language. In a production setting, working with multiple developers, I would take literally almost any statically typed language over Python.
In my view it's always a mistake to try to tack static typing on top of a dynamic language. I think TS's approach is better than Python's, but still not nearly as good as just using a statically typed language.
The problem we are talking about in both Python and TS comes from the fact that they are (or compile down to) dynamic languages. These aren't issues in statically typed languages... because the code just won't compile if it's wrong, and you don't have to worry about getting data from an untyped library.
I don't know a lot about Zod, but I believe the problem you are referring to is more about JavaScript than TS. JavaScript does a LOT of funky stuff at runtime; Python, thank God, actually enforces some sane type rules at runtime.
My point was not about how these two function at runtime. My point was that if you want to tack static typing onto a dynamic language, TypeScript's approach is the better one, but even it can't fix the underlying issues with JS.
You could take a similar approach in Python. We could make a language called Tython that is statically typed and then compiles down to Python. You eliminate an entire class of bugs at compile time, get a far more reliable experience than the current weirdness with gradual typing and linters, and you still get Python's runtime type information to deal with things like interop with existing Python code.
In TypeScript you would never have something like typing.TYPE_CHECKING to detect whether a type checker is running, because type hints can't break JavaScript code. That can happen in Python, where you can end up with cyclic imports just from adding types.
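For anyone unfamiliar, the Python pattern being referenced looks roughly like this (the `myapp.models` module is hypothetical): the import exists only for the checker, so the runtime never sees the cycle.

```python
from __future__ import annotations  # annotations become strings, never evaluated

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only imported while the type checker runs; at runtime this branch
    # is skipped, which is how the circular import gets dodged.
    from myapp.models import User  # hypothetical module

def greet(user: User) -> str:
    return f"hello {user}"
```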
These systems are part of the core banking platform for a bank so I’d rather some initial developer friction over runtime incidents.
And I say initial friction because although developers are sometimes resistant to it initially, I’ve yet to meet one who doesn’t come to appreciate the benefits over the course of working on our system.
Different projects have different requirements, so YMMV but for the ones I’m working on type hints are an essential part of ensuring system reliability.
But it's a fair point. If you truly have no option, it's better than absolutely nothing. I really wish people would stop writing mission-critical production code in Python.
For example I work on a python codebase shared by 300+ engineers for a popular unicorn. Typing is an extremely important part of enforcing our contracts between teams within the same repository. For better or for worse, python will likely remain the primary language of the company stack.
Should the founder have chosen a better language during their pre-revenue days? Maybe, but at the same time I think the founder chose wisely -- they just needed something that was _quick_ (Django) and capable of slapping features / ecosystem packages on top of to get the job done.
For every successful company built on a shaky dynamic language, there are probably 10x more companies that failed on top of a perfect and scalable stack using static languages.
However for new projects I find that I'd much rather pick technologies that start me off with a sanity floor which is higher than Python's sanity ceiling. At this point I don't want to touch a dynamically typed language ever again.
Whatever the solution is, it doesn’t include giving up on Python typings.
Wouldn't that just be `object` in Python?
There was a proposal[3] for an unknown type in the Python typing repository, but it was rejected on the grounds that `object` is close enough.
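The practical difference shows up in how much narrowing the checker demands; a minimal sketch of `object` as the "unknown"-like type versus `Any`:

```python
from typing import Any

def handle_object(x: object) -> int:
    # A checker rejects `x + 1` here: `object` has no __add__.
    # You must narrow first, which is exactly the "unknown" behavior.
    if isinstance(x, int):
        return x + 1
    return 0

def handle_any(x: Any) -> int:
    # Any permits everything silently; mistakes surface only at runtime.
    return x + 1
```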
[1]: https://mypy.readthedocs.io/en/stable/error_code_list.html#c...
[2]: https://mypy.readthedocs.io/en/stable/error_code_list.html#c...
It's "close enough" to a usable type system that it's worth using, but it's full of edge cases and of situations where the designers decided it would be easier to force programmers to make reality match the type system rather than make the type system match reality.
No wonder a lot of people in the comments here say they don't use it...
You can live with the "close enough" if you're writing a brand new greenfield project and you prevent anyone from ever checking in code mypy doesn't like and also don't use any libraries that mypy doesn't like (and also don't make web requests to APIs that return dictionary data that mypy doesn't like)
Retrofitting an existing project however is like eating glass.
I believe Python's own documentation also recommends the shorthand syntax over `Union`. Linters like Pylint and Ruff also warn if you use the imported `Union`/`Optional` types. The latter even auto-fixes it for you by switching to the shorthand syntax.
I'm aware this is just a typo but since a lot of the Python I write is in connection with Airflow I'm now in search of a way to embrace duct typing.
If I have a function that takes an int, and I write down the requirement, why should a JIT have to learn independently of what I wrote down that the input is an int?
I get that it's this way because of how these languages evolved, but it doesn't have to stay this way.
The type hints proved to be useful on their own so the project moved past what was useful for that purpose, but a new JIT (such as the one the upcoming CPython 3.14 lays the groundwork for) could certainly use them.
The extra typing clarification in python makes the code harder to read. I liked python because it was easy to do something quickly and without that cognitive overhead. Type hints, and they feel like they're just hints, don't yield enough of a benefit for me to really embrace them yet.
Perhaps that's just because I don't use advanced features of IDEs. But then I am getting old :P
EDIT: also, this massively depends on what you're doing with the language! I don't have huge customer workloads to consider any longer..!
I use vanilla vim (no plugins) for my editor, and still consider type hints essential.
They don't. And cannot, for compatibility reasons. Aside from setting some dunders on certain objects (which are entirely irrelevant unless you're doing some crazy metaprogramming thing), type annotations have no effect on the code at runtime. The Python runtime will happily bytecode-compile and execute code with incorrect type annotations, and a type-checking tool really can't do anything to prevent that.
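A tiny demonstration of that point: CPython stores the annotations but never checks them.

```python
def double(x: int) -> int:
    return x * 2

# A type checker flags this call; the runtime runs it happily
# (string repetition, not arithmetic):
result = double("ha")

# The hints are just metadata hanging off the function object:
metadata = double.__annotations__
```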
My understanding is that currently python can collect type data in test runs and use it to inform the jit during following executions
I'd forgotten about that. Now that you mention it, my understanding is that this is actually the plan.
It’s funny, because for me it’s quite the opposite: I find myself reading Python more easily when there are type annotations.
One caveat: for that to happen, I need to know that type checking is actually in place, or else my brain dismisses the annotations as possible noise.
I guess this is why in Julia or Rust or C you have this stronger feeling that types are looking after you.
And Python always was rather strongly typed, so you anyway had to consider the types. Now you get notes. Which often do help.
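That runtime strictness is easy to demonstrate; where JavaScript coerces, Python raises:

```python
# Strong typing at runtime: no silent coercion between unrelated types.
try:
    outcome = "1" + 1   # JavaScript would coerce this to "11"
except TypeError as exc:
    outcome = f"refused: {exc}"
```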
It depends what you mean by "read". If you literally mean you're doing a weird Python poetry night then sure they're sort of "extra stuff" that gets in the way of your reading of `fib`.
But most people think of "reading code" and reading and understanding code, and in that case they definitely make it easier.
'Twas brillig: Adjective and the slithy toves: Noun
Did gyre: Verb and gimble: Verb in the wabe: Noun

In my experience I have seen far too much Python code like
`def func(data, *args, **kwargs)`
with no documentation and I have no clue wtf it's doing. Now I am basically all in on type hints (except cases where it's impossible like pandas).
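For contrast, even a lightly annotated version of that signature (names and types invented purely for illustration) tells the reader most of what they need:

```python
from typing import Any

def func(data: list[dict[str, Any]], *args: str, **kwargs: Any) -> int:
    """Hypothetical example: the hints alone document the contract."""
    return len(data) + len(args)
```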
Dudes, it's literally just worse compilation with extra steps.
There's a world of difference between:
> I've been using a different type checker and I like it, you should try it
And
> I'd like to switch our project to a different compiler
The former makes for a more nimble ecosystem.
Turns out they just didn't know any better?
Python has a great experience for a bunch of tasks and with typing you get the developer experience and reliability as well.
Python’s 3 traditional weak spots, which almost all statically-typed languages do better: performance, parallelism and deployment.
It is a great choice though for many problems where performance isn't critical (or you can hand the hard work off to a non-Python library like Numpy or Torch). Typing just makes it even better.
For any even medium sized project or anything where you work with other developers a statically typed language is always going to be better. We slapped a bunch of crap on Python to make it tolerable, but nothing more.
Yeah it's workable, and better than nothing. But it's not better than having an actual static type system.
1. It's optional. Even if you get your team on board you are inevitably going to have to work with libraries that don't use type hints
2. It's inconsistent, which makes sense given that it's tacked onto a language never intended for it.
3. I have seen some truly goofy shit written to make the linter happy in more complex situations.
I honestly think everything that's been done to try to make Python more sane outside of scripting or small projects (and the same applies to JS and TS) is a net negative. Yes, it has made those specific ecosystems better and more useful, but it's removed the incentive to move to better technologies/languages actually made to do the job.
The comment about TypeScript was really about JavaScript. It's a patch on top of JavaScript, which is a shit show and should have been replaced before it ended up forming the backbone of the internet.
Python, typed or otherwise, isn't good for anything past prototyping, piping a bunch of machine learning libraries together, or maybe a small project. The minute the project gets large or starts to move towards actual production software Python should be dropped.
WASM is obviously an option these days as well, but my understanding is that it's a long way from taking over.
But it's really valuable documentation! Knowing what types are expected and returned just by looking at a function signature is super useful.
https://old.reddit.com/r/Python/comments/10zdidm/why_type_hi...
Edit: Yes, one can sometimes go with Any, depending on the linter setup, but that's missing the point, isn't it?
That entire Reddit post is a clueless expert beginner rant about something they don't really understand, unfortunate that it's survived as long as it has or that anyone is taking it as any sort of authoritative argument just because it's long.
That's not the issue the reddit post is raising. The reddit post is pointing out that what a "type" is is not as simple as it looks. Particularly in a language like Python where user-defined types proliferate, and can add dunder methods that affect statements that involve built-in operations. "Just use Any" doesn't solve any of those problems.
> just use Any.
All the above said: not putting a type in at all is even easier than using Any, and is semantically equivalent.
But the entire post is built upon the premise that accepting all types is good API design. Which it isn't, at all.
Was Tim Peters also wrong way back in the day when he counseled Guido van Rossum to allow floats to be added to integers without a cast, like other popular languages?
My suggestion -- don't rely on magic methods.
So no e.g. numpy or torch then?
When I've used Python's type checkers, I have more the feeling that the goal is to create a new, typed subset of the language, that is less capable but also easier to apply types to. Then anything that falls outside that subset gets `Any` applied to it and that's good enough. The problem I find with that is that `Any` is incredibly infective - as soon as it shows up somewhere in a program, it's very difficult to prevent it from leaking all over the place, meaning you're often back in the same place you were before you added types, but now with the added nuisance of a bunch of types as documentation that you can't trust.
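A minimal sketch of that infectiveness (the config contents are invented): one `Any` return and every downstream value is unchecked.

```python
from typing import Any

def load_config() -> Any:
    # Stand-in for e.g. json.load(), which also returns Any.
    return {"retries": "3"}

config = load_config()        # Any
retries = config["retries"]   # still Any; actually a str
total = retries * 2           # still Any; "33" at runtime, no warning
```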
Regardless, none of that bears on the original `slow_add` example from the Reddit page. The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way. Because the rule is something like "anything that says it can be added according to the protocol — which in practical terms is probably any two roughly-numeric types except for the exceptions, and also most container types but only with other instances of the same type, and also some third-party things that represent more advanced mathematical constructs where it makes sense".
And saying "don't rely on magic methods" does precisely nothing about the fact that people want the + symbol in their code to work this way. It does suggest that `slow_add` is a bad thing to have in an API (although that was already fairly obvious). But in general you do get these issues cropping up.
Dynamic typing has its place, and many people really like it, myself included. Type inference (as in the Haskell family) solves the noise problem (for those who consider it a problem rather than something useful) and is elegant in itself, but just not the strictly superior thing that its advocates make it out to be. People still use Lisp family languages, and for good reason.
But maybe Steve Yegge would make the point better.
There's nothing pedantic about it. That's how Python works, and getting into the nuts and bolts of how Python works is precisely why the linked article makes type hinting appear so difficult.
> The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way.
As the post explores, your intuition is also incorrect. For example, as the author discovers in the process, addition via __add__/__radd__ is not addition in the algebraic field sense. There is no guarantee that adding types T + T will yield a T. Or that both operands are of the same type at all, as would be the case with "adding" a string and int. Or that A + B == B + A. We can't rely on intuition for type systems.
No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense. When language designers are revisiting the decision to use the `+` symbol to represent string concatenation, how many of them are thinking about algebraic fields, seriously?
And all of this is exactly why you can't just say that it's universally bad API design to "accept all types". Because the alternative may entail rejecting types for no good reason. Again, dynamically typed languages exist for a reason and have persisted for a reason (and Python in particular has claimed the market share it has for a reason) and are not just some strictly inferior thing.
Note, though, that that's not really the API design choice that's at stake here. Python will still throw an exception at runtime if you use the + operator between objects that don't support being added together. So the API design choice is between that error showing up as a runtime exception, vs. showing up as flagged by the type checker prior to runtime.
Or, to put it another way, the API design choice is whether or not to insist that your language provide explicit type definitions (or at least a way to express them) for every single interface it supports, even implicit ones like the + operator, and even given that user code can redefine such interfaces using magic methods. Python's API design choice is to not care, even with its type hinting system--i.e., to accept that there will be interface definitions that simply can't be captured using the type hinting system. I personally am fine with that choice, but it is a design choice that language users should be aware of.
Huh? There's no restriction in Python's type system that says `+` has to "make sense".
    import requests

    class Smoothie:
        def __init__(self, fruits):
            self.fruits = fruits

        def __repr__(self):
            return " and ".join(self.fruits) + " smoothie"

    class Fruit:
        def __init__(self, name):
            self._name = name

        def __add__(self, other):
            if isinstance(other, Fruit):
                return Smoothie([self._name, other._name])
            return requests.get("https://google.com")

    if __name__ == "__main__":
        print(Fruit("banana") + Fruit("mango"))
        print(Fruit("banana") + 123)

> banana and mango smoothie
> <Response [200]>
So we have Fruit + Fruit = Smoothie. Overly cute, but sensible from a CS101 OOP definition and potentially code someone might encounter in the real world, and demonstrates how not all T + T -> T. And we have Fruit + number = requests.Response. Complete nonsense, but totally valid in Python. If you're writing a generic method `slow_add` that needs to support `a + b` for any two types -- yes, you have to support this nonsense.
There's no restriction in any language that all code has to make sense. You can write nonsense in any language. Sure, particular types of nonsense might be easier to write in Python than in some other languages, but nonsense is still nonsense.
And it's also nonsense to argue that an API designer has to support whatever nonsense a coder can dream up just because it's valid code in that language. The GP post was not talking about algebraic fields or mathematical definitions or what nonsense the language permits, but about API design. The basic issue is that Python's extremely dynamic nature makes some reasonable API designs basically inexpressible. That's just a tradeoff one has to accept when using Python. Every language has tradeoffs.
No, it doesn't. The desired type is known; it's "Addable" (i.e., "doesn't throw an exception when the built-in add operator is used"). The problem is expressing that in Python's type notation in a way that catches all edge cases.
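The closest standard spelling is a generic Protocol over `__add__`, roughly like the sketch below, but note that it ignores `__radd__` and the other edge cases under discussion, which is the point.

```python
from typing import Protocol, TypeVar

T = TypeVar("T", contravariant=True)
R = TypeVar("R", covariant=True)

class SupportsAdd(Protocol[T, R]):
    """Anything whose __add__ accepts a T and returns an R."""
    def __add__(self, other: T) -> R: ...

A = TypeVar("A")
B = TypeVar("B")

def slow_add(a: SupportsAdd[A, B], b: A) -> B:
    # Structural check only; the runtime still just calls __add__.
    return a + b
```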
> If you want to allow users to pass in any objects, try to add and fail at runtime
Which is not what the post author wants to do. They want to find a way to use Python's type notation to catch those errors with the type checker, so they don't happen at runtime.
> the entire post is built upon the premise that accepting all types is good API design
It is based on no such thing. I don't know where you're getting that from.
The mistake both you and the reddit post's author make is treating the `+` operator the same as you would an interface method. Despite Python having __add__/__radd__ methods, this isn't true, nor is it true in many other programming languages. For example, Go doesn't have a way to express "can use the + operator" at all, and "can use comparison operators" is defined as an explicit union of built-in types.[0] In C# you could only do this as of .NET 7, which was released in Nov 2022[1] -- was the C# type system unusable for the 17 years prior, when it didn't support this scenario?
If this were any operation on `a` and `b` other than a built-in operator, such as `a.foo(b)`, it would be trivial to define a Protocol (which the author does in Step 4) and have everything work as expected. It's only because of misunderstanding of basic Python that the author continues to struggle for another 1000 words before concluding that type checking is bad. It's an extremely cherry-picked and unrealistic scenario either from someone who is clueless, or knows what they're doing and is intentionally being malicious in order to engagement bait.[2]
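To illustrate the contrast being drawn: for an ordinary method like `a.foo(b)` the Protocol really is trivial (all names here are invented):

```python
from typing import Protocol

class HasFoo(Protocol):
    def foo(self, other: "HasFoo") -> str: ...

class Widget:
    def foo(self, other: HasFoo) -> str:
        return "combined"

def combine(a: HasFoo, b: HasFoo) -> str:
    # Structural typing: Widget never inherits HasFoo, yet matches it.
    return a.foo(b)
```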
This isn't to say Python (or Go, or C#) has the best type system, and it certainly lacks features compared to Rust, which is a valid complaint, but "I can't express 'type which supports the + operator'" is such an insanely esoteric and unusual case, unsupported in many languages, that it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
[0] https://pkg.go.dev/cmp#Ordered
[1] https://learn.microsoft.com/en-us/dotnet/standard/generics/m...
[2] actually reading through the reddit comments, the author specifically says they were engagement baiting so... I guess they had enough Python knowledge to trick people into thinking type hinting was bad, fair enough!
In other words, you agree that the Python type hint system does not give you a good, built-in way to express the "Addable" type.
Which means you are contradicting your claims that the type the article wants to express is "unknown" and that the article is advocating using "Any" for this case. The type is not unknown--it's exactly what I said: "doesn't throw an exception when using the + operator". That type is just not expressible in Python's type hint system in the way that would be needed. And "Any" doesn't address this problem, because the article is not saying that every pair of objects should be addable.
> "I can't express 'type which supports the '+' operator'" is an insanely esoteric and unusual case
I don't see why. Addition is a very commonly used operation, and being able to have a type system that can express "this function takes two arguments that can be added using the addition operator" seems like something any type system that delivers the goods it claims to deliver ought to have.
> unsupported in many languages
Yes, which means many languages have type systems that claim to deliver things they can't actually deliver. They can mostly deliver them, but "mostly" isn't what advocates of using type systems in all programs claim. So I think the article is making a useful point about the limitations of type systems.
> it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
The article never says that either. You are attacking straw men.
If your comparison is Rust, sure, but you can't even express this in Java. No, Java's type system is not great, but it's a type system that's been used for approximately 500 trillion lines of production code powering critical systems and nobody has ever said "Java sucks because I can't express 'supports the + operator' as a generic type". (It sucks for many other reasons.)
Again, it is factually and objectively an esoteric and unusual case. Nobody in the real world is writing generics like this, only academics or people writing programming blogs about esoterica.
If your argument is that all type systems are bad or deficient, fine, but calling out Python for this when it has the exact same deficiency as basically every other mainstream language is asinine.
> The article never says that either. You are attacking straw men.
The article says "Turning even the simplest function that relied on Duck Typing into a Type Hinted function that is useful can be painfully difficult." The subterfuge is that this is not even remotely close to a simple function because the type being expressed, "supports the + operator", is not even remotely close to a simple type.
Sorry, but your unsupported opinion is not "factual and objective".
> If your argument is that all type systems are bad or deficient
I said no such thing, any more than the article did. Again you are attacking a straw man. (If you had said "limited in what they can express", I might buy that. But you didn't.)
I think I've said all I have to say in this subthread.
Again, this is an esoteric limitation from the perspective of writing code that runs working software, not a programming language theory perspective.
You have no idea, and nor does anyone else. But that's what you would need "factual and objective" evidence about to support the claim you made.
By your argument, anything that programming languages don't currently support, must be an "esoteric limitation" because billions if not trillions of lines of code have been written without it. Which would mean programming languages would never add new features at all. But it's certainly "factual and objective" that programming languages add new features all the time. Maybe this is another feature that at some point a language will add, and programmers will find it useful. You don't even seem to be considering such a possibility.
What the reddit post is demonstrating is that the Python type system is still too naive in many respects (and that there are implementation divergences in behavior). In other languages, this is a solved problem - and very ergonomic and safe.
Python’s type hints are in the second category.
But now that I'm coming back to it, I think that this might be a larger category than I first envisioned, including projects whose build/release processes very reliably include the generation+validation+publication of updated docs. That doesn't imply a specific language or release automation, just a strong track record of doc-accuracy linked to releases.
In other words, if a user can validate/regenerate the docs for a project, that gets it 9/10 points. The remaining point is the squishier "the first party docs are always available and well-validated for accuracy" stuff.
Nobody does that, though. Instead they all auto-publish their OpenAPI schemas through rickety-ass, fail-soft build systems to flaky, unmonitored CDNs. Then they get mad at users who tell them when their API docs don't match their running APIs.
Many old school python developers don't realize how important typing actually is. It's not just documentation. It can actually roughly reduce dev time by 50% and increase safety by roughly 2x.
There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
> Many old school python developers don't realize how important typing actually is.
I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
No, you're one of the old school python developers. Types don't hinder creativity, they augment it. The downside is the slight annoyance of updating a type definition and the run time definition vs. just updating the runtime definition.
Let me give you an example of how it hinders creativity.
Let's say you have an interface that is immensely complex: many nested structures, thousands of keys. And let's say you want to change the design by shifting 3 or 4 things around. Let's also say this interface is used by hundreds of other methods and functions.
When you move 3 or 4 things around in a complex interface you're going to break a subset of those hundreds of other methods or functions. You're not going to know where they break if you don't have type checking enabled. You're only going to know if you tediously check every single method/function OR if it crashes during runtime.
With a statically typed definition you can make that change, and the type checker will identify EVERY single place where the methods that use that type need to change as well. This lets you be creative and make any willy-nilly change you want, because you are confident that ANY breakage will be caught by the type checker. This speeds up creativity; without it, you will be slowed down, and even afraid to make the breaking change.
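A toy version of that workflow (the `Payment` type is invented): rename one field and the checker points at every stale call site.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount_cents: int  # renamed from `amount` during a refactor

def fee(p: Payment) -> int:
    # mypy/pyright would flag the old access the moment the field
    # was renamed:  p.amount  ->  error: no attribute "amount"
    return p.amount_cents // 100
```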
You are basically the stereotype I described. An old school python developer. Likely one who got used to programming without types and now hasn't utilized types extensively enough to see the benefit.
>I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
This is true. You're it. You just don't know it. When I say these developers don't know I'm literally saying they think like you and believe the same things you believe BECAUSE they lack knowledge and have bad habits.
The habit thing is what causes the warped knowledge. You're actually slowed down by types because you're not used to them: you spent years coding in Python without types, so it's ingrained for you to test and think without them. Adding types becomes a bit of an initial overhead for these people because their programming style is so entrenched.
Once you get used to it, and once you see that it's really just a slight additional effort, then you will get it. But it takes a bit of discipline and practice to get there.
No, they don't. There is nothing about types that makes incremental development harder. They keep having the same benefits when you work incrementally.
Oh, please, this is either lack of imagination or lack of effort to think. You've never wanted to test a subset of a library halfway through a refactor?
I don't think it's a lack of curiosity from others; it's more like a fundamental lack of knowledge on your part. Let's hear it: what are you actually talking about? Testing a subset of a library halfway through a refactor? How does a lack of types help with that?
My hunch is that the people who see no downsides whatsoever in static typing are those who mostly just consume APIs.
I'm not a consumer of APIs. I've done game programming, robotics, embedded systems development (in C++ and Rust), web frontend development (with and without React, with jQuery, Angular, TypeScript, plain JS, Zod), and web backend development (in Golang, Haskell, Node.js TypeScript, and lots and lots of Python with many of the most popular frameworks: Flask + SQLAlchemy, Django, FastAPI + Pydantic).
I've done a lot. I can tell you: if you don't see how typed languages outweigh untyped ones, you're a programmer whose experience is heavily weighted toward untyped programming. You don't have balanced enough experience to make a good judgement. Usually these people have a "data scientist" background: data analysts, data scientists, machine learning engineers, etc. These guys start programming heavily in the Python world WITHOUT types and develop unbalanced opinions shaped by their initial style of programming. If this describes you, then stop and think... I'm probably right.
I'd been programming for 20+ years and I genuinely couldn't think of any situations where I'd had a non-trivial bug that I could have avoided if I'd had a type checker - claims like "reduce dev time by 50%" didn't feel credible to me, so I stuck with my previous development habits.
Those habits involved a lot of work performed interactively first - using the Python terminal, Jupyter notebooks, the Firefox/Chrome developer tools console. Maybe that's why I never felt like types were saving me any time (and in fact were slowing me down).
Then I had my "they're just interactive documentation" realization and finally they started to click for me.
But if you aren't familiar with a project then dynamic typing makes it an order of magnitude harder to navigate and understand.
I tried to contribute some features to a couple of big projects - VSCode and Gitlab. VSCode, very easy. I could follow the flow trivially, just click stuff to go to it etc. Where abstract interfaces are used it's a little more annoying but overall wasn't hard and I have contributed a few features & fixes.
Gitlab, absolutely no chance. It's full of magically generated identifiers so even grepping doesn't work. If you find a method like `foo_bar` it's literally impossible to find where it is called without being familiar with the entire codebase (or asking someone who is) and therefore knowing that there's a text file somewhere called `foo.csv` that lists `bar` and the method name is generated from that (or whatever).
In VSCode it was literally right-click->find all references.
I have yet to succeed in modifying Gitlab at all.
I did contribute some features to gitlab-runner, but again that is written in Go so it is possible.
So in some cases those claims are not an exaggeration - static types take you from "I give up" to "not too hard".
Flip side of this is that I hate trying to read code written by teams relying heavily on such features, since typically zero time was spent on neatly organizing the code and naming things to make it actually readable (from top to bottom) or grep-able. Things are randomly spread out in tiny files over countless directories, and it's a maze you stumble around by clicking identifiers to jump somewhere. Where something lives rarely matters, as the IDE will find it. I never develop any kind of mental image of that style of code, and it completely rules out casually browsing the code with simpler tools.
Kind of like how you don't learn an area when you always use satnav as quickly as you do when you manually navigate with paper maps. But do you want to go back to paper maps? I don't.
Type annotations don’t double productivity. What does “increase safety by 2×” even mean? What metric are you tracking there?
In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
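To make the NoneType point concrete, here's a minimal sketch (function and variable names are made up for illustration):

```python
from typing import Optional

def find_age(ages: dict, name: str) -> Optional[int]:
    # dict.get returns None when the key is missing; the Optional
    # return type forces callers to confront that case
    return ages.get(name)

def describe(ages: dict, name: str) -> str:
    age = find_age(ages, name)
    # a strict checker (mypy --strict, pyright) flags `age + 1`
    # here until the None case is narrowed away
    if age is None:
        return f"{name}: unknown"
    return f"{name}: {age + 1} next year"
```

Without the annotation, the missing-key case silently propagates a None until something blows up far away from the lookup.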
My own anecdotal metric. Isn't that obvious? The initial post was an anecdotal opinion as well. I don't see a problem here.
>In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
It's not just None. Imagine some highly complex object with nested values and you have some function like this:
def modify_direction(direction_object) -> ...
wtf is direction_object? Is it in Cartesian or polar coordinates? Is it 2D or 3D? Most old school Python devs literally have to find where modify_direction is called, and they find this: def modify_data(data) -> ...
...
modify_direction(data.quat)
Ok, then you have to find where modify_data is called, and so on and so forth until you get here: def combine_data(quat) -> ...
def create_quat() -> quat
And then boom, you figure out what it does by actually reading all the complex quaternion math create_quat does. Absolutely insane. If I have a type, I can just look at the type to figure everything out... you can see how much faster that is.
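For contrast, a hedged sketch of what the annotated version might look like (the Quaternion class and the conjugation body are made up, just to stand in for the real code):

```python
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

def modify_direction(direction: Quaternion) -> Quaternion:
    # the annotation alone answers "wtf is direction_object": it's a
    # quaternion, not Euler angles, not a 2D vector
    # (conjugation here is just a stand-in for the real modification)
    return Quaternion(direction.w, -direction.x, -direction.y, -direction.z)
```

The caller-chasing described above collapses into reading one signature.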
Oh and get this. Let's say there's someone who feels euler angles are better. So he changes create_quat to create_euler. He modifies all the places create_quat is used (which is about 40 places) and he misses 3 or 4 places where it's called.
He then ships it to production. Boom. Extra time debugging production when it crashes, and also extra time tediously finding where create_quat was used. All of that could have been saved by a type checker.
I'm a big python guy. But I'm also big into haskell. So I know both the typed world and the untyped world really well. Most people who complain like you come from a python background where typing isn't used much. Maybe you used types occasionally, but not in a big way.
If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
WTF is “an anecdotal metric”‽ That just sounds like an evasive way to say “I want to make up numbers I can’t justify”.
> wtf is direction object? Is it in Cartesian or is it in polar? Is in 2D or 3D?
This seems very domain-specific.
> Most people who complain like you literally have mostly come from a python background where typing isn't used much. Maybe you used types occasionally but not in a big way.
> If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
It's a metric (how much more productive he is), and anecdotal (based only on his experience). Pretty obvious, I would have thought.
> This seems very domain-specific.
It was an example from one domain but all domains have types of things. Are you really trying to say that only 3D games specifically would benefit from static types?
> Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
Clueless senior then I guess? Honestly I don't know how you can have this much experience and still not come to the obvious conclusion. Perhaps you only write small scripts or solo projects where it's more feasible to get away without static types?
What would you say to someone who said "I have 25 years of experience reading books with punctuation and I think that punctuation is a waste of time. Just because you disagree with me doesn't mean I'm clueless."?
What, I have to have scientific papers for every fucking opinion I have? The initial parent post was an anecdotal opinion. Your post is an opinion. I can't have opinions here without citing a 20-page scientific paper that no one is going to read but will just blindly trust because it's "science"? Come on. What I'm saying is self-evident to people who know. There are thousands of things like this in the world that people just know even though statistical proof hasn't been measured or established. For example, eating horse shit every day probably isn't healthy even though there isn't SCIENCE that directly proves it unhealthy. Type checking is just one of those things.
OBVIOUSLY I think development is overall much better, much faster and much safer with types. I can't prove it with metrics, but I'm confident my "anecdotal" metrics, which I prefaced with "roughly", are "roughly" ballpark trueish.
>This seems very domain-specific.
Domain specific? Basic orientation with quaternions and euler angles is specific to reality. Orientation and rotations exist in reality and there are thousands and thousands of domains that use it.
Also the example itself is generic. Replace euler angles and quats with vectors and polar coordinates. Or cats and dogs. Same shit.
>I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
The amount of years of experience is irrelevant. I know tons of developers with only 5 years of experience who are better than me and tons of developers with 25+ who are horrible.
I got 25 years as well. If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact. It's not an insult. It just means for a specific thing they don't have experience or knowledge which is typical. I'm sure there's tons of things where you could have more experience. Just not this topic.
If you have experience with static languages, it likely isn't that extensive. You're likely more of an old school python guy who spent a ton of time programming without types.
No, but if you’re going to say things like “increase safety by roughly 2x” then if you can’t even identify the unit then you are misleading people.
It’s absolutely fine to have an opinion. It’s not fine to make numbers up.
> I'm confident my "anecdotal" metrics with I prefaced with "roughly" are "roughly" ballpark trueish.
Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
You’re claiming that it’s “in the ballpark”, but what is “in the ballpark”? The problem is not one of accuracy, the problem is that it’s made up.
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
So when I talk about multipliers I have to have a unit? What is the unit of safety? I can't say something like 2x more safe? I just have to say more safe? What if I want to emphasize that it can DOUBLE safety?
Basically with your insane logic people can't talk about productivity or safety or multipliers at the same time because none of these concepts have units.
Look I told YOU it's anecdotal, EVERYONE can read it. You're no longer "deceived" and no one else is.
>Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
If you don't have the capacity to understand what I'm talking about without me specifying a unit, then I'll make one up:
I call it safety units: the number of errors you catch in production. That's my unit: one caught error in prod per year. In untyped languages, let's say you catch about 20 errors a year. With types that goes down to 10.
>It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
What? And you think all opinions are equal, and everyone has the freedom to have any opinion they want, and no one can be right or wrong because everything is just an opinion? Do all opinions need to be fully respected even when they're insane?
Like my example: if you have the opinion that eating horse shit is healthy, I'm going to make a judgement call that your opinion is WRONG. Lack of typing is one of these "opinions".
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
You think it’s impossible for anybody to have an informed opinion that disagrees with yours. You literally think yours is the only possible valid opinion. If that doesn’t set off big warning bells in your head, you are in dire need of a change in attitude.
This conversation is not productive, let’s end it.
I mean, do you think we should have a fair and balanced discussion about the merits of child molestation and rape? We should respect other people's opinions and not tell them they are wrong if their opinion differs? That's what I think of your opinion. I think your opinion is utterly wrong, and I do think my opinion is the valid one.
Now that doesn't mean I disrespect your opinion. It doesn't mean you're not allowed to have a different opinion. It just means I tell you straight up: you're wrong and you lack experience. You're free to disagree with that and tell me the exact same thing. I'm just blunt, and I welcome you to be just as blunt to me. Which you have been.
The thing I don't like about you is that you turned it into discussion about opinions and the nature of holding opinions. Dude. Just talk about the topic. If you think I'm wrong. Tell me straight up. Talk about why I'm wrong. Don't talk about my character and in what manner I should formulate opinions and what I think are facts.
>This conversation is not productive, let’s end it.
I agree, let's end it. But let's be utterly clear: YOU chose to end it by shifting the conversation into saying stuff like "you literally think yours is the only possible opinion." Bro. All you need to do is state why you think my opinion is garbage and prove it wrong. That's the direction of the conversation; you ended it by shifting it into a debate about my character.
Or have enough experience to have lived e.g. the J2EE and C++ template hells and see where this is going.
In general types outweigh no types EVEN with the above.
Obviously this post is still firmly in made-up statistics land, but I agree with OP; in some cases they absolutely do.
New code written by yourself? No, probably not. But refactoring a hairy old enterprise codebase? Absolutely a 2×, 3× multiplier to productivity / time-to-correctness there.
> But it's really valuable documentation! Knowing what types are expected and returned just by looking at a function signature is super useful.
So ... you didn't have this realisation prior to using Python type hints? Not from any other language you used prior to Python?
Maybe its time you expanded your horizons, then. Try a few statically typed languages.
Even plain C gives you a level of confidence in deployed code that you will not get in Python, PHP or Javascript.
Ironically, the worst production C written in 2025 is almost guaranteed to be better than the average production Python, Javascript, etc.
The only people really choosing C in 2025 are those with a ton of experience under their belt, who are comfortable with the language and its footguns due to decades of experience.
IOW, those people with little experience are not choosing C, and those that do choose it have already, over decades, internalised patterns to mitigate many of the problems.
At the end of the day, in 2025, I'd still rather maintain a system written in a statically typed language than a system written in a dynamically typed language.
Experienced users of C can't be the only people who use it if the language is going to thrive. It's very bad for a language when the only ones who speak it are those who speak it well. The only way you get good C programmers is by cultivating bad C programmers, you can't have one without the other. If you cut off the bad programmers (by shunning or just not appealing to them, or loading your language with too many beginner footguns), there's no pipeline to creating experts, and the language dies when the experts do.
The people who come along to work on their legacy systems are better described as archaeologists than programmers. COBOL of course is the typical example, there's no real COBOL programming community to speak of, just COBOL archeologists who maintain those systems until they too shall die and it becomes someone else's problem, like the old Knight at the end of Indiana Jones.
I don't think it's going to thrive. It's going to die. Slowly, via attrition, but there you go.
I've been dabbling with Go for a few projects and found the type system for that to be pleasant and non-frustrating.
With Python, PHP and Javascript, your only option is "comprehensive tests and no types".
With statically typed languages, you have options other than "types with no tests". For example, static typing with tests.
Don't get me wrong; I like dynamically typed languages. I like Lisp in particular. But, TBH, in statically typed languages I find myself producing tests that test the business logic, while in Python I find myself producing tests that ensure all callers in a runtime call-chain have the correct type.
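An example of the kind of caller-contract test I mean (contrived, with made-up names):

```python
# hypothetical untyped function from some module
def apply_discount(price, rate):
    return price * (1 - rate)

# in untyped code, tests like this exist purely to pin the contract down:
# "callers pass floats and get a float back"
assert isinstance(apply_discount(100.0, 0.2), float)

# with `def apply_discount(price: float, rate: float) -> float`,
# the same contract is stated once and checked at every call site
```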
BTW: You did well to choose Go for dipping your toes into statically typed languages - the testing comes builtin with the tooling.
(and if you want to embrace static types, the language starting with them might get advantages over an optional backwards compatible type system)
You may have read this already but the biggest surprise one of the Go creators had was Go was motivated by unhappiness with C++, and they expected to get C++ users, but instead people came from Python and Ruby: https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
I've worked on large python codebases for large companies for the last ~6 years of my career; types have been the single biggest boon to developer productivity and error reduction on these codebases.
Just having to THINK about types eliminates so many opportunities for errors, and if your type is too complex to express it's _usually_ a code smell; most often these situations can be re-written in a more sane albeit slightly verbose fashion, rather than using the more "custom" typing features.
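A made-up illustration of that rewrite: instead of threading a hairy anonymous type like `dict[str, int | list[str]]` through the codebase, give the shape a name (`JobResult` here is hypothetical):

```python
from dataclasses import dataclass

# slightly more verbose than a raw dict, but the shape is now explicit
# and trivially checkable at every call site
@dataclass
class JobResult:
    exit_code: int
    warnings: list

def summarize(result: JobResult) -> str:
    return f"exit={result.exit_code}, warnings={len(result.warnings)}"
```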
No one gets points for writing "magical" code in large organizations, and typing makes sure of this. There's absolutely nothing wrong with writing "boring" python.
Could we have accomplished this by simply having used a different language from the beginning? Absolutely, but oftentimes that's not an option for a company with a mature stack.
TL;DR -- Typing in Python is an exceptional tool for scaling your engineering organization on a codebase.
But, boy, have we gone overboard with this now. Modern libraries seem to be creating types for the sake of it. I am drowning in nested types that never seem to bottom out at native types. The painful part is that the libraries' code examples don't even show them.
Like, copy-paste an OpenAI example and see if the LSP is happy. Now I've gotten into this situation where I am mentally avoiding type errors from some libraries and edging into wishing Pydantic et al. never happened.
For those unaware, due to the dynamic nature of Python, you declare a variable type like this
foo: Type
This might look like Typescript, but it isn't, because "Type" is actually an object. In Python, classes and functions are first-class objects that you can pass around and assign to variables. The obvious problem with this is that you can only use as a type an object that in "normal Python" would be available in the scope of that line, which means that you can't do this:
def foo() -> Bar:
    return Bar()

class Bar:
    pass
Because "Bar" is defined AFTER foo(), it isn't in scope when foo() is declared. To get around this you use this weird string-like syntax:

def foo() -> "Bar":
    return Bar()
This already looks ugly enough that it should make Pythonistas ask "Python... what are you doing?" but it gets worse. Suppose you have a cyclic reference between two files, something that works out of the box in statically typed languages like Java, and that works in Python when you aren't using type hints, because every object is the same "type" until it quacks like a duck. That isn't going to work once you add type hints, because you end up with a cyclic import. More specifically, you don't normally need cyclic imports in Python because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST for the type hints. To get around this, the solution is this monstrosity:
if typing.TYPE_CHECKING:
    import Foo from foo
And that's code that only "runs" when the static type checker is checking the types. Nobody wants Python 4, but this was such an incredibly convoluted way to add this feature, especially when you consider that it means every module now "over-imports" just to add type hints it previously didn't have.
Every time I see it, it makes me think that if type checks are so important, maybe we shouldn't be programming in Python to begin with.
Yes, `typing.TYPE_CHECKING` is there so that you can conditionally avoid imports that are only needed for type annotations. And yes, importing modules can have side effects and performance implications. And yes, I agree it's ugly as sin.
But Python does in fact allow for cyclic imports — as long as you're importing the modules themselves, rather than importing names `from` those modules. (By the way, the syntax is the other way around: `from ... import ...`.)
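Worth adding: within a single module, the quoted forward reference from upthread can be avoided entirely with lazy annotation evaluation (PEP 563). A minimal sketch:

```python
from __future__ import annotations

# with lazy annotation evaluation, the forward reference below
# needs no quotes and raises no NameError at definition time
def make_bar() -> Bar:
    return Bar()

class Bar:
    pass
```

Cross-module cycles still need the `TYPE_CHECKING` guard, but the string syntax itself is no longer necessary.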
With python, because types are part of python itself, they can thus be programmable. You can create a function that takes in a typehint and returns a new typehint. This is legal python. For example, below I create a function that dynamically returns a type restricting a dictionary to a specific key and value.
from typing import TypedDict
def make_typed_dict(name: str, required_key: str, value_type: type):
    return TypedDict(name, {required_key: value_type, "id": int})
# Example
UserDict = make_typed_dict("UserDict", "username", str)
def foo(data: UserDict):
    print(data["username"])
With this power, in theory you can create programs where types essentially "prove" your program correct, and in theory eliminate unit tests. Languages like Idris specialize in this. But it's not just rare/specialized languages that do this. Typescript, believe it or not, has programmable types that are so powerful that functions returning types like the one above are actually VERY commonplace. I was a bit late to the game with Typescript, but I was shocked to see it taking cutting edge stuff from the typing world and making it popular among users.

In practice, using types to prove programs valid in place of testing is a bit too tedious compared with tests, so people don't go overboard with it. It is a much safer route than testing, but much harder. Additionally, as of now, whether type-level functions can be enforced and executed in Python really depends on how powerful the typechecker is. It's certainly possible; it's just that nobody has done it yet.
I'd go further than this actually. Python is actually a potentially more powerfully typed language than TS. In TS, types are basically another language tacked onto javascript. Both languages are totally different and the typing language is very very limited.
The thing with python is that the types and the language ARE the SAME thing. They live in the same universe. You complained about this, but there's a lot of power in it, because types basically become Turing complete and you can create a type that does anything, including proving your whole program correct.
Like I said that power depends on the typechecker. Someone needs to create a typechecker that can recognize type level functions and so far it hasn't happened yet. But if you want to play with a language that does this, I believe that language is Idris.
And, as you heavily imply in your post, type checkers won't be able to cope with it, eliminating one of the main benefits of type hints. Neither will IDEs / language servers, eliminating the other main benefit.
I implied no such thing. I literally said there's a language that already does this: Typescript. IDEs cope with it just fine.
>That's not a benefit. That's a monstrosity.
So typescript is a monstrosity? Is that why most of the world who uses JS in node or the frontend has moved to TS? Think about it.
I am not that deeply familiar with Python typings development but it sounds fundamentally different to the languages you compare to.
https://www.youtube.com/watch?v=0mCsluv5FXA&t
Idris on the other hand is SPECIFICALLY designed so types and the program live in the same language. See the documentation intro: https://www.idris-lang.org/pages/example.html
The powerful thing about these languages is that they can prove your program correct. With testing, you can never verify a program to be correct.
Testing is a statistical sampling technique. To verify a program as correct via tests you have to test every possible input and output combination of your program, which is impractical. So instead people write tests for a subset of the possibilities which ONLY verifies the program as correct for that subset. Think about it. If you have a function:
def add(x: int, y: int) -> int
How would you verify this program is 100% correct? You'd have to test every possible combination of x, y and add(x, y). Instead you test 3 or 4 possibilities in your unit tests, and this helps with the overall safety of the program because of statistical sampling: if a small sample of the logic is correct, it says something about the entire population of the logic. Types, on the other hand, prove your program correct.
def add(x: int, y: int) -> int:
    return x + y
If the above is type checked, your program is proven correct for ALL possible values of those types. If the types are made more advanced by being programmable, then it becomes possible for type checking to prove your ENTIRE program correct. Imagine:
def add<A: addable < 4, B: addable < 4>(x: A, y: B) -> A + B:
    return x + y
With a type checker that can analyze the above, you can create an add function that at most takes ints < 4 and returns an int < 8, thereby verifying even more of your addition function's correctness.

Python, on the other hand, doesn't really have type checking. It has type hints. Those type hints can be defined in the same language space as Python. So a type checker must read Python to a limited extent in order to get the types, and Python can also read those same types. It's just that Python doesn't do any type checking with the types, while the type checker doesn't do anything with the Python code other than typecheck it.
Right now though, for most typecheckers, if you create a function in Python that returns a typehint, the typechecker is not powerful enough to execute that function to find the final type. But this could certainly be done if there were the will, because Idris has already done it.
If you insist on the same language for specifying types, some Lisp variants do that with a much nicer syntax.
Python people have been indoctrinated since ctypes that a monstrous type syntax is normal, and they reject anything else. In fact, Python type hints are basically stuck at the ctypes level, syntax-wise.
But that doesn't mean it's not useful to have this capability as part of your typesystem. It just doesn't need to be fully utilized.
You don't need to program a type that proves everything correct. You can use this power to make aspects of the program MORE correct than plain old types allow. Typescript is a language that does this, and it is very common to find types in Typescript that are more "proofy" than regular types in other languages.
See here: https://www.hacklewayne.com/dependent-types-in-typescript-se...
Typescript does this. Above there's a type that's only a couple of lines long that proves a string reversal function reverses a string. I think even going that deep is overkill but you can define things like Objects that must contain a key of a specific string where the value is either a string or a number. And then you can create a function that dynamically specifies the value of the key in TS.
I think TS is a good example of a language that practically uses proof-based types. The syntax is terrible enough that it prevents people from going overboard with it, and the result is the most practical application of proof-based typing that I've seen. What Typescript tells us is that proof-based typing need only be sprinkled throughout your code; it shouldn't take it over.
TypeScript solves this with its own syntax that never gets executed by an interpreter, because types are stripped when TS is compiled to JS.
Easy: make IO calls illegal in the type checker. The type checker of course needs to execute code in a sandbox. It won't be the full Python language. Idris ALREADY does this.
def foo() -> "Bar":
    return Bar()
But it will throw an error if copy-pasted into a REPL. However, all of these issues should be fixed in 3.14 with PEP 649 and PEP 749:
> At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object.
> This mechanism delays the evaluation of annotations expressions until the annotations are examined, which solves many circular reference problems.
This would have been the case if the semantics of the original PEP649 spec had been implemented. But instead, PEP749 ensures that it is not [0]. My bad.
This has many benefits, like forcing you to think about the dependencies and layers of your architecture. Here is a good read about why, from F# that has the same limitation https://fsharpforfunandprofit.com/posts/cyclic-dependencies/
As others already mentioned, importing __annotations__ also works.
> Well, these complaints are unfounded.
"You're holding it wrong." I've also coded quite a bit of OCaml and it had the same limitation (which is where F# picked it up in the first place), and while the issue can be worked around, it still seemed to creep up at times. Rust, also with some virtual OCaml ancestry, went completely the opposite way.
My view is that while in principle it's a nice property that you can read and understand a piece of code by starting from the top and going to the bottom (and a REPL does exactly that), in practice it's not the ultimate property to uphold.
Use typing.Self
class Foo:
    def __init__(self, bar):
        self.bar = bar

class Bar:
    def __init__(self, foo):
        self.foo = foo
Obviously both call into each other to do $THINGS... Pure madness. So my suggestion: try not to have interdependent classes :D
Maybe I am just a bit burned by this particular example I ran into (where this pattern should IMO not have been used).
On the other hand, I tend to take it as a hint that I should look at my module structure, and see if I can avoid the cyclic import (even if before adding type hints there was no error, there still already was a "semantic dependency"...)
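If the interdependence genuinely can't be avoided, lazy annotation evaluation at least lets the hints be written without tricks. A sketch (Foo/Bar are stand-ins):

```python
from __future__ import annotations
from typing import Optional

class Foo:
    def __init__(self, bar: Optional[Bar] = None) -> None:
        # Bar isn't defined yet, but the lazy annotation allows the hint
        self.bar = bar

class Bar:
    def __init__(self, foo: Optional[Foo] = None) -> None:
        self.foo = foo
```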
Don't have cyclic references between two files.
It makes testing very difficult, because in order to test something in one file, you need to import the other one, even though it has nothing to do with the test.
It makes the code more difficult to read, because you're importing these two files in places where you only need one of them, and it's not immediately clear why you're importing the second one. And it's not very satisfying to learn that you're importing the second one not because you "need" it but because the circular import forces you to.
Every single time you have cyclic references, what you really have is two pieces of code that rely on a third piece of code. So take that third piece, separate it out, and have the first two depend on it.
Now things can be tested, imports can be made sanely, and life is much better.
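A sketch of that extraction, with the file boundaries simulated by comments (all names made up):

```python
from dataclasses import dataclass

# shared.py -- the third piece both sides depend on
@dataclass
class User:
    name: str

# billing.py -- would import only from shared; no cycle
def invoice(user: User) -> str:
    return f"invoice for {user.name}"

# auth.py -- would also import only from shared; no cycle
def greet(user: User) -> str:
    return f"hello {user.name}"
```

Each module can now be tested in isolation by importing only `shared`.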
Thought you were talking about TypeScript for a moment there.
Also python is far less aggressive with lint warnings so it is much easier to make mistakes
At first I thought it was because of the lack of types. But in actuality the lack of types was a detriment for python. It was an illusion. The reason why python felt so much better was because it had clear error messages and a clear path to find errors and bugs.
In C++, memory leaks and seg faults are always hidden from view, so EVEN though C++ is statically typed, it's practically less safe than Python and much harder to debug.
The whole python and ruby thing exploding in popularity back in the day was a trick. It was an illusion. We didn't like it more because of the lack of typing. These languages were embraced because they weren't C or C++.
It took a decade for people to realize this with type hints and typescript. This was a huge technical debate, and now all those people who were against types have been proven utterly wrong.
It's an illusion only you once had. Java (a language that is not C or C++) got mainstream way before Python.
Without typing it is literally 100x harder to refactor your code. Types are like a contract which, if maintained after the refactor, gives you confidence. Over time that leads to faster development.
Duck typing is one of the best things about Python. It provides a developer experience second to none. Need to iterate over a collection of things? Great! Just do it! As long as it is an iterable (defined by methods, not by type) you can use it anywhere you want to. Want to create a data object that maps any hashable type to just about anything else? Dict has you covered! Put anything you want in there and don't worry about it.
If we ended up with a largely bug free production system then it might be worth it, but, just like other truly strongly typed languages, that doesn't happen, so I've sacrificed my developer experience for an unfulfilled promise.
If I wanted to use a strongly typed language I would, I don't, and the creeping infection of type enforcement into production codebases makes it hard to use the language I love professionally.
Iterable[T]
> Want to create a data object that maps any hashable type to just about anything else?
Mapping[T, U]
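Both of those duck-typed patterns survive intact once the contract is spelled out with abstract collection types. A minimal sketch (the function names are made up for illustration):

```python
from collections.abc import Iterable, Mapping

def total_length(items: Iterable[str]) -> int:
    # Any iterable of strings works: list, tuple, set, generator...
    return sum(len(item) for item in items)

def invert(table: Mapping[str, int]) -> dict[int, str]:
    # Any mapping works: dict, MappingProxyType, a custom Mapping subclass...
    return {value: key for key, value in table.items()}

print(total_length(["ab", "cde"]))        # 5
print(total_length(s * 2 for s in "ab"))  # generators are fine too: 4
print(invert({"one": 1, "two": 2}))       # {1: 'one', 2: 'two'}
```

The hints don't restrict the duck typing; they just let a checker verify that every caller really does pass something iterable or mapping-shaped.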
However it is indeed annoying for those of us who liked writing Python 2.x-style dynamically-typed executable pseudocode. The community is now actively opposed to writing that style of code.
I don’t know if there’s another language community that’s more accepting of Python 2.x-style code? Maybe Ruby, or Lua?
Hell, Python type annotations were only introduced in Python 3.5; the language was 24 years old by then! So no, the way I write Python is the way it was meant to be. Type hints are the gadget that was bolted on when the language was already fully matured. It's pretty ridiculous painting code without type hints as unpythonic; that's the world upside down.
If I wanted to write very verbose typed code I would switch to Go or Rust. My python stays nimble, clean and extremely readable, without type hints.
Overall, I have found very few Python 3 features are worth adopting (one notable exception is f-strings). IMO most of them don’t pull their weight, and many are just badly designed.
Mypy was introduced with support for both for Python 2.x and 3.x (3.2 was the current) using type comments before Python introduced a standard way of using Python 3.0’s annotation syntax for typing; even when type annotations were added to Python proper, some uses now supported by them were left to mypy-style type comments in PEP 484/Python 3.5, with type annotations for variables added in PEP 526/Python 3.6.
And duck typing with the expected contract made explicit and subject to static verification (and IDE hinting, etc.) is one of the best things about Python typing.
> If we ended up with a largely bug free production system then it might be worth it, but, just like other truly strongly typed languages, that doesn't happen
I find I end up at any given level of bug freeness with less effort and time with Python-with-types than Python-without-types (but I also like that typing being optional means that its very easy to toss out exploratory code before settling on how something new should work.)
Same.
Type hints also basically give me a "don't even bother running this if my IDE shows type warnings" habit that speeds up python development.
Absence of warnings doesn't guarantee me bug-free code but presence of warnings pretty much guarantees me buggy code.
Type hints are a cheap way to reduce (not eliminate) run time problems.
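As a tiny illustration of that trade (the function here is invented for the example): the hint costs one annotation, and a checker such as mypy or pyright then rejects a bad call statically instead of letting it blow up at run time.

```python
from datetime import date

def days_until(deadline: date) -> int:
    # The annotation documents the contract; a static checker enforces it.
    return (deadline - date.today()).days

remaining = days_until(date(2030, 1, 1))   # fine
print(isinstance(remaining, int))          # True

# days_until("2030-01-01")  # flagged by the type checker before the code runs;
#                           # without the hint this only fails at runtime with
#                           # a TypeError inside the subtraction.
```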
Sometimes you don't have a choice though, and other people have picked Python despite it rarely being the best language for any job.
In that case it's nice to be able to use static type hints and benefit from improved readability, productivity and reliability.
IMO, that complaint almost always goes with overuse of concrete types when abstract (Protocol/ABC) types are more accurate to the function of the code.
There was a time when that was a limitation in Python typing, but it stopped being true long ago — it has now been fixed for almost as long as Python typing had existed before the fix.
If I had to rewrite a Python project, I would consider Rust or another statically typed language before choosing to continue in a dynamic language with types bolted on. I hope the situation improves for dynamic languages with optional types, but it still feels weird and bolted onto the language because it is.
I'm a professional .Net Core developer, but I'd throw my hat in the ring for Swift on this one. While obviously not exactly a 1:1 with Rust, there are definitely some common benefits between the two. Though, from what I understand of Rust (very little), its typing system is slightly more strict than Swift's, which is slightly more strict than C#'s.
And if you re-use the same type store (SQLite DB) across multiple instrumented runs, you can further improve it.
https://github.com/RightTyper/RightTyper
(full disclosure, I am one of its authors).
Another issue is that abstract types are completely undocumented and have no tooling support. You say it's easier to use an abstract type. Can you tell me what I need to define to create a working subtype of AbstractDict? Or Number? Or IO? It's completely undefined, and the only way to do it is to just define the type and then try it out and patch when it breaks because a method was missing.
Finally, there is no multiple inheritance. That means I can't define something which is both a subtype of AbstractArray and IO, for example.
An ability to work with types within the language is already a win for me.
Also should check out ty by astral, it is pretty fast and does a good job at typechecking.
Besides, the lack of static typing is a lot of the appeal for beginners. It's much harder to convince a non-CS beginner why they should bother with the extra burden of type hints. They are optional anyway and just slow folks down (or so they might think). Careful with generally demanding that everybody use them.
But they probably help coding assistants to make fewer mistakes, so maybe that will soon be an argument if it isn't already. (That's an angle I expected in the article.)
Definitely not a best of both world type outcome
I've been using this sparingly: https://pypi.org/project/type-enforced/
Ideally, with static checking, the runtime shouldn't need to care about types, because code that typechecks shouldn't be capable of not behaving according to the types declared.
Python, even with the most restrictive settings in most typecheckers, may not quite achieve that, but it certainly reduces the chance of surprises of that kind compared to typing information in docstrings, or just locked away in the unstated assumptions of some developer.
When I was working on a fairly large TypeScript project it became the norm for dependencies to have type definitions in a relatively short space of time.
And now my personal opinion: If we are going the static typing way, I would prefer simply to use Scala or similar instead of Python with types. Unfortunately, in the same way that high-performance languages like C attract premature optimizers, static types attract premature "abstracters" (C++ attracts both). I also think that dynamic languages have the largest libraries for technical merit reasons. Being more "fluid" makes them easier to mix. In the long term the ecosystem converges organically on certain interfaces between libraries.
And so here we are with the half baked approach of gradual typing and #type: ignore everywhere.
Types seem like a “feature” of mature software. You don’t need to use them all the time, but for the people stuck on legacy systems, having the type system as a tool in their belt can help to reduce business complexity and risk as the platform continues to age because tooling can be built to assert and test code with fewer external dependencies.
* Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
* Types are incredibly valuable on hardened production code.
* Most good production code started out spikey, experimental or as an MVP and transitioned.
And so here we are with gradual typing because "throwing away all the code and rewriting it to be "perfect" in another language" has been known for years to be a shitty way to build products.
I'm mystified that more people here don't see that the value and cost of types is NOT binary ("they're good!" "they're bad!") but exists on a continuum that is contingent on the status of the app and sometimes even the individual feature.
This is clearly seen with typescript and the movement for "just use JS".
Furthermore, with LLMs, it should be easier than ever to experiment in one language and use another language for production loads.
Press "X" to doubt. Types help _a_ _lot_ by providing autocomplete, inspections, and helping with finding errors while you're typing.
This significantly improves the iteration speed, as you don't need to run the code to detect that you mistyped a variable somewhere.
The more interesting questions, like “should I use itertools or collections?” Autocomplete can’t help with.
Even within a recent toy 1h python interview question having types would've saved me some issues and caught an error that wasn't obvious. Probably would've saved 10m in the interview.
For me, I often don't feel any pain points when working below about 1kloc (when doing JS), but if a project is above 500loc it's often a tad painful to resume it months later, when I've started to forget why I used certain data structures that aren't directly visible (adding types at that point is usually the best choice, since it gives a refresher of the code at the same time as a soundness check).
Type strictness also isn't binary. A program with lots of dicts that should be classes doesn't get much safer just because you wrote : dict[str, dict] everywhere.
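A sketch of that point (the User shape is invented for the example): the dict-of-dicts annotation type-checks, but says nothing about which keys exist, so a typo sails through; moving the same data into a class gives the checker something real to verify.

```python
from dataclasses import dataclass

# Annotated, but barely safer: no checker knows which keys a user dict
# should have, so the misspelled "emial" key goes unnoticed until runtime.
loose: dict[str, dict] = {"alice": {"age": 31, "emial": "a@example.com"}}

# The same data as a class: a checker now verifies both construction
# (keyword names) and every attribute access.
@dataclass
class User:
    age: int
    email: str

strict: dict[str, User] = {"alice": User(age=31, email="a@example.com")}
print(strict["alice"].email)  # a@example.com
```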
This is what people say, but I don't think it's correct. What is correct is that say, ten to twenty years ago, all the statically typed languages had other unacceptable drawbacks and "types bad" became a shorthand for these issues.
I'm talking about C (nonstarter for obvious reasons), C++ (a huge mess, footguns, very difficult, presumably requires a cmake guy), Java (very restrictive, slow iteration and startups, etc.). Compared to those just using Python sounds decent.
Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).
It's common for Rust to become very difficult to iterate in.
But Java was the high-level, GCed, application development language - and more importantly, it was the one dominating many university CS studies as an education language before python took that role. (Yeah, I'm grossly oversimplifying - sincere apologies to the functional crowd! :) )
The height of the "static typing sucks!" craze was more like a "The Java type system sucks!" craze...
Not to mention boilerplate BS.
Recently, Java has improved a lot on these fronts. Too bad it’s twenty-five years late.
I find I’ve spent so much time writing with typed code that I now find it harder to write POC code in dynamic languages because I use types to help reason about how I want to architect something.
Eg “this function should calculate x and return”, well if you already know what you want the function to do then you know what types you want. And if you don’t know what types you want then you haven’t actually decided what that function should do ahead of building it.
Now you might say "the point of experimental code is to figure out what you want functions to do". But even if you're writing an MVP, you should know what each function should do by the time you've finished writing it. Because if you don't know how to build a function, then how do you even know that the runtime will execute it correctly?
While a boon during prototyping, a project may need more structural support as the design solidifies, it grows, or a varied, growing team takes responsibility.
At some point those factors dominate, to the extent “may need” support approaches “must have.”
But when it comes to refactoring, having type safety makes it very easy to use static analysis (typically the compiler) check for type-related bugs during that refactor.
I’ve spent a fair amount of years in a great many different PL paradigms and I’ve honestly never found loosely typed languages any faster for prototyping.
That all said, I will say that a lot of this also comes down to what you’re used to. If you’re used to thinking about data structures then your mind will go straight there when prototyping. If you’re not used to strictly typed languages, then you’ll find it a distraction.
Writing map = {} is a few times faster than map: dict[int, str] = {}. Now multiply by ten instances. Oh wait, I’m going to change that to a tuple of pairs instead.
It takes me about three times longer to write equivalent Rust than Python, and sometimes it’s worth it.
Let’s take Visual Basic 6, for example. That was very quick to prototype in even with “option explicit” (basically forcing type declarations) defined. Quicker, even, than Python.
Typescript isn’t any slower to prototype in than vanilla JavaScript (bar setting up the build pipeline — man does JavaScript ecosystem really suck at DevEx!).
Writing map = {} only saves you a few keystrokes. And unless you’re typing really slowly with one finger like an 80 year old using a keyboard for the first time, you’ll find the real input bottleneck isn’t how quickly you can type your data structures into code, but how quickly your brain can turn a product spec / Jira ticket into a mental abstraction.
> Oh wait, I’m going to change that to a tuple of pairs instead
And that’s exactly when you want the static analysis of a strict type system to jump in and say “hang on mate, you’ve forgotten to change these references too” ;)
Having worked on various code bases across a variety of different languages, the refactors that always scare me the most isn’t the large code bases, it’s the ones in Python or JavaScript because I don’t have a robust type system providing me with compile-time safety.
There’s an old adage that goes something like this: “don’t put off to runtime what can be done in compile time.”
As computers have gotten exponentially faster, we’ve seemed to have forgotten this rule. And to our own detriment.
Cementing that in early on is a big premature optimization (i.e. waste) when the code has a large likelihood of being deleted. Refactors are not large at this point, and changes are trivial to fix.
Also, in my experience, the long term for software arrives in a couple of weeks.
I guess Python is next.
Count the amount of `Any` / `unknown` / `cast` / `var::type` in those codebases, and you'll notice that they aren't particularly statically typed.
The types in dynamic languages are useful for checking validity in majority of the cases, but can easily be circumvented when the types become too complicated.
It is somewhat surprising that dynamic languages didn't go the pylint way, i.e. checking the codebase by auto-determined types (determined based on actual usage).
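The instrumentation-based approach (which tools like RightTyper, mentioned elsewhere in the thread, implement far more thoroughly) can be sketched with a toy decorator that just records the argument types actually observed at runtime:

```python
import functools
from collections import defaultdict

# Toy sketch only: record which types each function actually receives.
observed: dict[str, set[type]] = defaultdict(set)

def record_types(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for arg in args:
            observed[func.__name__].add(type(arg))
        return func(*args, **kwargs)
    return wrapper

@record_types
def double(x):
    return x * 2

double(3)
double("ab")
# A real tool would turn this record into a candidate annotation like `int | str`.
print(sorted(t.__name__ for t in observed["double"]))  # ['int', 'str']
```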
I’ve heard this before, but it’s not really true. Yes, maybe the majority of JavaScript code is now statically-typed, via Typescript. Some percentage of Python code is (I don’t know the numbers). But that’s about it.
Very few people are using static typing in Ruby, Lua, Clojure, Julia, etc.
OTOH I’m not arguing that most code should be dynamically-typed. Far from it. But I do think dynamic typing has its place and shouldn’t be rejected entirely.
Also, I would have preferred it if Python had concentrated on being the best language in that space, rather than trying to become a jack-of-all-trades.
[0] https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...
For the average Julia package, I would guess that most types are statically known at compile time, because dynamic dispatch is detrimental to performance. I consider that to be the definition of static typing.
That said, Julia functions seldomly use concrete types and are generic by default. So the function signatures often look similar to untyped Python, but in my opinion this is something entirely different.
All the languages you name are niche languages compared to Python, JS (/ TS) and PHP. Whether you like it or not.
Many years ago I felt Java typing could be overkill (some types could have been deduced from context and they were too long to write), so it was probably more an issue with the maturity of the tooling than anything else.
E.g. when Python is used as a 'scripting language' instead of a 'programming language' (like for writing small command line tools that mainly process text), static typing often just gets in the way. For bigger projects where static typing makes sense I would pick a different language. Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
Note 2: From my experience, in Java, I have NEVER seen a function that explicitly consumes an Object. In Java, you always name things. Maybe with parametric polymorphism, to capture complex typing patterns.
Note 3: unfortunately, you cannot subclass String, to capture the semantic of its content.
So you did not see any Java code from before version 5 (in 2004) then, because the language did not have generics for the first several years it was popular. And of course many were stuck working with older versions of the language (or variants like mobile Java) without generics for many years after that.
Probably because the adoption of the generics has been absolutely massive in the last 20 years. And I expect the same thing to eventually happen with Typescript and [typed] Python.
[*]: nor have I seen EJB1 or even EJB2. Spring just stormed them, in the last 20 years.
Many such cases.
I'd be interested in seeing you expand on this, explaining the ways you feel Python doesn't make the cut for programming language while doing so for scripting.
The reason I say this is because, intuitively, I've felt this way for quite some time but I am unable to properly articulate why, other than "I don't want all my type errors to show up at runtime only!"
Everybody knows the limitations of JSON. Don't state the obvious problem without stating a proposed solution.
Exchanging RDF, more precisely its [more readable] "RDF/turtle" variant, is probably what will eventually come to the market somehow.
Each object of a RDF structure has a global unique identifier, is typed, maintains typed links with other objects, have typed values.
https://search.datao.net/beta/?q=barack%20obama
Open your javascript console, and hover the results on the left hand side of the page with your mouse. The console will display which RDF message triggered the viz in the center of the page.
Update: you may want to FIRST select the facet "DBPedia" at the top of the page, for more meaningful messages exchanged.
Update 2: the console does not do syntax highlighting, so here is the highlighted RDF https://datao.net/ttl.jpg linked to the 1st item of " https://search.datao.net/beta/?q=films%20about%20barack%20ob... "
JSON forces you to fit your graph of data into a tree structure, which poorly captures the cardinalities of the original graph.
Plus, of course, the concept of an object type does not exist in JSON.
You have to admit that the size and complexity of the software we write has increased dramatically over the last few "decades". Looking back at MVC "web applications" I've created in the early 2000s, and comparing them to the giant workflows we deal with today... it's not hard to imagine how dynamic typing was/is ok to get started, but when things exceed one's "context", you type hints help.
Yes, it was your responsibility to keep track of correctness, but that also taught me to write better code, and better tests.
Dynamic typing also gives tooling such as LSPs and linters a hard time figuring out completions/references lookup etc. Can't imagine how people work on moderate to big projects without type hints.
IMHO the idea of a complex and inference-heavy type system that is mostly useless at runtime and compilation but focused on essentially interactive linting is relatively recent, and its popularity is due to TypeScript's success.
I think that static typing proponents were thinking of something more along the lines of Haskell/OCaml/Java, rather than a type-erased system on top of a language where [1,2] > 0 is true because it is converted to "NaN" > "0".
personally i like the dev-sidecar approach to typing that Python and JS (via TS) have taken to mitigate the issue.
I do want to be able to write a dynamically typed function or subsystem during the development phase, and "harden" with types once I'm sure I got the structure down.
But the dynamic system should fit well into the language, and I should be able to easily and safely deal with untyped values and convert them to typed ones.
Sometime around TypeScript 2.9, it finally started adding constructs that made gradual typing of real-world JS code sane, but by then there was a stubborn perception of it being bad/bloated/Java-ish, etc., despite it maturing into something fairly great.
They don't. They become gradually typed, which is a thing of its own.
You can keep the advantages of dynamic languages, the ease of prototyping but also lock down stuff when you need to.
It is not a perfect union; generally the trade-off is that you either can't achieve the same safety level as in a purely statically typed language, because you need to provide some escape hatches, or you need an extremely complex type system to capture the expressiveness of the dynamic side. Most of the time it is a mixture of both.
Still, I think this is the way to go. Not "dynamic typing won" or "static typing won": both are useful, and having a language support both is a huge productivity boost.
For example what I like about PHPStan (tacked on static analysis through comments), that it offers so much flexibility when defining type constraints. Can even specify the literal values a function accepts besides the base type. And subtyping of nested array structures (basically support for comfortably typing out the nested structure of a json the moment I decode it).
What happened to Python is that it used to be a "cool" language, whose community liked to make fun of Java for their obsession with red-taping, which included the love for specifying unnecessary restrictions everywhere. Well, just like you'd expect from a poorly functioning government office.
But then everyone wanted to be cool, and Python was adopted by the programming analogue of the government bureaucrats: large corporations which treat programming as a bureaucratic mill. They don't want fun or creativity or one-off bespoke solutions. They want an industrial process that works on as large a scale as possible, to employ thousands of worst-quality programmers, but still reliably produce slop.
And incrementally, Python was made into Java. Because, really, Java is great for producing slop on an industrial scale. But the "cool" factor was important to attract talent because there used to be a shortage, so, now you have Python that was remade to be a Java. People who didn't enjoy Java left Python over a decade ago. So that Python today has nothing in common with what it was when it was "cool". It's still a worse Java than Java, but people don't like to admit defeat, and... well, there's also the sunk cost fallacy: so much effort was already spent at making Python into a Java, that it seems like a good idea to waste even more effort to try to make it a better Java.
And I think they can be correct in rejecting it: banging out a small useful project (preferably below 1000 loc) flows much faster if you just build code doing things rather than start annotating (which can quickly become a mind-sinkhole of naming decisions that interrupts a building flow).
However, even less complex 500 loc+ programs without typing can become a pita to read after the fact and approaching 1kloc it can become a major headache to pick up again.
Basically, you can't beat the speed of going nude, but size+complexity is always an exponential factor in how hard continuing and/or resuming a project is.
Automatic type inference and dynamic typing are totally different things.
dynamic x = "Forces of Darkness, grant me power";
Console.WriteLine(x.Length); // Dark forces flow through the CLR
x = 5;
Console.WriteLine(x.Length); // Runtime error: CLR consumed by darkness.
C# also has the statically typed 'object' type which all types inherit from, but that is not technically a true instance of dynamic typing.
When you use `var`, everything is as statically typed as before, you just don't need to spell out the type when the compiler can infer it. So you can't (for example) say `var x = null` because `null` doesn't provide enough type information for the compiler to infer what's the type of `x`.
this is a lovely double entendre
When JavaScript programs were a few hundred lines to add interactivity to some website, type annotations were pretty useless. Now the typical JavaScript project is far larger and far more complex. The same goes for Python.
If one thinks back to some of the early statically typed languages, you'd have a huge rift: you either had the entirely weird world of Caml and Haskell (which can express most of what Python type hints have, and could for many years), or something like C, in which types are merely some compiler hints, tbh. Early Java may have been a slight improvement, but eh.
Now, especially with decent union types, you can express a lot of idioms of dynamic code easily. So it's a fairly painless way to get type completion in an editor, so one does that.
It’s valid to say “you don’t need types for a script” and “you want types for a multi-million LOC codebase”.
JSON's interaction with types is still annoying. A deserialized JSON could be any type. I wish there was a standard python library that deserialized all JSON into dicts, with opinionated coercing of the other types. Yes, a custom normalizer is 10 lines of code. But, custom implementations run into the '15 competing standards' problem.
Actually, there should be a popular type-coercion library that deals with a bunch of these annoying scenarios. I'd evangelize it.
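For what it's worth, the "10 lines of custom normalizer" the parent mentions might look something like this; the coercion rules here (whole-number floats become ints, keys forced to str) are one arbitrary opinion, not a standard:

```python
import json
from typing import Any

def normalize(value: Any) -> Any:
    """Recursively coerce deserialized JSON: dict keys become str,
    whole-number floats become ints, everything else stays as the stdlib made it."""
    if isinstance(value, dict):
        return {str(k): normalize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    if isinstance(value, float) and value.is_integer():
        return int(value)
    return value

data = normalize(json.loads('{"n": 2.0, "items": [1, 2.5, {"ok": true}]}'))
print(data)  # {'n': 2, 'items': [1, 2.5, {'ok': True}]}
```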
I use plenty of statically typed languages, Python's type hinting does not bring me joy.
Can you help me out with an example of a Python usage pattern against which the type system seems to be fighting?
I often use Python for data munging and I'll frequently write code that goes
foo = initial_value
...
foo = partially_cleaned_up_value
...
if check:
    foo = finallylikethis
else:
    foo = orlikethis

Where the type of the value being assigned to foo is different each time. Now, obviously (in this simplistic example that misses subtleties) I could declare a new variable for each transformation step, or do some composite type-building thing, or refactor this into separate functions for each step that requires a different type, but all of those options are unnecessary busy work for what should be a few simple lines of code.

> all of those options are unnecessary busy work for what should be a few simple lines of code
If you re-type your variable often, then how do you make sure you’re really keeping track of all those types?
If you re-type it only a few times, then I’m not entirely convinced that declaring a few additional variables really constitutes busywork.
Small example with additional variables instead of re-typing the same variable:
# pylint: disable=disallowed-name, missing-function-docstring, missing-module-docstring, redefined-outer-name
from typing import NewType

NEEDS_CHECKING = True

NotCleaned = NewType("NotCleaned", str)
Checked = NewType("Checked", str)
Cleaned = NewType("Cleaned", str)

original_foo = ["SOME ", "dirty ", " Data"]
annotated_foo = [NotCleaned(item) for item in original_foo]
cleaned_foo = [
    Cleaned(item.lower().strip().replace("dirty", "tidy"))
    for item in annotated_foo
]

foo: list[Checked | Cleaned]
if NEEDS_CHECKING:
    for idx, item in enumerate(cleaned_foo):
        if item and (item[0] == " " or item[-1] == " "):
            raise RuntimeError(f"Whitespace found in item #{idx}: {item=}")
        if "dirt" in item:
            raise RuntimeError(f"Item #{idx} is dirty: {item=}")
    foo = [Checked(item) for item in cleaned_foo]
else:
    foo = list(cleaned_foo)

print(foo)
# => ['some', 'tidy', 'data']

This survives strict type checking (`mypy --strict`). I don't feel that renaming the variables introduces much noise or busywork here? One might argue that renaming even adds clarity?

TIL, thank you!
"if a parameter is typed as an int, then only run the specialized 'int' code to process it"
This would increase performance and make typing more useful
I guess it would work with the ongoing JIT work, which (as far as I understood it) runs the code "as usual", then notices that a specific variable is always a dict (or whatever). Then it patches the code to run the dict-optimized code by default (and falls back to the generic code if, somehow, the variable is no longer a dict).
With typing, the generic code could be avoided altogether. The algorithm would be:
- notice that some variable can be processed by a dict-optimized code (because its typing is a dict, or something that looks like a dict etc)
- when processing, check that the variable is indeed a "dict", raise an exception if not
- run the optimized code
- if the typing information changes (because the class has been redefined and the variable is no longer a "dict"), then go to step 1 and either stick with the current optimized code, use another one, or use the generic one
This would:
- enforce types (you said that variable is a Thing but a Thing was not given: exception)
- improve the jit by removing the bootstrap phase (where the jit watches and then tries to guess what could be improved)
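In plain Python, the guard-then-specialize step being described might look roughly like this. This is a hand-written illustration of what such a JIT would emit, not how CPython's JIT actually works:

```python
def process(data: dict) -> list:
    # The "guard": verify the annotation before taking the fast path.
    if type(data) is not dict:
        raise TypeError(f"declared dict, got {type(data).__name__}")
    # The dict-specialized code: no generic-protocol dispatch needed.
    return sorted(data)

print(process({"b": 1, "a": 2}))   # ['a', 'b']
try:
    process([("a", 2)])            # annotation violated -> exception, as proposed
except TypeError as exc:
    print(exc)                     # declared dict, got list
```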
(or perhaps this is a stupid idea that cannot work :) )

It's not perfect in Python, and I see some developers introduce unnecessary patterns, trying to make type-"perfect" `class Foo(Generic[T, V])` (or whatever) abstractions where they are not really necessary. But if the industry is really going all-in on Python for more than scripting, so should we for typed Python.
Well, at least it doesn't create two incompatible Pythons like async and (I assume) free threading.
You should not see type hints as real, hard types, but more as a kind of documentation that helps your linter and type checker catch mistakes.
Because you can now use typing WITH the entire Python ecosystem.
Like back in 2.7 difference between byte-array and string... Offloading such cases is mentally useful.
That said, I’m looking into the stubs.
I personally largely prefer the first kind, but it seems even the standard formatting rules are against it (two empty lines between free functions etc.)
And it happens quite often in large codebases. Sometimes external dependencies report wrong types, e.g., a tuple instead of a list. It's easy to make such a mistake when a library is written in a compiled language and just provides stubs for types. Tuples and lists share the same methods, so it will work fine for a lot of use cases. And since your type checker will force you to use a tuple instead of a list, you will never know that it's actually a list that can be modified unless you disable type checking and inspect the data.
But they both seem to handle typing similarly.
I can't put my finger on why. Anybody else?
It's not quite all or nothing, but it's annoying to work with it if you only use it for some things and not for others. I find that if you have a mixture of TS and JS in various files I would rather just go all in on TypeScript so I don't have to manually annotate.
With Python you're still just working with Python files.
At my work we have a jit compiler that requires type hints under some conditions.
Aside from that, I avoid them as much as possible. The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
For example a common failure mode in my work’s codebase is that some function will take something that is indexable by ints. The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types! And you better believe that every single admissible type will eventually be fed to this function. Sometimes people will try to keep up, annotating it with a Union of the (growing) list of admissible types, but eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
So, if the jit compiler needs the annotation I am happy to provide it, but otherwise I will proactively not provide any, and I will sometimes even delete existing annotations when they are devolving into silliness.
The bigger problem is that the type system expressed through hints in Python is not the type system Python is actually using. It's not even an approximation. You can express in the hint type system things that are nonsense in Python and write Python that is nonsense in the type system implied by hints.
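A minimal illustration of that gap: hints are not enforced at runtime, so code that the checker would call nonsense runs without complaint.

```python
def half(x: int) -> str:  # a type checker flags this return type...
    return x / 2          # ...but Python happily executes it

# Neither the int constraint nor the str return is enforced at runtime.
print(half(5))  # 2.5
```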
The type system introduced through typing package and the hints is a tribute to the stupid fashion. But, also, there is no syntax and no formal definitions to describe Python's actual type system. Nor do I think it's a very good system, not to the point that it would be useful to formalize and study.
In Russian, there's an expression "like a saddle on a cow"; I'm not sure what the equivalent in English would be. It describes a situation where someone is desperately trying to add a desirable feature to an existing product that is ultimately not compatible with that feature. This, in my mind, is the best description of the relationship between Python's actual type system and the one from the typing package.
“To fit a square peg into a round hole”
  from typing import Protocol, TypeVar

  T_co = TypeVar("T_co", covariant=True)

  class Indexable(Protocol[T_co]):
      def __getitem__(self, i: int) -> T_co: ...

  def f(x: Indexable[str]) -> None:
      print(x[0])

I am failing to format it properly here, but you get the idea.
Sequence[SupportsFloat] | Mapping[int,SupportsFloat]
Whether or not you explicitly write out the type, I find that functions with this sort of signature often end up with code that checks the type of the arguments at runtime anyway. This is expensive and kind of pointless. Beware of bogus polymorphism. You might as well write two functions a lot of the time. In fact, the type system may be gently prodding you to ask yourself just what you think you’re up to here.

This is really just the same mistake as the original expanding union, but with overly narrow abstract types instead of overly narrow concrete types. If it relies on “we can use indexing with an int and get out something whose type we don’t care about”, then it’s a Protocol with the following method:
  def __getitem__(self, i: int, /) -> Any: ...

More generally, even if there is a specific output type when indexing, or the output type of indexing can vary but in a way that impacts the output or other input types of the function, it is a protocol with a type parameter T and this method:

  def __getitem__(self, i: int, /) -> T: ...
It doesn’t need to be a union of all possible concrete and/or abstract types that happen to satisfy that protocol, because it can be expressed succinctly and accurately in a single Protocol.

> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)
If you'd want monospace you should indent the snippet with two or more spaces:
  from typing import Protocol, TypeVar

  T_co = TypeVar("T_co", covariant=True)

  class Indexable(Protocol[T_co]):
      def __getitem__(self, i: int) -> T_co: ...

  def f(x: Indexable[str]) -> None:
      print(x[0])

Generally it’s not worth trying to fix this stuff. The type signature is hell to write and ends up being super complex if you get it to work at all. Write a cast or Any, document why it’s probably ok in a comment, and move on with your life. Pick your battles.
So, just:

  class Indexable[T](Protocol):
      def __getitem__(self, i: int, /) -> T: ...

is enough. For broad things, write Any or skip it.
> The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
I'll admit that this is why I hate Python, and it's probably this spirit of the language, as you call it. I never really know what parameters a function takes. Library documentation often shows a few use cases, but doesn't really provide a reference, so I end up having to dig into the source code to figure it out on my own. Untyped and undocumented kwargs? Everywhere. I don't understand how someone could embrace so much flexibility that it becomes entirely undiscoverable for anyone but maintainers.
Well, you could say that the problem in this case was the lack of documentation, if you wanted. The type signature could be part of the documentation, from this point of view.
Let me give a kind-of-concrete example: one year I was working through a fast.ai course. They have a Python layer above the raw ML stuff. At the time, the library documentation was mediocre: the code worked, there were examples, and the course explained what was covered in the course. There were no type hints. It's free (gratis), I'm not complaining. However, once I tried making my own things, I constantly ran into questions about "can this function do X" and it was really hard to figure out whether my earlier code was wrong or whether the function was never intended to work with the X situation. In my case, type hints would have cleared up most of the problems.
If the code base expects flexibility, trusting documentation is the last thing you'd want to do. I know some people live and die by the documentation, but that's just a bad idea when duck typing or composition is heavily used for instance, and documentation should be very minimal in the first place.
When a function takes a myriad of potential input, "can this function do X" is an answer you get by reading the function or the tests, not the prose on how it was intended 10 years ago or how some other random dev thinks it works.
  from typing import Protocol

  class SupportsQuack(Protocol):
      def quack(self) -> None: ...
This of course works with dunder methods and such. Also, you can annotate with @runtime_checkable (also from typing) to make `isinstance`, etc. work with it.

Imagine one of your functions just wants to move an iterator forward, and another just wants the current position. You're stuck with either requiring a full iterator interface when only part of it is needed, or creating one protocol for each function.
In day to day life that's dev time that doesn't come back as people are now spending time reading the protocol spaghetti instead of reading the function code.
I don't deny the usefulness of typing and interfaces in stuff like libraries and heavily used common components. But that's not most of your code in general.
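For what it's worth, the split described above can be sketched with two narrow Protocols, one per capability (all names here are hypothetical):

```python
from typing import Protocol

class SupportsAdvance(Protocol):
    def advance(self) -> None: ...

class SupportsPosition(Protocol):
    def position(self) -> int: ...

class Cursor:
    """A concrete type that happens to satisfy both protocols."""
    def __init__(self) -> None:
        self._pos = 0
    def advance(self) -> None:
        self._pos += 1
    def position(self) -> int:
        return self._pos

def step(c: SupportsAdvance) -> None:   # only needs advance()
    c.advance()

def where(c: SupportsPosition) -> int:  # only needs position()
    return c.position()

cur = Cursor()
step(cur)
print(where(cur))  # 1
```

Whether the extra protocol definitions pay for themselves is exactly the dev-time trade-off being pointed at.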
  from __future__ import annotations
  from typing import TYPE_CHECKING

  if TYPE_CHECKING:
      from _typeshed import SupportsGT
---

In the abstract sense though, most code in general can't work with anything that quack()s, or it would be incorrect to. The flip method on a penguin's flipper in a hypothetical animallib would probably have different implications than the flip method in a hypothetical lightswitchlib.
Or less by analogy, adding two numbers is semantically different than adding two tuples/str/bytes or what have you. It makes sense to consider the domain modeling of the inputs rather than just the absolute minimum viable to make it past the runtime method checks.
But failing that, there's always just Any if you legitimately want to allow any input (but this is costly as it effectively disables type checking for that variable) and is potentially an indication of some other issue.
[1]: https://docs.python.org/3.14/library/collections.abc.html
No, you are creating a Protocol (the kind of Python type) for every protocol (the descriptive thing the type represents) that is relied on for which an appropriate Protocol doesn’t already exist. Most protocols are used in more than one place, and many common ones are predefined in the typing module in the standard library.
Python type hints manage to largely preserve the flexibility while seriously increasing confidence in the correctness, and lack of crashing corner cases, of each component. There's really no good case against them at this point outside of one-off scripts. (And even there, I'd consider it good practice.)
As a side bonus, lack of familiarity with Python type hints is a clear no-hire signal, which saves a lot of time.
Never has been an issue in practice...
I work at big tech and the number of bad deploys and reverts I've seen go out due to getting types wrong is in the hundreds. Increased type safety would catch 99% of the reverts I've seen.
What I've experienced is that other factors make the biggest difference. Teams that write good tests, have good testing environments, good code review processes, good automation, etc tend to have fewer defects and higher velocity. Choice of programming language makes little to no difference.
So does Python:
Type annotations can indeed seem pointless if you are unwilling to learn how to use them properly. Using a giant union to type your (generic) function is indeed silly; you just have to make that function generic, as explained in another comment, or, I guess, remove the type hints.
That in itself violates the spirit of the language, IMO. “There should be one obvious way to do it”.
- There is one obvious way to provide type hints for your code, it’s to use the typing module provided by the language which also provides syntax support for it.
- You don’t have to use it because not all code has to be typed
- You can use formatted strings, but you don’t have to
- You can use comprehensions but you don’t have to
- You can use async io, but you don’t have to. But it’s the one obvious way to do it in python
The obvious way to annotate a generic function isn’t with a giant Union, it’s with duck typing using a Protocol + TypeVar. Once you known that, the obvious way is… pretty obvious.
The obvious way not be bothered with type hints because you don’t like them is not to use them!
Python is full of optional stuff: dataclasses, named tuples, metaprogramming, multiple inheritance. You don't have to use these features, but there is only one way to use them.
The optional nature of those features conflicts with this statement. Optionality already means two ways.
But times change and these days, Python is a much larger language with a bigger community, and there is a lot more cross-pollination between languages as basic philosophical differences between the most popular languages steadily erode until they all do pretty much the same things, just with different syntax.
It never was a thing in Python, it is a misquote of the Zen of Python that apparently became popular as a reaction against the TMTOWTDI motto of the Perl community.
The misquote shifts the emphasis to uniqueness rather than having an obvious way to accomplish goals, and is probably a result of people disliking the “There is more than one way to do it” adage of Perl (and embraced by the Ruby community) looking to the Zen to find a banner for their opposing camp.
So on the language level it doesn’t directly change the behavior, but it is possible to use the types to affect the way code works, which is unintuitive. I think it was a bad decision to allow this, and Python should have opted for a TypeScript style approach.
Lots of very useful tooling such as dataclasses and frameworks like FastAPI rely on this, and your opinion is that it's a bad thing. Why?
In TypeScript, the absence of type-annotation reflection at runtime makes it harder to implement things that people obviously want, for example interop between TypeScript and Zod schemas. Zod instead has to hook into the TS compiler to do these things.
I'm honestly not convinced Typescript is better in that particular area. What python has opted for is to add first class support for type annotations in the language (which Javascript might end up doing as well, there are proposals for this, but without the metadata at runtime). Having this metadata at runtime makes it possible to implement things like validation at runtime rather than having to write your types in two systems with or without codegen (if Python would have to resort to codegen to do this, like its necessary in typescript, I would personally find this less pythonic).
I think on the contrary it allows for building intuitive abstractions where typescript makes them harder to build?
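A toy sketch of the runtime-metadata point, assuming plain classes as annotations (real validators like pydantic handle far more; `validate_call` and `greet` are illustrative names, not a real API):

```python
from typing import get_type_hints

def validate_call(fn, **kwargs):
    """Check keyword arguments against fn's annotations before calling.

    Only handles simple class annotations; generics, unions, etc. would
    need real validator machinery.
    """
    hints = get_type_hints(fn)  # annotations survive to runtime
    for key, value in kwargs.items():
        expected = hints.get(key)
        if isinstance(expected, type) and not isinstance(value, expected):
            raise TypeError(f"{key} should be {expected.__name__}")
    return fn(**kwargs)

def greet(name: str, times: int) -> str:
    return name * times

print(validate_call(greet, name="hi", times=2))  # hihi
```

In TypeScript the equivalent information is erased at compile time, which is exactly why Zod-style schemas have to be declared separately.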
  interface IntIndexable {
    [key: number]: any
  }

You can specify a protocol like this:

  class IntIndexable(Protocol[T]):
      def __getitem__(self, index: int, /) -> T: ...

(Edit: formatting)

Although I understand that it might have been just a simplified example. Usually the "real world" can get very complex.
Yes it is. I believe the reason is that this is all valid python while typescript is not valid javascript. Also, python's type annotations are available at runtime (eg. for introspection) while typescript types aren't.
That said, typescript static type system is clearly both more ergonomic and more powerful than Python's.
That era of Python codebases was miserable to work in, and often ended up in the poorly thought out "we don't know how this works and it has too many bugs, let's just rewrite it" category.
My position is that what is intended must be made clear between type hints and the docstring. Skipping this makes for difficult to read code and has no place in a professional setting in any non-trivial codebase.
This doesn't require type hints to achieve. :param and :rtype in the docstring are fine if type hints aren't present, or for complex cases, plain English in the docstring is usually better.
Proper type hints are typically very easy to add if the codebase is not a mess that passes things around far and wide with no validation. If it is, the problem is not with the type hints.
  from typing import Sequence

  def third(something: Sequence):
      return something[3]

However, if all you are doing is iterating over the thing, what you actually need is an Iterable:

  from typing import Iterable

  def average(something: Iterable):
      for thing in something:
          ...
Statistically, the odds of a language being wrong are much lower than the odds of the programmer being wrong. Not to say that there aren't valid critiques of Python, but we must think of the creators of programming languages and their creations as the top of the field. If a 1400-Elo chess player criticizes Magnus Carlsen's chess theory, it's more likely that the player is missing some theory than that he found a hole in Carlsen's game. The player is better served by approaching a problem with the mentality that he is the problem, rather than the master.
The people at the top of the type-system-design field aren’t working on Python.
No world class expert is going to contribute to Python after 2020 anyway, since the slanderous and libelous behavior of the Steering Council and the selective curation of allowed information on PSF infrastructure makes the professional and reputational risk too high. Apart from the fact that Python is not an interesting language for language experts.
Google and Microsoft have already shut down several failed projects.
I get the idea that Python and Java went in opposite directions. But I'm not aware of any fight between both languages. I don't think that's a thing either.
Regarding stuff that happened in the 2020s: Python was developed in the 90s, and Python 3 launched in 2008. Besides some notable PEPs like type hints and WSGI, the rest of the development is footnotes. The same goes for most languages (with perhaps the exception of the ever-growing C++); languages make strong backwards-compatibility guarantees, so the bulk of their innovation comes from the early years.
Whatever occurs in the 20th and 30th years of development is unlikely to be revolutionary or very significant. Especially ignorable is the drama that might emerge in these discussions: slander, libel, inter-language criticism?
Just mute that out. I've read some news about some communities like Ruby on Rails or Nix that become overtaken by people and discussions of political nature rather than development, they can just be ignored I think.
Could you elaborate on this?
Before that, Google moved heavily from Python to Go.
Microsoft fired the "Faster CPython Team" this year.
For example the dart/flutter team was decimated as well.
Sequence involves more than just __getitem__ with an int index, so if it really is anything int-indexable, a lighter protocol with just that method will be more accurate, both at conveying intent and at avoiding the need to evolve into an odd union type because you have something that satisfies the function’s needs but not the originally defined type.
That's your problem right there. Why are random callers sending whatever different input types to that function?
That said, there are a few existing ways to define that property as a type, why not a protocol type "Indexable"?
it was a sin that python's type system was initially released as a nominal type system. structural types should have been the target from day one.
being unable to just say "this takes anything that you can call .hello() and .world() on" was ridiculous, as that was part of the ethos of the dynamically typed python ecosystem. typechecking was generally frowned upon, with the idea that you should accept anything that fit the shape the receiving code required. it allowed you to trivially create resource wrappers and change behaviors by providing alternate objects to existing mechanisms. if you wanted to provide a fake file that read from memory instead of an actual file, it was simple and correct.
the lack of protocols made hell of these patterns for years.
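The in-memory fake file mentioned above, expressed with a modern Protocol (SupportsRead here is a hand-rolled sketch; typeshed ships similar protocols):

```python
import io
from typing import Protocol

class SupportsRead(Protocol):
    def read(self) -> str: ...

def shout(f: SupportsRead) -> str:
    # Works for real text files and for anything else with a read().
    return f.read().upper()

print(shout(io.StringIO("hello")))  # HELLO
```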
The current docs are "Microsoft-like", they have everything, spread through different pages, in different hierarchies, some of them wrong, and with nothing telling you what else exists.
Really it enabled the Python type system to work as well as it does, as opposed to TypeScript, where soundness is completely thrown out except for some things such as enums
Nominal typing enables you to write `def ft_to_m(x: Feet) -> Meters:` and be relatively confident that you're going to get Feet as input and Meters as output (and if not, the caller who ignored your type annotations is okay with the broken pieces).
The use for protocols in Python in general I've found in practice to be limited (the biggest usefulness of them come from the iterable types), when dealing with code that's in a transitional period, or for better type annotations on callables (for example kwargs, etc).
Most of Python's dunder methods make it possible to build "behave alike" objects for all kinds of behaviors, not just iterables.
Because it’s nice to reuse code. I’m not sure why anyone would think this is a design issue, especially in a language like Python where structural subtyping (duck typing) is the norm. If I wanted inheritance soup, I’d write Java.
Ironically, that support for structural subtyping is why Protocols exist. It’s too bad they aren’t better and aren’t the primary way to type Python code. It’s also too bad that TypedDict actively fought duck typing for years.
Python’s type system is overall pretty weak, but with any static language at least one of the issues is that the type system can’t express all useful and safe constructs. This leads to poor code reuse and lots of boilerplate.
This kind of accidental compatibility is a source of many hard bugs. Things appear to work perfectly, then at some point it does something subtly different, until it blows up a month later
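A small sketch of that accidental compatibility: tuples and lists share enough surface area that code works until mutation is attempted much later.

```python
def first_three(seq):
    # Slicing works identically for lists and tuples...
    return seq[:3]

print(first_three([1, 2, 3, 4]))  # [1, 2, 3]
print(first_three((1, 2, 3, 4)))  # (1, 2, 3)

# ...so a tuple/list mix-up goes unnoticed until someone mutates later:
# (1, 2, 3, 4).append(5)  # AttributeError at runtime
```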
Probably because the actual type it takes is well-understood (and maybe even documented in informal terms) by the people making and using it, but they just don’t understand how to express it in the Python type system.
  sqlalchemy.orm.relationship(argument: _RelationshipArgumentType[Any] | None = None, secondary: _RelationshipSecondaryArgument | None = None, *, uselist: bool | None = None, collection_class: Type[Collection[Any]] | Callable[[], Collection[Any]] | None = None, primaryjoin: _RelationshipJoinConditionArgument | None = None, secondaryjoin: _RelationshipJoinConditionArgument | None = None, back_populates: str | None = None, order_by: _ORMOrderByArgument = False, backref: ORMBackrefArgument | None = None, overlaps: str | None = None, post_update: bool = False, cascade: str = 'save-update, merge', viewonly: bool = False, init: _NoArg | bool = _NoArg.NO_ARG, repr: _NoArg | bool = _NoArg.NO_ARG, default: _NoArg | _T = _NoArg.NO_ARG, default_factory: _NoArg | Callable[[], _T] = _NoArg.NO_ARG, compare: _NoArg | bool = _NoArg.NO_ARG, kw_only: _NoArg | bool = _NoArg.NO_ARG, lazy: _LazyLoadArgumentType = 'select', passive_deletes: Literal['all'] | bool = False, passive_updates: bool = True, active_history: bool = False, enable_typechecks: bool = True, foreign_keys: _ORMColCollectionArgument | None = None, remote_side: _ORMColCollectionArgument | None = None, join_depth: int | None = None, comparator_factory: Type[RelationshipProperty.Comparator[Any]] | None = None, single_parent: bool = False, innerjoin: bool = False, distinct_target_key: bool | None = None, load_on_pending: bool = False, query_class: Type[Query[Any]] | None = None, info: _InfoType | None = None, omit_join: Literal[None, False] = None, sync_backref: bool | None = None, **kw: Any) → Relationship[Any]

Python’s types are machine-checkable constraints on the behavior of your code. Failing the type checker isn’t fatal; it just means you couldn’t express what you were doing in terms it could understand. Although this might mean you need to reconsider your decisions, it could just as well mean you’re doing something perfectly legitimate and the type checker doesn’t understand it.
Poke a hole in the type checker using Any and go on with your day. To your example, there are several ways described in comments by me and others to write a succinct annotation, and this will catch cases where somebody tries to use a dict keyed with strings or something.
Anyway, you don’t have to burn a lot of mental energy on them, they cost next to nothing at runtime, they help document your function signatures, and they help flag inconsistent assumptions in your codebase even if they’re not airtight. What’s not to like?
I would love it if it were better designed. It’s a real downer that you can’t check lots of Pythonic, concise code using it.
though maybe there's a path forward to give a variable a sort of "de-hint", in that it can be anything BUT this type (i.e. an argument can be any indexable type, except a string)
I think this is called a negation type, and it acts like a logical NOT operator. I'd like it too, and I hear that it works well with union types (logical OR) and intersection types (logical AND) for specifying types precisely in a readable way.
No, this is the great thing about gradual typing! You can use it to catch errors and provide IDE assistance in the 90% of cases where things have well-defined types, and then turn it off in the remaining 10% where it gets in the way.
This is a good way of expressing my own frustration with bolting strong typing on languages that were never designed to have it. I hate that TypeScript has won out over JavaScript because of this - it’s ugly, clumsy, and boilerplatey - and I’d be even more disappointed to see the same thing happen to the likes of Python and Ruby.
My background is in strongly typed languages - first C++, then Java, and C# - so I don’t hate them or anything, but nowadays I’ve come to prefer languages that are more sparing and expressive with their syntax.
Can’t you just use a typing.Protocol on __getitem__ here?
https://typing.python.org/en/latest/spec/protocol.html
Something like
  from typing import Protocol, Self

  class Indexable(Protocol):
      def __getitem__(self, i: int) -> Self: ...

Though maybe numpy slicing needs a bit more work to support.

IMO, the trick to really enjoying python typing is to understand it on its own terms and really get comfortable with generics and protocols.
That being said, especially for library developers, the not-yet-existent intersection type [1] can prove particularly frustrating. For example, a very frequent pattern for me is writing a decorator that adds an attribute to a function or class and then returns the original function or class. This is impossible to type hint correctly, and as a result, anywhere I need to access the attribute I end up writing a separate "intersectable" class and either writing a typeguard or calling cast to temporarily transform the decorated object to the intersectable type.
Also, the second you start to try and implement a library that uses runtime types, you've come to the part of the map where someone should have written HERE BE DRAGONS in big scary letters. So there's that too.
So it's not without its rough edges, and protocols and overloads can be a bit verbose, but by and large, once you really learn it and get used to it, I personally find that even just the value of the annotations as documentation is enough to justify the work of adding them.
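The decorator-plus-cast workaround described above might look roughly like this (`Tagged` and `tag` are illustrative names, not a real API):

```python
from typing import Any, Callable, cast

class Tagged:
    # The "intersectable" stand-in: callable *and* carrying a tag.
    tag: str
    __call__: Callable[..., Any]

def tag(label: str):
    def wrap(fn):
        fn.tag = label  # the checker cannot see this added attribute
        return fn
    return wrap

@tag("math")
def double(x: int) -> int:
    return x * 2

# A manual cast recovers the attribute for the checker at use sites.
tagged = cast(Tagged, double)
print(tagged.tag, double(3))  # math 6
```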
Change the declaration to:

  def __getitem__(self, i: int | slice)
Though to be honest I am more concerned about that function that accepts a wild variety of objects that seem to be from different domains...
I'd guess inside the function is a HUGE ladder of 'if isinstance()' to handle the various types and special processing needed. Which is totally reeking of code smell.
Although Python type hints are not expressive enough.
Mind you, I haven't used it before, but it feels very similar to the abstract Mapping types.
  def lol(blarg):  # types? haha you wish. rtfc you poor sod. Pytharn spirit ftw!!!
      ...
      return omg[0].wtf["lol freedom"].pwned(Good.LUCK).figuring * out

Besides, there must be some behavior you expect from this object. You could make a type that reflects this: IntIndexable or something, with an int index method and whatever else you need.
This feels like an extremely weak argument. Just think of it as self-enforcing documentation that also benefits auto-complete; what's not to love? Having an IntIndexable type seems like a great idea in your use case.
You are looking for protocols. A bit futzy to write once, but for a heavily trafficked function it's worth it.
If your JIT compiler doesn't work well with protocols... sounds like a JIT problem not a Python typing problem
That's not how you are supposed to use static typing. Python has "protocols" that allow for structural type checking, which is intended for this exact problem.
> And you better believe that every single admissible type

This is exactly why I hate using Python.

These are similar to interfaces in C# or traits in Rust - you describe what the parameter _does_ instead of what it _is_.
If that is exactly what you want, then define a Protocol:

  from __future__ import annotations
  from typing import Protocol, TypeVar

  T = TypeVar("T")
  K = TypeVar("K")

  class GetItem(Protocol[K, T]):
      def __getitem__(self, key: K, /) -> T: ...

  def first(xs: GetItem[int, T]) -> T:
      return xs[0]

Then you can call "first" with a list or a tuple or a numpy array, but it will fail if you give it a dict. There is also collections.abc.Sequence, which is a type that has .__getitem__(int), .__getitem__(slice), .__len__ and is iterable. There are a couple of other useful ones in collections.abc as well, including Mapping (which you can use to do Mapping[int, t], which may be of interest to you), Reversible, Callable, Sized, and Iterable.

Then you can focus your tests on more interesting things.
You just need to set your build up to actually do the checking as type hints by default are just documentation
  from typing import Callable

  class Pipeline[T]:
      def __init__(self, value: T) -> None:
          self._value = value

      def step[U](self, cb: Callable[[T], U]) -> 'Pipeline[U]':
          return Pipeline(cb(self._value))

      def terminate(self) -> T:
          return self._value

  def _float_to_int(value: float) -> int:
      return int(value)

  def _int_to_str(value: int) -> str:
      return str(value)

  def main() -> None:
      result = Pipeline(3.14)\
          .step(_float_to_int)\
          .step(_int_to_str)\
          .terminate()
      print(result)

  if __name__ == '__main__':
      main()

You could further constrain the generic type through type variables: https://docs.python.org/3/library/typing.html#typing.TypeVar

In my experiment I wanted to get a syntax like this:
  pipeline = Pipeline()
  ...some code here...
  pipeline.add_step(Step(...some meta data..., ...actual procedure to run...))

So then I would need generics for `Step` too, and then Pipeline would need to change its result type with each call of `add_step`, which it seems current type checkers cannot statically check.

I think your solution circumvents the problem maybe, because you immediately apply each step. But then how would the generic type work? When is it bound to a specific type?
Yes, since 3.12.
> Pipeline would need to change result type with each call of `add_step`, which seems like current type checkers cannot statically check.
Sounds like you want a dynamic type with your implementation (note the emphasis). Types shouldn't change at runtime, so a type checker can perform its duty. I'd recommend rethinking the implementation.
This is the best I can do for now, but it requires an internal cast. The caller side is type safe though, and the same principle as above applies:
  from functools import reduce
  from typing import cast, Any, Callable, Mapping, TypeVar

  def _id_fn[T](value: T) -> T:
      return value

  class Step[T, U]:
      def __init__(
          self,
          metadata: Mapping[str, Any],
          procedure: Callable[[T], U],
      ) -> None:
          self._metadata = metadata
          self._procedure = procedure

      def run(self, value: T) -> U:
          return self._procedure(value)

  TInput = TypeVar('TInput')

  class Pipeline[TInput, TOutput = TInput]:
      def __init__(
          self,
          steps: tuple[*tuple[Step[TInput, Any], ...], Step[Any, TOutput]] | None = None,
      ) -> None:
          self._steps: tuple[*tuple[Step[TInput, Any], ...], Step[Any, TOutput]] = (
              steps or (Step({}, _id_fn),)
          )

      def add_step[V](self, step: Step[TOutput, V]) -> 'Pipeline[TInput, V]':
          steps = (
              *self._steps,
              step,
          )
          return Pipeline(steps)

      def run(self, value: TInput) -> TOutput:
          return cast(
              TOutput,
              reduce(
                  lambda acc, val: val.run(acc),
                  self._steps,
                  value,
              ),
          )

  def _float_to_int(value: float) -> int:
      return int(value)

  def _int_to_str(value: int) -> str:
      return str(value)

  def main() -> None:
      step_a = Step({}, _float_to_int)
      step_b = Step({}, _int_to_str)

      foo = Pipeline[float]()\
          .add_step(step_a)\
          .add_step(step_b)\
          .run(3.14)
      print(foo)

      bar = Pipeline[float]()\
          .run(3.14)
      print(bar)

  if __name__ == '__main__':
      main()

It's always a small vocal fraction or they'd be using a different language.
However, in a large codebase, consistency can become a challenge. Different developers often approach the same problem in different ways, leading to a mix of type patterns and styles, especially when there’s no clear standard or when the problem itself is complex.
With the rise of LLM-generated code, this issue becomes even more pronounced — code quality and craftsmanship can easily degrade if not guided by proper conventions.
I enjoy packages like pydantic and SOME simple static typing, but if I’m implementing anything truly OOP, I wouldn’t first reach for Python anyway; the language doesn’t even do multiple constructors or public/private props.
Edit: as a side note, I was interested to learn that for more verbose type specification, it’s possible to define a type in variable-like syntax at the top: mytype = int|str|list|etc.
Maybe I'm missing out on something cool...
try typing a decorator, or anything using file IO
I find it extremely difficult, if not impossible, and I did type theory
(the type checkers being really, really stupid doesn't help either)
If you care about micro-optimizations, the first one that overwhelms everything else is to not use Python.
Anyway, if your types are onerous, you are using them wrong. Even more in a progressive type system where you always have the option of not using them or putting an "Any" there.
You are consistently showing you don't know how to use types or what they can do. Nobody is required to know everything, but keep in mind that this is a gap in your understanding, not in the language.
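The escape-hatch point fits in a few lines. A sketch, assuming untyped JSON-ish input at the boundary (the function name and field are made up for illustration):

```python
from typing import Any

def retries_from_config(raw: dict[str, Any]) -> int:
    # Any lets untyped data enter at the boundary; we narrow the one
    # field we actually use before it propagates through typed code.
    value = raw.get('retries', 3)
    if not isinstance(value, int):
        raise TypeError(f'retries must be int, got {type(value).__name__}')
    return value
```

Everything downstream of this function works with a plain `int` and never sees `Any` again.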
However, if I had a choice, rather than use typehints in python, I would much rather just use a statically typed language. Short, tiny scripts in python? Sure. Anything that grows or lives a long time? Use something where the compiler helps you out.
https://blog.codingconfessions.com/i/174257095/lowering-to-c...
It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
Business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier in cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).
Ruby in particular is already strongly typed, so there aren't too many surprises with automatic conversions or anything like that. RBS also just makes metaprogramming more annoying, and Ruby's ability to do metaprogramming easily is one of its biggest strengths in my opinion.
If I wanted a statically typed language I would just use a statically typed language.
Honestly the only people I see who really push back against it are the people who haven't bothered learning it. Once people use it for a bit, in my experience at least, they don't want to go back.
Years of design also went into Ruby's type system, and for the people that enjoy it - be my guest - but I would never use it for my own code.
* Always strongly type, for local variables, method parameters, and return types
* Avoid Any unless absolutely required
* hasattr() and get() are often code smells; if the type can be known, use that type
* Use beartype for all methods.
I _love_ beartype and want to plug it to everyone on HN: https://github.com/beartype/beartype
I'm building my own coding agent, like Claude, and it is built with opinionated style. Strongly typing Python and using beartype are what it will try to do unless the user specifies otherwise.
I want to believe that correctly typed Python code is easier for smaller models to generate and interact with, but who knows how the trade-offs actually work out.
This also works for humans, but many python programmers who learned python before type hints can't be bothered. :sad_panda:
For PHP it was slowly introduced starting in PHP 5.4, and now it's expected to type hint everything and mark the file strict, with linters complaining left and right that "you're doing a bad job by having mixed-type variables".
In Ruby you get Sorbet or RBS.
What is JavaScript? Oh, you mean TypeScript.
and so on ..
My take is that if you need strong types switch to a language with strong types so you can enjoy some "private static final Map<String, ImmutableList<AttemptType>> getAttemptsBatches(...)"