The itertools package.
never used it but it seems worth a try
Metaclasses are however quite complex (or at least lead to complex behavior) and I mostly avoid them for this reason.
And 'Proxy Properties' are not really a feature at all. Just a specific usage of dunder methods.
just my 2 ct
What’s wrong about learning things by looking at code from more experienced people?
I would add assert_never to the pattern matching section for exhaustiveness checks: https://typing.python.org/en/latest/guides/unreachable.html#...
If you are really interested in "advanced Python", though, I would recommend the book Fluent Python by Ramalho. I have the first edition which is still highly relevant, including the async bits (you just have to translate the coroutines into async syntax). There is a second edition which is more up to date.
I would also recommend checking out the functools[0] and itertools[1] modules in the standard library. Just go and read the docs on them top to bottom.
It's also worth reading the first few sections of Python Data Model[2] and then bookmarking this page.
[0] https://docs.python.org/3/library/functools.html
The only problem I’ve found is that for interviews, people often aren’t familiar with it, which can lead to you solving whatever puzzle they had in far less time than they intended, and without manually building whatever logic it was they assumed you would need.
My favourite example of a similar thing happening to me was when I was asked to reverse the digits in a number. I somewhat jokingly asked if I was assuming base 10, which got some awkward looks so I knew something was up. They weren't impressed at all with my answer of `"".join(reversed(str(123456789)))`. I didn't get the job.
Their loss. Expecting an arithmetic solution to be necessary is incongruous with accepting the overhead of Python. IIRC your approach will actually be faster; the string conversion implicitly number-crunches on the bare metal, whereas any arithmetic approach in Python code won't.
Unless they wanted you to convert back to int afterwards. Or wanted you to reverse the string with the slicing trick (which is, indeed, quite a bit faster yet).
One personal favorite of mine is __all__ for use in __init__.py files. It specifies which names are imported when someone uses `from x import *`. Especially useful when you have other people working on your codebase who tend to import everything, which is rarely a good idea.
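A self-contained sketch of the effect (the module and function names are made up; a throwaway module is built in-memory so the demo runs without a package on disk):

```python
import sys
import types

# Build a fake "mypkg" module whose __all__ exposes only `load`.
mod = types.ModuleType("mypkg")
exec(
    "__all__ = ['load']\n"
    "def load(): return 'loaded'\n"
    "def helper(): return 'internal'\n",
    mod.__dict__,
)
sys.modules["mypkg"] = mod

# `from mypkg import *` copies only the names listed in __all__.
ns: dict = {}
exec("from mypkg import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # ['load']
```

`helper` is still importable explicitly; __all__ only filters the star import.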
Playing with the typesystem and generics like this makes me worry I'm about to have a panic attack.
Give me code that I can understand and debug easily, even when I didn't write it; don't do implicit magical control flow changes unless you have a very good excuse, and then document both the how and the why - and you'll get a product that launches earlier and has fewer bugs.
Sometimes, a few more if statements here and there make code that is easier to understand, even if there's a clever hack that could cut a line or two here and there.
Too many overloads can be a code smell IMHO. It's OK when you are implementing a very common pattern and want proper types (decorators come to mind, so that both @decorator and @decorator() work, where the decorator might also have optional args), but I think in cases like the example in the article it should almost always be two separate functions.
When I code, I try to make everything I write not clever. I am not saving a byte here and there because I am not typing in a program from a magazine in the 1980s (I was there, I am not going back). No code golf. I did my time on the Timex-Sinclair with its miserable two kilobytes of memory and reserved keywords as special-function strokes at a character each.
Each line should do one thing, in general, and it ought to be obvious. Cleverness is held in reserve for when it is truly needed, namely data structures and algorithms.
One of my accomplishments which seems to have mattered only to me is when my apartment-finding/renting system, written entirely in Perl, was transferred into the hands of students, new programming students. Perl is famously "write-once, read-never" and seems to have a culture favoring code golf and executable line noise. Still, the students got back to me and told me how easily-ported everything was, because I had done just one thing per line, avoided $_ and other such shortcuts, and other practices. They were very happy to take it over because I had avoided being cryptic and terse.
    # setup
    yield resource
    # teardown

Not that it makes it any less magical, but at least it's a consistent Python pattern.

The main one is that it makes error handling and clean-up simpler, because you can just wrap the yield in a normal try/except/finally, whereas to do this with __enter__ and __exit__ you have to work out what to do with the exception information in __exit__, which is easy to get wrong:
https://docs.python.org/3/reference/datamodel.html#object.__...
Suppressing or passing the exception is also mysterious, whereas with contextlib you just raise it as normal.
Another is that it makes managing state more obvious. If data is passed into the context manager, and needs to be saved between __enter__ and __exit__, that ends up in instance variables, whereas with contextlib you just use function parameters and local variables.
Finally, it makes it much easier to use other context managers, which also makes it look more like normal code.
Here's a more real-world-like example in both styles:
https://gist.github.com/tomjnixon/e84c9254ab6d00542a22b7d799...
I think the first is much more obvious.
You can describe it in english as "open a file, then try to write a header, run the code inside the with statement, write a footer, and if anything fails truncate the file and pass the exception to the caller". This maps exactly to the lines in the contextlib version, whereas this logic is all spread out in the other one.
It's also more correct, as the file will be closed if any operation on it fails -- you'd need to add two more try/catch/finally blocks to the second example to make it as good.
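A minimal sketch of the generator style being described (illustrative names, not the code from the linked gist):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def framed_file(path):
    f = open(path, "w")      # setup
    try:
        f.write("HEADER\n")
        yield f              # the body of the `with` statement runs here
        f.write("FOOTER\n")
    except Exception:
        f.truncate(0)        # undo partial writes on failure
        raise                # pass the exception to the caller as normal
    finally:
        f.close()            # teardown runs on every path

path = os.path.join(tempfile.gettempdir(), "framed_demo.txt")
with framed_file(path) as f:
    f.write("body\n")
print(open(path).read())
```

The setup, error handling, and teardown read top to bottom, exactly matching the English description above.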
def f(i=0) -> None:
j = i + 1
k = 1
reveal_type(i)
reveal_type(j)
reveal_type(k)
Output: Revealed type is "Any"
Revealed type is "Any"
Revealed type is "builtins.int"
int -> int
Is wrong. At minimum it’s:
Optional[int] -> int
Because you provided a default value, it's clearly not required to provide an input parameter. It's also wrong to assume `0` is an int. There are other valid types it could be. If the default were, say, `42`, I'd be pushing back a little less (outside of the Optional part), but this contrived example from GP had 0, which is ambiguous as to what the inferred typing must be.
def f(i=0) -> None:
reveal_type(i)
The inferred type is not `float` nor `int`, but `Any`. Mypy will happily let you call `f("some string")`.

Pyright correctly deduces the type as int.
In any case it's a bad example as function signatures should always be typed.
> # fun x -> x + 1;;
> - : int -> int = <fun>
>
2) inferring the type is int isn’t guaranteed to be correct in this case
No it’s not. It’s Optional[int] -> int at minimum. There are other completely valid signatures beyond that too.
That's the downside of operator overloading - since it relies on types to resolve, they need to be known and can't be inferred.
It could be:
def f(i=0) -> None:
if i is None:
do_something()
else:
do_something_else()
Yeah, I know it's retarded. I don't expect high-quality code in a codebase missing type annotations like that. Assuming `i` is `int` or `float` just makes incremental adoption of a type checker harder.

I mentioned pyright because (some of) the specific concerns by OP are addressed by it.
Pyright probably works if you use it for a new project from the start or invest a lot of time "fixing" an existing project. But it's a totally different tool and it's silly to criticise mypy without understanding its use case.
I tried Pyright but as you say on an existing project you need a looot of time to "fix" it.
You don’t know but you are addicted to types
Come to the light - Haskell!
Or embrace logic + functional programming: Curry. https://curry-language.org/
I needed the array indices to be int64 and specified them as such during initialization.
Downstreams, however, it would look at the actual index values and dynamically cast them to int32 if it judged there would be no loss in precision. This would completely screw up the roundtrip through a module implemented in C.
Being an intermittent bug it was quite a hell.
The more fancy stuff you add to it, the less attractive it becomes. Sure, most of these things have some sort of use, but I reckon most people do not get deep enough into python to understand all these little things.
That breaks down as soon as you need to work with anyone else's code that uses the "fancy stuff".
Of course, Lua is batteries-not-included so there may be the problem of "progress" in external libraries; in practice things like Penlight barely change though.
* did you know __init__.py is optional nowadays?
* you can do relative imports with things like "from ..other import foo"
* since 3.13 there is a @deprecated decorator that does what you think it does
* the new generics syntax also works on methods/functions: "def method[T](...)" very cool
* you can type kwargs with TypedDicts and Unpack: "def fn(**kwargs: Unpack[MyKwargs])"
* dataclasses (and pydantic) support immutable objects with: "class MyModel(BaseModel, frozen=True)" or "@dataclass(frozen=True)"
* class attributes on dataclasses, etc. can be defined with "MY_STATIC: ClassVar[int] = 42" this also supports abstract base classes (ABC)
* TypeVar supports binding to enforce subtypes: "TypeVar('T', bound=X)", and also a default since 3.13: "TypeVar('T', bound=X, default=int)"
* @overload is especially useful for get() methods to express that the return can't be none if the default isn't None
* instead of Union[a, b] or Optional[a] you can write "a | b" or "a | None" nowadays
* with match you can use assert_never() to ensure exhaustive matching in a "case _:" block
* typing has reveal_type() which lets mypy print the type it thinks something is
* typing's "Self" allows you to more properly annotate class method return types
* the time package has functions for monotonic clocks and others not just time()
anyone know more things?
It has an effect, and is usually worth including anyway. I used to omit it by default; now I include it by default. Also, you say "nowadays" but it's been almost 13 years now (https://peps.python.org/pep-0420/).
> since 3.13 there is a @deprecated decorator that does what you think it does
Nice find. Probably worth mentioning it comes from the `warnings` standard library.
> the time package has functions for monotonic clocks and others not just time()
There's quite a bit in there, but I question how many people need it.
Anyway, it's always surprising to me how when other people make these lists, such a large fraction is taken up by tricks with type annotations. I was skeptical of the functionality when the `typing` standard library was introduced; I've only grown more and more wary of it, even as people continue to insist to me that it's somehow necessary.
It's not optional. Omitting it gets you a namespace package, which is probably not what you want.
> TypeVar supports binding to enforce subtypes: "TypeVar['T', bound=X]",
Using the new generics syntax you mentioned above you can now do:
def method[T: X](...)
If you want to help, there's a section on the Python forum (https://discuss.python.org/c/documentation/26) and a Discord server, and issues with documentation can also be reported on the main Python GitHub issue tracker (https://github.com/python/cpython/labels/docs).
If only. I suspect very few Python programmers can even fully explain what `a + b` does.
If `a` and `b` are instances of classes, many would say it's equivalent to `a.__add__(b)` or `type(a).__add__(a, b)`, but in fact it's much more complex.
- There are situations where `__radd__` takes priority over `__add__`. The rules for determining that priority are complex (and IIRC subtly different from the rules that determine whether `a < b` prioritises `a.__lt__(b)` or `b.__gt__(a)`).
- The lookup of `__add__` etc uses a special form of attribute lookup that's neither equivalent to `a.__add__` nor `type(a).__add__`. This special lookup only searches `type(a)` whereas the first would find an `__add__` function on `a`, and the second on `type(type(a))`.
I've also heard of further complications caused by implementation details leaking into the language semantics - for example, see Armin Ronacher's blog post: https://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-t...
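One of the priority rules mentioned above can be shown in a few lines: when the right operand's type is a subclass of the left's and overrides the reflected method, the reflected method is tried first (a minimal sketch):

```python
class A:
    def __add__(self, other):
        return "A.__add__"
    def __radd__(self, other):
        return "A.__radd__"

class B(A):
    def __radd__(self, other):
        return "B.__radd__"

print(A() + A())  # left operand's __add__ wins as usual
print(A() + B())  # B.__radd__ wins, even though A defines __add__
```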
>>> a + 1  # lookup on class
1+1
2
>>> a.__add__(1)  # instance method
0

2. There is __radd__, which is called if __add__ doesn't support the given types (for different types).

If a and b are lists, the latter modifies the existing list (which may be referenced elsewhere) instead of creating a new one.
I think Python is the only language I've encountered that uses the + operator with mutable reference semantics like this. It seems like a poor design choice.
>>> a = b = [1, 2, 3]
>>> a = a + [4]
>>> a, b
([1, 2, 3, 4], [1, 2, 3])

>>> a = b = [1, 2, 3]
>>> a += [4]
>>> a, b
([1, 2, 3, 4], [1, 2, 3, 4])

What's worse is that sometimes, they are equivalent:

>>> a = b = (1, 2, 3)
>>> a = a + (4,)
>>> a, b
((1, 2, 3, 4), (1, 2, 3))

>>> a = b = (1, 2, 3)
>>> a += (4,)
>>> a, b
((1, 2, 3, 4), (1, 2, 3))
And even worse, in order to support a version of `a += b` that sometimes modifies `a` (e.g. with lists), and sometimes doesn't (with tuples), the implementation of the `+=` operator is convoluted, which can lead to:

>>> t = ([1, 2, 3], ['a'])
>>> t[0] += [4]
TypeError: 'tuple' object does not support item assignment
>>> t
([1, 2, 3, 4], ['a'])

The operation raises a TypeError, despite having succeeded!

Using it since version 1.6.
I do wish the concept of truthiness and falsiness would be taken out and shot, though. It's been responsible for far too many nasty bugs IME, and it only cuts out a few extra characters. Not a great trade-off.
Then everybody loses their minds because the system suddenly started doing something it was never supposed to.
On the python bug tracker somewhere a while back there was a guy who wrote "if time_variable:" which resolved to false, but only when it was called at midnight. It was initially resolved as wontfix.
(midnight being the datetime's conceptual equivalent to empty string, 0, empty list, etc.)
We need a unary "truthy" operator so that code remains compact and readable without the surprising behavior.
For myself, I settled on these patterns for various kinds of tests:
# check for None
if x is not None: ...
# check for empty string
if x != "": ...
# check for empty collection other than string
if not len(x): ...
In this last case I rely on 0 being falsy to make this idiom distinct from checking length for a specific value via equality. So when I'm scanning the code later and I see "not len", I know right away it's an empty check specifically.

But, of course, this only works if you're doing it consistently. And since the language doesn't enforce it, and there's no idiomatic Python standard for it, the footgun remains...
I don't think the problem is that empty strings are falsy per se (although that might not be to your personal preference). Rather, I think the problem is the implicit coercion to bool, hence my desire for a unary operator.
I think this is yet another entry in the (extremely) long list of examples of why implicit type conversions are a bad thing.
Let me correct that. The problem isn't so much that they are falsy per se, but rather that they share this property with so many unrelated things - and this is then combined with dynamic typing, so any given `x` can be any of those things. None/'' in particular is exceedingly common because None is the standard way to report absence of a value in Python.
As far as having a symbolic unary operator for such checks, I think that would go stylistically contrary to the overall feel of Python syntax - I mean, this is a language in which you have to spell out things like "not", and len() is also a function unlike say Lua. It feels like the most Pythonic interface for this would be as an instance property with a descriptive name.
I write an unfortunate amount of Go code; unfortunate because Go supports only 2(?) of the features in this post (structural typing and generics, sort of, if you're feeling generous).
For example, circular imports aren't supported in Go at all. I run into this from time to time: even if you use an interface, sometimes consts or other types are defined in the same package, and the whole thing has to be refactored. No way around it, has to be done.
Circular imports aren't encouraged in Python, but Python never leaves you without options. Importing types only while type checking, or moving imports out of the module level, are quick workarounds. (I hope for TypeScript's `import type` someday.) That's what gives me a "comfort" feeling in Python: knowing that the language isn't likely to force me to work around the language design.
I think this list should also include descriptors[0]: it's another metaprogramming feature that allows running code when accessing or setting class attributes similar to @property but more powerful. (edit: nvm, I saw that they are covered in the proxy properties section!)
I think the type system is quite good actually, even if you end up having to sidestep it when doing this kind of meta-programming. The errors I do get are generally the library's fault (old versions of SQLAlchemy make it impossible to assign types anywhere...) and there's a few gotchas (like mutable collections being invariant, so if you take a list as an argument you may have to type it as `Sequence[]` or you'll get type errors) but it's functional and makes the language usable for me.
I stopped using Ruby because upstream would not commit on type checking (yes I know you have a few choices if you want typing, but they're a bit too much overhead for what I usually use Ruby for, which is writing scripts), and I'm glad Python is committing here.
I don't know about code inside companies, but most new open source projects I encounter use typing. Many old ones have been converted too.
> Is duck typing frowned upon?
No. You can use Protocols to define what shape you expect your duck to be. There's some discussion about whether you should use abstract classes or protocols, though.
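A minimal sketch of a duck-shaped Protocol (the names are made up):

```python
from typing import Protocol

class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:
    def quack(self) -> str:
        return "beep"

def poke(d: Quacker) -> str:
    return d.quack()

# Neither class inherits from Quacker; both match it structurally.
print(poke(Duck()), poke(Robot()))
```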
https://blog.edward-li.com/tech/advanced-python-features/#2-...
def bar(a, /, b):
...
# == ALLOWED ==
bar(1, 2) # All positional
bar(1, b=2) # Half positional, half keyword
# == NOT ALLOWED ==
bar(a=1, b=2) # Cannot use keyword for positional-only parameter
https://docs.python.org/3.12/reference/compound_stmts.html#f...
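For comparison, a sketch of the mirror-image feature: `*` makes every parameter after it keyword-only (`baz` is a made-up name):

```python
def baz(a, *, b):
    return (a, b)

print(baz(1, b=2))   # allowed: b passed by keyword

try:
    baz(1, 2)        # b cannot be passed positionally
except TypeError as e:
    print("rejected:", e)
```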
I did not expect to wake up at 4am seeing my post on front page HN, but here we are nevertheless :D
As the intro mentioned, these started off as 14 small tweets I wrote a month prior to starting my blog. When I finally got that set up, I just thought, "hey, I just spent the better part of two weeks writing these nifty Python tricks, might as well reuse them as a fun first post!"
That's why the flow might seem a little weird (as some pointed out, proxy properties are not really a Python "feature" in and of itself). They were just whatever I found cool that day. I tried to find something more esoteric if it was a Friday, and something useful if it was a Monday. I was also kinda improvising the entire series as it went on, so that was also a factor.
Same goes for the title. These were just 14 features I found interesting while writing Python both professionally and as a hobby. Some people mentioned these are not very "advanced" per se, and fair enough. I think I spent a total of 5 seconds thinking of a title. Oh well!
One I think you missed is getters and setters on attributes!
Congratulations for ending up on the front page! (I hope the server hosting your blog is okay!)
But I’m surprised to read you went from idea to posting each day. I had expected maybe a week of preparation before starting execution.
Regardless, it's a great post, whether one thinks there is too much typing or too many new features in modern Python, or one is a lover of niche solutions to shorten code.
'''
# ===== Don't write this =====
response = get_user_input()
if response:
    print('You pressed:', response)
else:
    print('You pressed nothing')

# ===== Write this instead =====
if response := get_user_input():
    print('You pressed:', response)
else:
    print('You pressed nothing')
'''

The first implementation is immediately clear, even if you're not familiar with Python syntax. If you don't know what the ":=" operator does, the code becomes less readable, and code clarity is traded away in favor of being slightly more concise.
'''
iterable = iter(thing)
while (val := next(iterable, None)) is not None:
    print(val)
'''
is a lot cleaner in my opinion compared to
'''
iterable = iter(thing)
val = next(iterable, None)
while val is not None:
    print(val)
    val = next(iterable, None)
'''
The reason I did not use this example outright was that I wasn't sure people were familiar with the iter API, so I just chose a simpler example for the blog.
First of all, it takes a minute to search "python :=", and the construct itself is pretty simple. It's been part of the language since 2018[0]. I don't think "not knowing the language" is a good reason to avoid it.
Second, the walrus operator limits the variable's scope to the conditional, which can reduce certain bugs. It also makes some scenarios (like if/elif chains) clearer.
I recommend checking out the PEP for some real-world examples.
[0] https://peps.python.org/pep-0572/
Edit: my point about walrus scoping is incorrect. The new variable is function-scoped.
Nope! It's still function-scoped.
In Python, walrus or no walrus, the body of a conditional is never a separate scope.
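A quick demonstration of that scoping (a minimal sketch):

```python
def demo() -> int:
    if (n := 10) > 5:
        pass
    # Walrus targets are function-scoped, not limited to the
    # conditional they appear in, so `n` is still visible here.
    return n

print(demo())  # 10
```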
“I don’t want to use windowing functions in SQL, because most people don’t know what they are.” So you’d rather give up an incredibly powerful part of your RDBMS, and dramatically increase the amount of bandwidth consumed by your DB?
It’s as if the industry is embracing people who don’t want to read docs.
Don't use `match`, macros, lifetimes, ... in rust, someone coming from another language without them might not get what it means. Instead write the equivalent C-looking code and don't take advantage of any rust specific things.
Don't use lisp, someone coming from another language might not be able to read it.
Etc..
At one point if you write code and want to be productive, you need to accept that maybe someone that is not familiar with the language you're using _might_ have to look up syntax to understand what's going on.
Although I think the example of type alises in section 4 is not quite right. NewType creates a new "subtype" which is not equivalent to the original type. That's different to TypeAlias, which simply assigns a name to an existing type. Hence NewType is still useful in Python 3.12+.
I don't believe any reasonable person would call Python statically typed; it just now has a pathway through which one can send additional documentation, with all of its caveats
python3.13 -c '
foo: dict[str, list] = "lol"
print(foo.keys())
'
And, yes, I am aware of the chorus of "just sprinkle more linters".

I am coding in all 4, with some roughly 28 years now, and I don't like what is becoming of Python.
There is a reason why Python became that popular and widely adopted and used, and it is not the extra layers of type checking, annotations and the likes.
This looks familiar to me, but from other languages:

    response := get_user_input()

I am aware of the fact that I am in the minority, and I'm not trying to change anyone's mind; I simply want this voice to be heard from time to time.

All in all, a very comprehensive list of some of the recently introduced features.
There is an older list on SO which readers might also find useful:
https://stackoverflow.com/questions/101268/hidden-features-o...
To be clear, I'm not expecting people to start adding generics to their quick hacked-together Python scripts (in fact, please don't do that). Instead, if you're building a library or maintaining a larger Python codebase, a lot of these start becoming very useful. A lot of the typing features I mentioned are already used by Python under the hood, and a lot of Python developers just take them for granted.
Case in point, the opencv-python (https://github.com/opencv/opencv-python) library has basically no types and it's an absolute pain to work with.
BTW, that's a really good SO thread, thanks for linking it!
Python got popular and widely adopted for the same reason PHP did: 1) it was readily available everywhere and 2) it's an easy language for beginners to pick up and get real applications working quickly.
But it turns out that the language features desirable for a quick prototype done by your fresh-out-of-college founding engineer aren't the same as the language features desirable for your team of 100 engineers trying to work together on the same codebase.
Python is at an odd intersection where it's used by everything from large teams building robust backend applications (particularly in the data/ML space), to data scientists hacking on scripts interactively in Jupyter, to ops people writing deployment scripts. You don't need type checking and annotations if you're writing a few one-off scripts, but you'd be crazy to not take advantage of them for a larger application.
Up to now, Python has emphasized being and remaining the dynamically typed, monkey-patch-happy core language it is, with the completely voluntary option to provide type hints and use them to your advantage as you see fit.
So you can hack away and monkey patch to your heart's content and nothing is taken from you. no rust borrow checker. no need to use a type checker. ducks everywhere.
and I'm not aware of features that have to be used, aka incompatible changes to the core language.
So what is the critique, exactly?
With the type system my opinion on that has changed. We have a pretty large tooling codebase with protocol stacks, GUI, test automation etc. and it's all very maintainable with the type checking.
I'll be honest, I've never understood this language feature (it exists in several languages). Can someone honestly help me understand? When is a function with many potential signatures more clear than just having separate function names?
Thus the (+) operator for addition is "overloaded" or "polymorphic" in the types of numbers that can be added together.
The argument for having a polymorphic signature rather than just multiple separate "monomorphic" functions is similar to that for "generics," otherwise known as "parametric polymorphism": why not just have a function `forEachInt` for iterating over lists of ints, a separate function `forEachChar` for iterating over lists of characters, and so on?
Higher levels of abstraction and generality, less boilerplate and coupling to any particular choice of data structure or implementation.
You could of course go the route of golang which indeed just had you write "monomorphized" versions of everything. Several years later generics were added to the language.
Alternatively, you throw everything out and never have to worry about typing or polymorphism, at the cost of static safety.
1. Typing overloads: TS has typed overloads I think largely as an affordance to an unfortunate feature of Javascript. In my experience overloads are an anti-pattern or at best code smell. It's nice that you can type them if you're cleaning up an existing codebase that uses them but I would consider them tech debt.
2. Keyword-only and Positional-only Arguments: This is the opposite of the 1st feature (ability to make method signatures more strict) but man is the syntax cryptically terse. I'd love to use this everywhere but I'd be concerned about readability.
3. Future Annotations: Thank you for this section - forward references have been a real pain for me recently, and this is the first explanation that scratches the surface of the "why" (rather than just focusing on case-by-case solutions), which is much more helpful. Bring on PEP 649.
4. Generics: Cries in legacy 3.10 codebase
5. Protocols: As a Typescript guy, this seems very cosy & familiar. And not very Pythonic. I'm not sure how to feel.
14. Metaclasses:
> if you are that 1% which has a unique enough problem that only metaclasses can solve, they are a powerful tool that lets you tinker with the internals of the Python object system.
OR if you're one of the many devs that believes their problem is unique & special & loves to apply magical overengineered solutions to simple common problems, then the next person who inherits your code is going to really love your metaclasses. They sure make tracing codepaths fun.
Isn't it super pythonic? One of the first things you learn about Python is that "everything is duck typed", but then the type system is primarily nominally typed. It seems like Protocols should have been there from the start, like Typescript interfaces.
I'm new to Python & what is or isn't "pythonic" doesn't seem very intuitive to me (beyond just reading PEPs all day) - I guess I'm speaking from using the current type system & the idea of nominal types coexisting with structural seems a little disjointed.
Several comments disliking the walrus operator, like many of the features on this list I also hated it… until I found a good use for it. I almost exclusively write strictly typed Python these days (annotations… another feature I originally hated). The walrus operator makes code so much cleaner when you’re dealing with Optionals (or, a Union with None). This comes up a lot with regex patterns:
if (match := pattern.search(line)) is not None:
print(match.group())
Could you evaluate match on a separate line before the conditional? Sure. But I find this a little clearer: the intended life of match is within the conditional, making it less tempting to reuse it elsewhere.

Not a Python feature specifically, but I'd also love to see code that uses regex patterns embrace named capturing groups more often. .group("prefix") is a lot more readable than .group(1).
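Both points combined in one sketch (the log-line pattern is made up):

```python
import re

# Named groups: (?P<name>...) instead of bare (...)
pattern = re.compile(r"(?P<level>\w+): (?P<message>.+)")

if (match := pattern.search("ERROR: disk full")) is not None:
    # .group("level") reads better than .group(1)
    print(match.group("level"), "-", match.group("message"))
```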
- multi subs & methods = "typing overloads"
- named & positional args
- stubs = "future annotations"
- subsets = "Generics"
- protocols
- wrappers = "context managers"
- given / when = "structural pattern matching"
- my declaration = "walrus op"
- short circuit evaluation
- operator chaining
- fmt
- concurrency
If you are a Python coder and you feel the need for some of this, I humbly suggest you take a look at https://raku.org
IMO a better design would be to have a block that always executes at the end of the loop - there's even a reasonable keyword for it, `finally` - but gets a boolean flag indicating whether there was a break or not:
Or better yet, make `break` take an optional argument (which defaults to `True` if unspecified), and that's what you get in `finally`. So this could be written:
And yet somehow I keep forgetting about it in the exact moments when it would probably make my life easier.