I think maybe a more realistic example there would be people using splatting without realizing/internalizing that it performs a full copy, e.g.
xs = [1, *ys]
Another one that stood out was (3). Slots are great, but >95% of the time I'd expect people to want `slots=True` with dataclasses instead of manually writing `__slots__` and a constructor like that. `slots=True` has worked since Python 3.10, so every non-EOL version of Python supports it.
You can directly produce a modified copy, rather than using a mutating operation to implement the modifications.
It should be noted that "return a modified copy" algorithms can be much more efficient than "mutate the existing data" ones. For example, consider the case of removing multiple elements from a list, specified by a predicate. The version of this code that treats the input as immutable, producing a modified copy, can perform a single pass:
def without(source, predicate):
    return [e for e in source if not predicate(e)]
whereas mutating code can easily end up with quadratic runtime, and can also be difficult to get right:
def remove_which(source, predicate):
    i = 0
    while i < len(source):
        if predicate(source[i]):
            # Each deletion requires O(n) elements to shift position.
            del source[i]
        else:
            # The index increment must be conditional,
            # since removing an element shifts the next one
            # and that shifted element must also be considered.
            i += 1
Or if you do care about order, you can emulate the C++ "erase-remove" idiom, by keeping track of separate "read" and "write" positions in the source, iterating until "read" reaches the end, and only incrementing "write" for elements that are kept; and then doing a single `del` of a slice at the end. But this, too, is complex to write, and very much the sort of thing one chooses Python in order to avoid. And you do all that work, in essence, just to emulate what the list comprehension does, but in place.
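For the record, that read/write-position version might look like this (the function name is mine; a sketch):

```python
def remove_which_inplace(source, predicate):
    # C++-style "erase-remove": compact the kept elements toward the
    # front, then delete the tail with a single slice deletion.
    write = 0
    for read in range(len(source)):
        if not predicate(source[read]):
            source[write] = source[read]
            write += 1
    # One O(n) del of a slice instead of many O(n) element shifts.
    del source[write:]

xs = [1, 2, 3, 4, 5, 6]
remove_which_inplace(xs, lambda e: e % 2 == 0)
# xs == [1, 3, 5]
```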
It will also probably be significantly slower than just copying the vector.
In general I feel like these kinds of benchmarks might change with each Python version, so some caveats apply.
> modify[ing] objects in place […] improves performance by avoiding the overhead of allocating and populating new structures.
AFAIK the poor performance of list copies (demonstrated in the article by a million-element list taking 10ms) doesn't come from memory allocation or from copying the contents of the list itself (in this case, a million pointers).
Rather it comes from the need to chase all of those pointers, accessing a million disparate memory locations, in order to increment each element’s reference count.
Which means, eventually, designing your data structures so you generally have two types of structures: one which isn't full of pointers, and one which mostly is.
Also, if we're going to suggest 'write it in another language' approaches, rewrite it in Golang. I detest writing in Golang but once you get the hang of things you can get to the point where your code only takes twice the time to write and 2% of the time (and memory) to run.
Totally, I'm a big fan of statically typed, compiled languages; especially when the codebase is large and/or there are a lot of contributors. I chose the Node example because I feel like it offers the same "ease-of-use" that draws people to Python.
> get to the point where your code only takes twice the time to write and 2% of the time (and memory) to run.
100%. Sometimes this matters, sometimes it doesn't, but if we're talking about "smart performance hacks" this is definitely a top contender.
I work on a Python project and I really wish that it supported multi-threading. If I rewrote it, I would prioritize that feature in the target language.
Some of these are pretty nice python tricks though.
Hack 1: Don't Use The Obviously Wrong Data Structure For Your Problem!
Hack 2: Don't Have The Computer Do Useless Stuff!
Hack 3: Don't Allocate Memory When You Don't Need To!
And now, a word from our sponsor: AI! Use AI to help AI build AI with AI, now with 15% more AI! Only with AI! Ask your doctor if AI is right for you!
It's worth pointing out that a few of them are Python-specific. Compilers can inline code, so there's usually no need to manually inline functions in most languages; that's Python being Python. Which scope the function comes from being important is also quintessentially Python being Python.
The major gains in Python come from... not using Python. Essentially you have to rewrite your code around the fact that numpy and pandas are the ones really doing the work behind the curtain (e.g. aggressively vectorize, use algorithms that can use vectorization well rather than "normal" ones). Number 8 of the list hints at that.
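For a concrete (if contrived) sketch of what "rewrite around numpy" means, with an invented workload:

```python
import numpy as np

def sum_of_squares_loop(arr):
    # "Normal" code: the interpreter executes the loop body per element.
    total = 0.0
    for x in arr:
        total += x * x
    return total

def sum_of_squares_vectorized(arr):
    # Vectorized: the whole loop runs inside numpy's C code.
    return float(np.dot(arr, arr))

xs = np.arange(100, dtype=np.float64)
```

On large arrays the vectorized version is typically orders of magnitude faster, because the per-element interpreter dispatch disappears.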
But Python has a few interesting features that can easily get you big wins, like generators, e.g. https://www.dabeaz.com/generators/Generators.pdf
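A tiny example in that style (the pipeline is invented): nothing is materialized until a consumer pulls values through it.

```python
import itertools

def integers():
    # An infinite source is fine, because consumers pull lazily.
    n = 0
    while True:
        yield n
        n += 1

def squares(seq):
    for n in seq:
        yield n * n

# Compose stages and take only what you need; no intermediate lists.
first_five = list(itertools.islice(squares(integers()), 5))
# first_five == [0, 1, 4, 9, 16]
```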
A friend of mine asked me to translate some (well-written) number theory code a while back, I got about a 250x speedup just doing a line by line translation from Python to Julia. But the problem was embarrassingly parallel, so I was able to slap on an extra 40x by tossing it on a big machine for a few hours for a total of 10,000x. My friend was very surprised – he was expecting around a 10x improvement.
I ended up rewriting the whole thing in Rust (my first Rust project) solely because I noticed that just that simple process - "get some bytes from the network, write them to this file descriptor, update the progress bar's value" was churning my CPU due to how intensive it was for the progress bar to update as often as it was - which wasn't often.
Because of how ridiculous it was I opted to rewrite it in another language; I considered golang but all of the progress bar libraries in Golang are mediocre at best, and I liked the idea of learning more Rust. Surprise surprise, it's faster and more efficient; it even downloads faster, which is kind of ridiculous.
An even crazier example: a coworker was once trying to parse some giant logfile and we ended up nerd-sniping ourselves into finding ways to speed it up (even though it finished while we were doing so). After profiling this very simple code, we found that 99% of the time in processing each line was simply parsing the date, and 99% of that was because Python's strptime is devoted to being able to parse timezones even if the input you're giving it doesn't include one. We played around with things like storing a hash map of "string date to python datetime" since there were a lot of duplicates, but the fastest method was to write an awful Python extension that basically just exposed glibc's strptime so you could bypass Python's (understandably) complex tz parsing. For the version of Python we were using it made parsing hundreds of thousands of dates 47x faster, though now in Python3 it's only about 17x faster? Maybe less.
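The hash-map trick is still worth reaching for before writing an extension; a sketch (the format string is an assumption):

```python
from datetime import datetime
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_ts(s):
    # Log files repeat the same timestamp many times, so most calls
    # become a dict hit instead of a full strptime parse.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")
```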
https://github.com/danudey/pystrptime
I still use Python all the time because usually the time I save writing my code quickly more than outweighs the time I spend having slower code overall; still, if your code is going to live a while, maybe try running it through a profiler and see what surprises you can find.
If you want an actual performance improvement in Python code that most people wouldn't necessarily expect: consider using regexes for even basic string parsing if you're doing a lot of it, rather than doing it yourself (e.g. splitting strings, then splitting those strings, etc.); while regexes "feel" like they should be more complicated and therefore slower or less efficient, the regex engine in Python is implemented in C and there's a decent chance that, with a little tweaking, even simple string processing can be done faster with a regex. Again only important in a hot loop, but still.
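As a sketch of what I mean (the input format and names are invented; which version wins depends on the input shape, so profile before committing):

```python
import re

line = "host=web1;status=200;bytes=5120"

def parse_split(line):
    # Manual: split on ';', then split each field on the first '='.
    return dict(field.split("=", 1) for field in line.split(";"))

# Precompiled pattern: the matching loop runs in C inside the re engine.
PAIR = re.compile(r"([^=;]+)=([^;]*)")

def parse_regex(line):
    return dict(PAIR.findall(line))
```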
For a list, the only way to implement it is to iterate through it (see the `list_contains` function in the CPython code).
But the special `range` object can implement `__contains__` efficiently by looking at its start/stop/step (see the `range_contains` source code).
Although Hack 1 is for demonstration purposes, in most cases you can just do `999999 in range(1000000)`.
In my test, the same `999999 in foo` is 59.1ns for the range object, 27.7ns for the set, 6.7ms for the list. The set is the fastest, except converting the range object to the set takes 21ms.
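The constant-time behavior of range is easy to sanity-check, because it holds even for ranges far too large to ever iterate:

```python
r = range(1_000_000)
assert 999_999 in r and 1_000_000 not in r

# Membership is computed arithmetically from start/stop/step,
# so even an astronomically large range answers instantly:
big = range(0, 10**18, 7)
assert 7 * 10**17 in big        # a multiple of the step, within bounds
assert 7 * 10**17 + 1 not in big  # off by one from a multiple of 7
```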
I remember the day I realized how much I dislike Python. It has never clicked for me, despite writing it on and off since Python 2.0. There are always some new arbitrary places for it to bite you, and it always feels a little yucky. And then one day I saw something like this:
# b is not defined here
if blahblah():
    b = gronk()
# might raise an exception
do_stuff(b)
That tingles in all the wrong places. Ruby has other issues, but the core still feels elegant. I still prefer Scheme, though.
Edit: and maybe someone can explain to me why anyone would make it so that the simplest way to iterate through a collection is not the fastest? This is the case for most languages, but it still feels dumb. Just allow a slightly more obtuse syntax like
for a in b<list>
and make that case just as fast as doing it with a while loop. The iterator protocol is imho for when we frankly don't know/care what we are iterating over, or when we want to be generic, or when the data structure can't expose the most efficient way of doing it (like a tree). There is no reason why for a in b: should be slower than while when b is a list or string.
The hack for using math instead of operators seems dumb.
That said, agreed that this is weird. Python makes some really unintuitive choices regarding scope.
if input() == "dynamic scope?":
    defined = "happyhappy"
print(defined)
In fact, creating a set takes longer than copying a list, since it requires hash insertion, so it's actually much faster to do the opposite of what they suggest for #1 (in the case of a single lookup, for this test case).
Here's the results with `big_set = set(big_list)` inside the timing block for the set case:
List lookup: 0.013985s
Set lookup: 0.052468s
import random
import time

def timeit(func, _list, n=1000):
    start = time.time()
    for _ in range(n):
        func(_list=_list)
    end = time.time()
    print(f"Took {end - start} s")

def lstsearch(_list):
    # randint is inclusive at both ends, so stop at len - 1
    sf = random.randint(0, len(_list) - 1)
    return sf in _list

def setsearch(_list):
    sf = random.randint(0, len(_list) - 1)
    # the set is rebuilt on every call; that's the cost being measured
    return sf in set(_list)

mylist = list(range(100000))
timeit(lstsearch, mylist)
timeit(setsearch, mylist)
----
Took 0.23349690437316895 s
Took 0.8901607990264893 s
woodruffw•2mo ago
> Even if you call into fast code from Python you still have to contend with the GIL which I find very limiting for anything resembling performance.
It depends. A lot of native extension code can run without the GIL; the normal trick is to "detach" from the GIL for critical sections and only reconnect to it once Python needs to see your work. PyO3 has a nice collection of APIs for holding/releasing the GIL and for detaching from it entirely[1].
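You can see the effect from pure Python with any extension that detaches like this; for instance, hashlib releases the GIL while hashing buffers larger than ~2 KiB, so plain threads genuinely overlap (a sketch; no timing assertions, since speedups are machine-dependent):

```python
import hashlib
import threading

data = b"x" * (8 * 1024 * 1024)  # 8 MiB per thread
results = {}

def digest(i):
    # hashlib drops the GIL for large inputs, so these threads can
    # run concurrently even though they are ordinary Python threads.
    results[i] = hashlib.sha256(data).hexdigest()

threads = [threading.Thread(target=digest, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```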
[1]: https://docs.rs/pyo3/0.27.1/pyo3/marker/struct.Python.html#m...
lunias•2mo ago
> native code where performance matters, Python for developer joy/ergonomics/velocity
Makes sense, but I guess I just feel like you can eat your cake and have it too by using another language. Maybe in the past there was a serious argument to be made about the productivity benefits of Python, but I feel like that is becoming less and less the case. People may slow down (a lot) writing Rust for the first time, but I think that writing JavaScript or Groovy or something should be just as simple, but more performant, do multi-threading out of the box, and generally not require you to use other languages to implement performance critical sections as much. The primary advantage that Python has in my eyes is: there are a lot of libraries. The reason why there are a lot of libraries written in Python? I think it's because Python is the number 1 language taught to people that aren't specifically pursuing computer science / engineering or something in a closely related field.
zahlman•2mo ago
And for numerical stuff it's absolutely possible to completely trash performance by naively assuming that C/Rust/Fortran etc. will magically improve everything. I saw an example in a talk once where it superficially seemed obvious that the Rust code would implement a much more efficient (IIRC) binary search (at any rate, some sub-linear algorithm on an array), but making the data available to Rust, as a native Rust data structure, required O(n) serialization work.
lunias•2mo ago
Interesting... I didn't know that. So they should be able to get similar results in Python then?
> absolutely possible to completely trash performance by naively assuming
Yeah, of course we'd need a specific benchmark to compare results. It totally depends on the problem that you're trying to solve.
zahlman•2mo ago
I'm making PAPER (https://github.com/zahlman/paper) which is intended to prove as much, while also filling some under-served niches (and ignoring or at least postponing some legacy features to stay small and simple). Although I procrastinated on it for a while and have recently been distracted with factoring out a dependency... I don't want to give too much detail until I have a reasonable Show HN ready.
But yeah, a big deal with uv is the caching it does. It can look up wheels by name and find already-unpacked data, which it hard-links into the target environment. Pip unpacks from the wheel each time (which also entails copying the data rather than doing fast filesystem operations), and its cache is an HTTP cache, which just intercepts the attempt to contact PyPI (or whatever other specified index).
Python offers access to hard links (on systems that support them) in the standard library. All the filesystem-related stuff is already implemented in C under the hood, and a lot of the remaining slowness of I/O is due to unavoidable system calls.
Another big deal is that when uv is asked to precompile .pyc files for the installation, it uses multiple cores. The standard library also has support for this (and, of course, all of the creation of .pyc files in CPython is done at the C level); it's somewhat naive, but can still get most of the benefit. Plus, for the most part the precompiled files are also eligible for caching, and last time I checked even uv didn't do that. (I would not be at all surprised to hear that it does now!)
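Both pieces really are in the stdlib; a rough sketch of the uv-style tricks (paths and the fallback behavior are my assumptions):

```python
import compileall
import os
import shutil
import tempfile

def place(cached_path, target_path):
    # uv-style install step: hard-link out of the cache when the
    # filesystem allows it, fall back to copying otherwise.
    try:
        os.link(cached_path, target_path)
    except OSError:
        shutil.copy2(cached_path, target_path)

# Bytecode precompilation, also stdlib. For a big tree you'd
# parallelize with one worker process per core:
#   compileall.compile_dir(site_packages, workers=0, quiet=1)
tree = tempfile.mkdtemp()
with open(os.path.join(tree, "mod.py"), "w") as f:
    f.write("x = 1\n")
compileall.compile_dir(tree, quiet=1)  # sequential here for brevity
```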
> It totally depends on the problem that you're trying to solve.
My point was more that even when you have a reasonable problem, you have to be careful about how you interface to the compiled code. It's better to avoid "crossing the boundary" any more than absolutely necessary, which often means designing an API explicitly around batch requests. And even then your users will mess it up. See: explicit iteration over Numpy/Pandas data in a Python loop, iterative `putpixel` with PIL, any number of bad ways to use OpenGL bindings....
lunias•2mo ago
Yeah, I get it. I see the same thing pretty often... The loop itself is slow in Python so you have APIs that do batch processing all in C. Eventually I think to myself, "All this glue code is really slowing down my C." haha