This isn't a bad thing; I don't think Python has to be or should be the fastest language in the world. But it's interesting to me seeing Python getting adopted for a purpose it wasn't suited for (high performance AI computing). Given how slow it is, people seem to think there's a lot of room for performance improvements. Take this line for instance:
> The free-threading interpreter disables the global interpreter lock (GIL), a change that promises to unlock great speed gains in multi-threaded applications.
No, not really. I mean, yeah, you might get some speed gains, but the chart shows us that if you want "great" speed gains you have two options: 1) JIT compile, which gets you an order of magnitude faster, or 2) switch to a statically compiled language, which gets you two orders of magnitude faster.
But there doesn't seem to be a world where they can tinker with the GIL or optimize python such that you'll approach JIT or compiled perf. If perf is a top priority, Python is not the language for you. And this is important because if they change Python to be a language that's faster to execute, they'll probably have to shift it away from what people like about it -- that it's a dynamic, interpreted language good for prototyping and gluing systems together.
Large pool of mediocre Python developers that can barely string a function together in my experience.
Which was true, but maybe not the strongest argument. Why not use a faster language in the first place?
But it's different now. There are huge classes of problems where PyTorch, JAX & co. are the only options that don't suck.
Good luck competing with python code that uses them on performance.
Well for the obvious reason that there isn't really anything like a Jupyter notebook for C. I can interactively manipulate and display huge datasets in Python, and without having to buy a Matlab license. That's why Python took off in this area, really
Can you elaborate? I've been using the Python REPL for more than two decades now, and I've never found it to be "shoddy". Indeed, in pretty much every Python project I work on, one of the first features I add for development is a standard way to load a REPL with all of the objects that the code works with set up properly, so I can inspect them.
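For illustration, here's a minimal sketch of the kind of bootstrap script I mean; the myapp module and object names are hypothetical, not from any real project:

    # devshell.py -- run with: python -i devshell.py
    # so the interpreter drops into a REPL with these objects already set up.
    from myapp.db import connect            # hypothetical project modules
    from myapp.models import User, Order

    session = connect("dev.sqlite3")         # whatever setup the project needs
    users = session.query(User).all()        # preload the objects I usually inspect

    print(f"REPL ready: session, User, Order, and {len(users)} users loaded")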
Another example: navigating this history is done line by line instead of using whole inputs.
It's just bare minimum effort - probably gnu readline piped directly into the interpreter or something.
I think they did improve it a lot very recently by importing the REPL from some other Python interpreter but I haven't upgraded to use that version yet so I don't know how good it is now.
That is more or less how the REPL originally was implemented. I think there's more under the hood there now.
I still don't think what you describe qualifies as "shoddy". There are certainly limitations to the REPL, but "shoddy" to me implies that it's not really usable. I definitely would not agree with that.
Usually it was accompanied by saying that the time needed to write code is often more important than the time it takes to run, which is also often true.
All that said, jupyter is probably part of python's success, although I'm not the only one who actively avoids it and views it as a bit of a code smell.
It’s not impossible, but neither is it the sort of thing you want to encourage.
Furthermore writing large programs in pure assembly is not really feasible, but writing large programs in C++, Go, Rust, Java, C#, Typescript, etc. is totally feasible.
C compilers weren't up to snuff; that is why books like those from Michael Abrash exist.
Depends what you mean; if you preclude using targeted ASM in your C, I think some hot loops can be much more than 2x slower.
Of course programs globally written in assembly largely don't make sense.
If you're really lucky you have a small hot part of the code and can move just that to another language (a la Pandas, Pytorch, etc.). But that's usually only the case for numerical computing. Most Python code has its slowness distributed over the entire codebase.
I recently ported a Python program to Rust and it took me much less time the second time, even though I write Rust more slowly per-line. Because I knew definitively what the program needed.
And if even that is too much, optimizing the Python or adding Cython to a few hot loops is less difficult.
Porting larger programs is rarely tractable. You can tell that because several large companies have decided that writing their own Python runtimes that are faster is less effort (although they all eventually gave up on that as far as I know).
Those are actually pretty good bets, better than most other technological and business assumptions made during projects. After all, a high percentage of projects, perhaps 95%, are either short term or fail outright.
And in my own case, anything I write that is in the 5% is certain to be rewritten from scratch by the coding team, in their preferred language.
And in my experience rewrites are astonishingly rare. That's why Dropbox uses Python and Facebook uses PHP.
You could rewrite that in Rust and it wouldn't be any faster. In fact, a huge chunk of the common CPU-expensive stuff is already a thin wrapper around C or Rust, etc. Yeah, it'd be really cool if Python itself were faster. I'd enjoy that! It'd be nice to unlock even more things that were practical to run directly in Python code instead of swapping in a native code backend to do the heavy lifting! And yet, in practice, its speed has almost never been an issue for me or my employers.
BTW, I usually do the Advent of Code in Python. Sometimes I've rewritten my solution in Rust or whatever just for comparison's sake. In almost all cases, choice of algorithm is vastly more important than choice of language, where you might have:
* Naive Python algorithm: 43 quadrillion years
* Optimal Python algorithm: 8 seconds
* Rust equivalent: 2 seconds
Faster's better, but the code pattern is a lot more important than the specific implementation.
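As a toy sketch of that gap (not actual AoC code), compare a naive exponential Fibonacci with a memoized one in the same language:

    from functools import lru_cache

    def fib_naive(n):                 # exponential time -- the "quadrillion years" shape
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_fast(n):                  # linear time -- same language, different algorithm
        if n < 2:
            return n
        return fib_fast(n - 1) + fib_fast(n - 2)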
But this current quest to make Python faster is precisely because the sluggishness is noticeable for the task it's being used for most at the moment. That 6 second difference you note between the Optimal Python and the optimal Rust is money on the table if it translates to higher hardware requirements or more server time. When everything is optimal and you could still be 4x faster, that's a tough pill to swallow if it means spending more $$$.
You do understand that's a different but equivalent way of saying, "If you care about performance, then Python is not the language for you.", don't you?
https://news.ycombinator.com/item?id=45524485
Particularly this part is relevant to the Python discussion:
What is Julia's central conceit? It aims to solve "the two language" problem, i.e. the problem where prototyping or rapid development is done in a dynamic and interactive language like Python or MATLAB, and then moved for production to a faster and less flexible language like Rust or C++.
This is exactly what the speaker in the talk addresses. They are still using Julia for prototyping, but their production use of Julia was replaced with Rust. I've heard several more anecdotal stories of the exact same thing occurring. Here's another high profile instance of Julia not making it to production:
https://discourse.julialang.org/t/julia-used-to-prototype-wh...
Julia is failing at its core conceit.
So that's the question I have right now: what is Python supposed to be? Is it supposed to be the glue language that is easy to use and binds together a system made from other languages? Or is it trying to be what Julia is, a solution to the two language problem? Because it's not clear Julia itself has actually solved that.

The reason I bring this up is because there's a lot of "cake having/eating" floating around these types of conversations -- that it's possible to be all the things, without a healthy discussion of what the tradeoffs are in going that direction, and what that would mean for the people who are happy with the way things are. These little % gains are all Python is going to achieve without actually asking the developer to sacrifice their development process in some way.
I'm not going to even go into the comp chem simulations I've been running, or that about 1/3 the stuff I do is embedded.
I do still use python for web dev, partly because as you say, it's not CPU-bound, and partly because Python's Django framework is amazing. But I have switched to rust for everything else.
No one's really developed an ecosystem for a more performant language that can match it, and that's all it needs to assert dominance.
A lot of slow parsing tends to get grouped in with IO, and this is where Python can be most limiting.
In some cases. Are you looking up a single indexed row in a small K-V table? Yep, slow. Are you generating reports on the last 6 years of sales, grouped by division within larger companies? That might be pretty fast.
I'm not sure why you'd even generalize that so overly broadly.
In 300ms, a report generator could go through ~100M rows at least (on a single core).
And the implicit assumption in the comment I made earlier was, of course, not about a 100M-row scan. If there was any confusion, I am sorry.
There's a metric boatload of abstractions between sending a UTF-8 query string over the packet-switched network and receiving back a list of results. 300ms suddenly starts looking like a smaller window than it originally appears.
Saying a DB query is too long by giving an arbitrary number is like saying a rope is too long. That’s solely dependent on what you’re doing with it. It’s literally impossible to say that X is too long unless you know what it’s used for.
I guess you didn't notice where he talked about running numpy?
Maybe it is just due to not being as familiar with how to properly set up a Python project, but every time I have had to do something in a Django or FastAPI project it is a mess of missing types.
How do you handle that with modern Python? Or is it just a limitation of the language itself?
And you're now forced to spend time hunting down places for micro-optimizations. Or worse, you end up with a weird mix of Cython and Python that can only be compiled on the developer's machine.
This is an eternal conversation. Years ago, it was assembler programmers laughing at inefficient C code, and C programmers replying that sometimes they don’t need that level of speed and control.
Meanwhile Python is just as slow today as it was 30 years ago (on the same machine).
When compiled languages became popular again in the 2010s there was a renewed effort into ergonomic compiled languages to buck this trend (Scala, Kotlin, Go, Rust, and Zig all gained their popularity in this timeframe) but there's still a lot of code written with the two language pattern.
Those libraries didn't spring out of thin air, nor were they always there.
People badly wanted to write and interface in Python; that's why you have all these libraries with substantial code in another language, yet research and development didn't just shift to that language.
TensorFlow is a C++ library with a Python wrapping. PyTorch has supported a C++ interface for some time now, yet virtually nobody actually uses TensorFlow or PyTorch in C++ for ML R&D.
If Python were fast enough, most would be fine, probably even happy, to ditch the C++ backends and have everything in Python, but the reverse isn't true. The C++ interface exists, and no one is using it. C++ is the replaceable part of this equation. Nobody would really care if Rust were used instead.
> You could rewrite that in Rust and it wouldn't be any faster.
I was asked to rewrite some NumPy image processing in C++, because NumPy worked fine for 1024px test images but balked when given 40 Mpx photos.
I cut the runtime by an order of magnitude for those large images, even before I added a bit of SIMD (just to handle one RGBX-float pixel at a time, nothing even remotely fancy).
The “NumPy has uber fast kernels that you can't beat” mentality leads people to use algorithms that do N passes over N intermediate buffers, that can all easily be replaced by a single C/C++/Rust (even Go!) loop over pixels.
Also reinforced by “you can never loop over pixels in Python - that's horribly slow!”
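A rough sketch of the shape of the problem (numbers made up): each whole-array expression below allocates and walks another full-size intermediate, where a compiled per-pixel loop would do a single pass:

    import numpy as np

    img = np.random.rand(2000, 3000, 3).astype(np.float32)   # stand-in for a big photo

    # Three passes, two throwaway buffers:
    tmp = img * 1.2
    tmp = tmp + 0.05
    out = np.clip(tmp, 0.0, 1.0)

    # In-place variants avoid the allocations, but still walk memory three times:
    np.multiply(img, 1.2, out=img)
    np.add(img, 0.05, out=img)
    np.clip(img, 0.0, 1.0, out=img)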
Personally though I find it easier to just drop into C extensions at the point that NumPy becomes a limiting factor. They're so easy to do and it lets me keep the Python usability.
TensorFlow is a C++ library with Python bindings. PyTorch has supported a C++ interface for some time now, yet virtually nobody uses C++ for ML R&D.
The relationship between Python and C/C++ is the inverse of the usual backend/wrapper cases. C++ is the replaceable part of the equation. It's a means to an end. It's just there because Python isn't fast enough. Nobody would really care if some other high-perf language took its place.
Speed is important, but C++ is even less suited for ML R&D.
Can't wait for PyPyPi.
Run as "python3 server.py -s 10000000 -n"
I'm not sure I'd ever use any other web framework than Django going forward, and I'm not using half of it (including the admin). I think Litestar is great by the way, Django is just so easy to produce with.
Hope I'm able to do the same for someone one day :)
Thank you for the work you put in.
I'm now a SWE with just a marketing degree!
Yeah, our education system sucks that much.
But just now checking out the Mega Flask Tutorial, wow looks pretty awesome.
When I started my first job as a Data Scientist, it helped me deploy my first model to production. Since then, I’ve focused much more on engineering.
You’ve truly started an amazing journey for me.
Thank you. :)
Touché! I see sibling comments assuming I was being sarcastic (without mandatory sarcasm tag!), but what I was really hoping for was more backstory like this. I guess it depends on how you read things in your head.
Off-topic, but I absolutely loathe the new Flask logo. The old one[0] has this vintage, crafty feel, and the new one[1] looks like it was made by a starving high schooler experimenting with WordArt.
[0] - https://upload.wikimedia.org/wikipedia/commons/3/3c/Flask_lo...
[1] - https://flask.palletsprojects.com/en/stable/_images/flask-na...
1. Original logo has country charm and soul.
2. Replaced with a modern soulless logo.
3. Customer outrage!
4. Company (or open source project) comes to its senses and returns to old logo.
https://media.nbcboston.com/2025/08/cracker-barrel-split.jpg
(n.b. The Cracker Barrel Rebellion is sometimes associated with MAGA. I am very far from that, but I have to respect when people of any political stripe get something right.)
https://www.wsj.com/articles/bot-networks-are-helping-drag-c...
More real-world is that I know tons of friends/relatives in the South and I don't know of even ONE that liked the redesign.
https://fortune.com/2025/09/18/sardar-biglari-war-against-cr...
Bots aren't necessarily aimed to promote "glorious motherland" directly, there are probably hundreds of people on a payroll searching for easy, popular targets to wreak havoc.
It's more interesting to me how, without fail, a comment always pops in at the mention of Cracker Barrel to say "those were bots, fellow human."
I wonder how much this differs from the percentage for any trending topic on X?
They had both new and “classic” coexisting for a while.
Fun fact: so did most focus groups and (I think?) blind taste tests when it was just presented as a new drink, but they tended to be horrified by the idea of it actually replacing classic Coke. The problem with that switch was mostly psychological / cultural, not chemical.
Also, Diet Coke, which remains quite popular, is still based on the New Coke formula except with the sweetener swapped out. The no-calorie version of classic Coke is Coke Zero. The Coca-Cola Company has been working to increase Coke Zero's popularity, and it is now much more popular than it used to be, but I think Diet Coke continues to be more popular than Coke Zero even now.
I also hate the new ones, and most of what modern design pumps out nowadays.
The old logo is classic and bespoke. I could recall it from memory. It leaves an impression.
The new one looks like an unfunded 2005-era dorm room startup. XmlHttpRequests for sheep herders.
The old logo is much better.
Neither is the new one, because you have to be a madman to show this hideous thing anywhere.
Who the fuck cares? That never stopped Flask from becoming a well-beloved, widely adopted, tidy framework.
And it's trivial to "resize and present" the old logo on "assets that aren't rectangular"...
>Flask isn't a country podunk restaurant
Yeah, apparently by the new logo it's a generic mall fast food chain restaurant for people with zero taste
Here's Sysco, generic mall fast food distributor:
https://www.youtube.com/watch?v=rXXQTzQXRFc
https://logos-world.net/wp-content/uploads/2024/01/Sysco-Log...
You're measuring it by irrelevant measures. This is like when all the terrible Western game devs criticised Elden Ring because it didn't have "good UX".
Thinking about hand-rolled web services, I usually imagine either a stealth alcoholic's metal flask or a mad scientist's Erlenmeyer flask.
Just like in the old one. That is not strange in the slightest, it is a very common feature of typefaces that the ascenders of lower case letters overshoot the height of uppercase. That is one of the ways to distinguish an uppercase i from a lower case L.
> And then the mark. I dont get any of this.
They look to be following the Material Design logo trend that was in fashion a while ago. Following trends in logo design is never a good idea, it makes them look outdated soon.
The accessibility of this material and also the broader Python ecosystem is truly incredible. After reflecting on this recently, I started finding ways to give back/donate/contribute.
one day of vibe coding
Edit: corrected typo in "typo".
That said, I wonder if GIL-less Python will one day enable GIL-less C FFI? That would be a big win that Python needs.
I'm pretty sure that is what freethreading is today? That is why it can't be enabled by default AFAIK, as several C FFI libs haven't gone "GIL-less" yet.
What do you mean exactly? C FFI has always been able to release the GIL manually.
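For example, a minimal ctypes sketch (library lookup assumes Linux/macOS; the manual route in a C extension would be Py_BEGIN_ALLOW_THREADS). Calls through ctypes.CDLL release the GIL for the duration of the C call, so other Python threads can run meanwhile; PyDLL is the variant that keeps holding it:

    import ctypes, ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library("m"))   # math library; adjust per platform
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]

    print(libm.cos(0.0))   # the GIL is released while the C function executes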
Python introduces another breaking change that also randomly affects performance, making it worse for large classes of users?
Why would the Python organisers want to do that?
PyPy's compatibility differences with CPython seem very minor in comparison: https://pypy.org/compat.html
The amount of time spent writing all the cffi stuff is the same as it takes to write an executable in C and call it from Python.
The only time cffi is useful is if you want to have that code be dynamic, which is a very niche use case.
It's worth noting that PyPy devs are in the loop, and their insights so far have been invaluable.
https://www.reddit.com/r/RedditDayOf/comments/7we430/donald_...
> [Donald Knuth] firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features
This is such a breath of fresh air in a world where everything is considered obsolete after like 3 years. Our industry has a disease, an insatiable hunger for newness over completeness or correctness.
There's no reason we can't be writing code that lasts 100 years. Code is just math. Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago? LOL dude get with the times, everyone uses Equately for their equations now. It was made by 3 interns at Facebook, so it's pretty much the new hotness." No, I don't think I will use "Equately", I think I'll stick to the tried-and-true idea that has been around for 3000 years.
Forget new versions of everything all the time. The people who can write code that doesn't need to change might be the only people who are really contributing to this industry.
Anyhoo, remarks like this are why the real ones use Typst now. TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.
Are you intentionally leaning into the exact caricature I'm referring to? "Real programmers only use Typstly, because it's the newest!". The website title for Typst when I Googled it literally says "The new foundation for documents". Its entire appeal is that it's new? Thank you for giving me such a perfect example of the symptom I'm talking about.
> TeX and family are stagnant, difficult to use, difficult to integrate into modern workflows, and not written in Rust.
You've listed two real issues (difficult to use, difficult to integrate), and two rooted firmly in recency bias (stagnant, not written in Rust). If you can find a typesetting library that is demonstrably better in the ways you care about, great! That is not an argument that TeX itself should change. Healthy competition is great! Addiction to change and newness is not.
> nobody uses infinitesimals for derivatives anymore, they all use limits now
My point is not that math never changes -- it should, and does. However, math does not simply rot over time, like code seems to (or at least we simply assume it does). Math does not age out. If a math technique becomes obsolete, it's only ever because it was replaced with something better. More often, it forks into multiple different techniques that are useful for different purposes. This is all wonderful, and we can celebrate when this happens in software engineering too.
I also think your example is a bit more about math pedagogy than research -- infinitesimals are absolutely used all the time in math research (see Nonstandard Analysis), but it's true that Calculus 1 courses have moved toward placing limits as the central idea.
Just in the same sense that CS does not age out. Most concepts stick, but I'm pretty sure you didn't go through Στοιχεία (The Elements) in its original version. I'm also pretty confident that most people out there who use many of the notions it holds and helped to spread have never laid eyes on a single copy of it, even in their native language.
This is like saying "you haven't read the source code of the first version of Linux". The only reason to do that would be for historical interest. There is still something timeless about it, and I absolutely did learn Euclid's postulates which he laid down in those books, all 5 of which are still foundational to most geometry calculations in the world today, and 4 of which are foundational to even non-Euclidean geometry. The Elements is a perfect example of math that has remained relevant and useful for thousands of years.
It might not be the best tagline, but that is most certainly not the entire appeal of Typst. It is a huge improvement over LaTeX in many ways.
All auto-differentiation libraries today are built off of infinitesimals via Dual numbers. Literally state of the art.
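For the curious, a tiny forward-mode sketch of the idea (a toy, not how any production library is implemented): a dual number carries a value plus an infinitesimal part, and ordinary arithmetic on it yields derivatives:

    class Dual:
        def __init__(self, val, eps=0.0):
            self.val, self.eps = val, eps
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.eps + other.eps)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.val * other.eps + self.eps * other.val)
        __rmul__ = __mul__

    def derivative(f, x):
        return f(Dual(x, 1.0)).eps

    print(derivative(lambda x: x * x + 3 * x, 2.0))   # prints 7.0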
>There's no reason we can't be writing code that lasts 100 years.
There are many reasons this is most likely not going to happen. Code, despite best efforts to achieve separation of concerns (in the best case), is a highly contextual piece of work. Even with a simple program with no external libraries, there is a full compiler/interpreter ecosystem that forms a huge dependency. And the hardware platforms they abstract from are also moving targets. Change is the only constant, as we say.
>Imagine having this attitude with math: "LOL loser you still use polynomials!? Weren't those invented like thousands of years ago?
Well, that might surprise you, but no, they weren't. At least, they were not dealt with as they are taught and understood today in their most common contemporary presentation. When the Babylonians (c. 2000 BCE) solved quadratic equations, they didn't have anything near Descartes' algebraic notation connected to geometry, and there is a long series of evolutions in between, continuing to this day.
Mathematicians actually do make a lot of fancy innovative things all the time. Some fundamentals stay stable over millennia, yes. But some problems also stay unsolved for millennia until some outrageous move is made outside the standard.
That's actually a good moment to wonder at what an amazing thing they are, really.
Completely sidestepping any debate about the language design, ease of use, quality of the standard library, size of community, etc... one of its strengths these days is that standard code basically remains functional "indefinitely", since the standard is effectively frozen. Of course, this requires implementation support, but there are lots of actively maintained and even newer options popping up.
And because extensibility is baked into the standard, the language (or its usage) can "evolve" through libraries in a backwards compatible way, at least a little more so than many other languages (e.g. syntax and object system extension; notable example: Coalton).
Of course there are caveats (like true, performant async programming) and it seems to be a fairly polarizing language in both directions; "best thing since sliced bread!" and "how massively overrated and annoying to use!". But it seems to fit your description decently at least among the software I use or know of.
While writing "timeless" code is certainly an ideal of mine, it also competes with the ideals of writing useful code that does useful things for my employer or the goals of my hobby project, and I'm not sure "getting actual useful things done" is necessarily LISP's strong suit, although I'm sure I'm ruffling feathers by saying so. I like more modern programming languages for other reasons, but their propensity to make backward-incompatible changes is definitely a point of frustration for me. Languages improving in backward-compatible ways is generally a good thing; your code can still be relatively "timeless" in such an environment. Some languages walk this line better than others.
Another point for stability is about how much a runtime can achieve if it is constantly improved over decades. Look where SBCL, a low-headcount project, is these days.
We should be very vigilant and ask of every "innovation" whether it is truly one. I think it is fair to assume that anyone who has worked in this industry for decades would hold the opinion that most innovations are just fads, hype and resume-driven development - the rest could just as well be implemented as a library on top of something existing. The most progress we've had was in tooling (Rust, Go), which does not require language changes per se.
I think the frustrating part about modern stacks is not the overwhelming amount of novelty, it is just that it feels like useless churn and the solutions are still as mediocre or even worse than what we've had before.
In theory, yes. In practice, no, because code is not just math, it's math written in a language with an implementation designed to target specific computing hardware, and computing hardware keeps changing. You could have the complete source code of software written 70 years ago, and at best you would need to write new code to emulate the hardware, and at worst you're SOL.
Software will only stop rotting when hardware stops changing, forever. Programs that refuse to update to take advantage of new hardware are killed by programs that do.
Arguably, modern operating systems are all a sort of virtual machine emulator too. They emulate a virtual computer which has special instructions to open files, allocate memory, talk to the keyboard, make TCP connections, create threads and so on. This computer doesn't actually exist - it's just "emulated" by the many services provided by modern operating systems. That's why any Windows program can run on any other Windows computer, despite the hardware being totally different.
The real reason for software churn isn't hardware churn, but hardware expansion. It's well known that software expands to use all available hardware resources (or even more, according to Wirth's law).
It doesn't matter if x86 is backwards compatible if everything else has changed.
No code can last 100 years in any environment with change. That's the point.
We rewrite stuff for lots of reasons, but virtualization makes it easy enough to take our platforms with us even as hardware changes.
The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.
I never said it did; other ISAs have similar if not longer periods of backwards compatibility (IBM's Z systems architecture is backwards compatible with the System/360 released in 1964).
> The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.
I never mentioned Windows but it's ridiculous to imply its backwards compatibility is all on Microsoft. Show me a single example of a backwards-breaking change in x86 that Windows has to compensate for to maintain backwards compatibility.
I never said that. Windows was just an easy example.
>Show me a single example of a backwards-breaking change in x86 that Windows has to compensate for to maintain backwards compatibility.
- The shift from 16-bit to 32-bit protected mode with the Intel 80386 processor that fundamentally altered how the processor managed memory.
- Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.
- The shift to x86-64, which Microsoft had to compensate for with emulation and WOW64
And many more. That you think otherwise just shows how much effort has gone into it.
I said x86 has "over 30 years of backwards compatibility". The 80386 was released in 1985, 40 years ago :)
> Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.
This is the only breaking change in x86 that I'm aware of, and it's a rather light one, as it only affected programs relying on addresses wrapping at exactly 2^20. And, again, that was over 40 years ago!
> The shift to x86-64 that Microsoft had to compensate with emulation and WOW64
No, I don't think so. An x86-64 CPU starts in 32 bit mode and then has to enter 64 bit mode (I'd know, I spent many weekends getting that transition right for my toy OS). This 32 bit mode is absolutely backwards compatible AFAIK.
WOW64 is merely a part of Microsoft's OS design to allow 32 bit programs to do syscalls to a 64 bit kernel, as I understand it.
So pretty much none of the peripherals--including things like system memory and disk drives, do note--from a computer in 1995 can talk using any of the protocols a modern computer supports (save maybe a mouse and keyboard) and require compatibility adapters to connect, while also pretty much none of the software works without going through custom compatibility layers. And based on my experience trying to get a 31-year old Win16 application running on a modern computer, those compatibility layers have some issues.
"SATA" stands for "serial ATA", and has the same basic command set as the PATA from 1984 - bridge chips were widely used. And it all uses SCSI, which is also what USB Mass Storage Devices use. Or if you're feeling fancy, there's a whole SCSI-to-NVMe translation standard as well.
HDMI is fully compatible with single-link DVI-D, you can find passive adapters for a few bucks.
There's one port you forgot to mention: ethernet! A brand-new 10Gbps NIC will happily talk with an ancient 10Mbps NIC.
It might look different, but the pc world is filled with ancient technology remnants, and you can build some absolutely cursed adapter stacks. If anything, the limiting factor is Windows driver support.
You can always run code from any time with emulation, which gives the “math” the inputs it was made to handle.
Here’s a site with a ton of emulators that run in browser. You can accurately emulate some truly ancient stuff.
The backward-compat story is also oversold because, yes, baseline TeX is backward compatible, but I bet <0.1% of "TeX" documents don't use some form of LaTeX plus any number of packages... which sometimes break, at which point the stability of base TeX doesn't matter for actual users. It certainly helps for LaTeX package maintainers, but that doesn't matter to users.
Don't get me wrong, TeX was absolutely revolutionary and has been used for an insane amount of scientific publishing, but... it's not great code (for modern requirements) by any stretch.
Howeveeerrr.. it's not quite math when you break it down to the electronics level, unless you go really wild (wild meaning physics math). Take a breakdown of Python to assembly to binary that flips the transistors doing the thing. You can mathematically define that each transistor will be Y when the value of O is X(N); btw sorry, I can't think of a better way to define such a thing from mobile here. And go further by defining voltages to be applied, when and where, all mathematically.
In reality it's done in sections. At the electronic level, math defines your frequency, voltage levels, timing, etc.; at the assembly level, math defines what comparisons of values are to be made or what address to shift a value to and how to determine your output; lastly, your interpreter determines what assembly to use based on the operations you give it, and based on those assembly operations, e.g. an "if A == B then C" statement in code is actually a binary comparator that checks if the value at address A is the same as the value at address B.
You can get through a whole stack with math, but much of it has been abstracted away into easy building blocks that don't require solving a huge math equation in order to actually display something.
You can even find mathematical data among datasheets for electronic components. They say (for example) over period T you can't exceed V volts or W watts, or to trigger a high value you need voltage V for period T but it cannot exceed current I. You can define all of your components and operations as an equation, but I don't think it's really done anymore as a practice; the complexity of doing so (as someone not building a CPU or any IC) isn't useful unless you're working on a physics paper or quantum computing, etc. etc.
I understand some of your frustration, but often the newness is in response to a need for completeness or correctness. "As we've explored how to use the system, we've found some parts were missing/bad and would be better with [new thing]". That's certainly what's happening with Python.
It's like the Incompleteness Theorem, but applied to software systems.
It takes a strong will to say "no, the system is Done, warts and missing pieces and all. Deal With It". Everyone who's had to deal with TeX at any serious level can point to the downsides of that.
Math is continually updated, clarified and rewritten. 100 years ago was before the Bourbaki group.
Consider how vastly more accessible programming has become from 1950 until the present. Imagine if math had undergone a similar transition.
Mathematical notation evolved to its modern state over centuries. It's optimized heavily for its purpose. Version numbers? You're being facetious, right?
Yes, it evolved. It wasn't designed.
>Version numbers?
Without version numbers, it has to be backwards-compatible, making it difficult to remove cruft. What would programming be like if all the code you wrote needed to work as IBM mainframe assembly?
Tau is a good case study. Everyone seems to agree tau is better than pi. How much adoption has it seen? Is this what "heavy optimization" looks like?
It took hundreds of years for Arabic numerals to replace Roman numerals in Europe. A medieval mathematician could have truthfully said: "We've been using Roman numerals for hundreds of years; they work fine." That would've been stockholm syndrome. I get the same sense from your comment. Take a deep breath and watch this video: https://www.youtube.com/watch?v=KgzQuE1pR1w
>You're being facetious, right?
I'm being provocative. Not facetious. "Strong opinions, weakly held."
If there’s one thing that mathematical notation is NOT, it’s backwards compatible. Fields happily reuse symbols from other fields with slightly or even completely different meanings.
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo... has lots of examples, for example
÷ (division sign)
Widely used for denoting division in Anglophone countries, it is no longer in common use in mathematics and its use is "not recommended". In some countries, it can indicate subtraction.
~ (tilde)
1. Between two numbers, either it is used instead of ≈ to mean "approximatively equal", or it means "has the same order of magnitude as".
2. Denotes the asymptotic equivalence of two functions or sequences.
3. Often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes.
4. Standard notation for an equivalence relation.
5. In probability and statistics, may specify the probability distribution of a random variable. For example, X∼N(0,1) means that the distribution of the random variable X is standard normal.
6. Notation for proportionality. See also ∝ for a less ambiguous symbol.
Individual mathematicians even are known to have broken backwards compatibility. https://en.wikipedia.org/wiki/History_of_mathematical_notati...
Euler used i to represent the square root of negative one (√-1), although he earlier used it as an infinite number
Even simple definitions have changed over time, for example:
- how numbers are written
- is zero a number?
- is one a number?
- is one a prime number?
Most of my python from that era also works (python 3.1)
The problem is not really the language syntax, but how libraries change a lot.
Btw, while equations and polynomials are conceptually old, our contemporary notation is much younger, dating to the 16th century, and many aspects of mathematical notation are younger still.
Even C/C++ introduces breaking changes from time to time (after decades of deprecation though).
There’s no practical reason why Python should commit to a 100+ year code stability, as all that comes at a price.
Having said that, Python 2 -> 3 is a textbook example of how not to do these things.
C in contrast generally versions the breaking changes in the standard, and you can keep targeting an older standard on a newer compiler if you need to, and many do
But you have a point, and it's not just "our industry", it's society at large that has abandoned the old in favour of incessant forgetfulness and distaste for tradition and history. I'm by no means a nostalgic but I still mourn the harsh disjoint between contemporary human discourse and historical. Some nerds still read Homer and Cicero and Goethe and Ovid and so on, but if you use a trope from any of those that would have been easily recognisable as such by Europeans for much of the last millennium, you can be quite sure that it won't generally be recognised today.
This also means that a lot of early and mid-modern literature is partially unavailable to contemporary people, because it was traditional to implicitly use much older motifs and riff on them when writing novels and making arguments, and unless you're aware of that older material you'll miss out on it. E.g. for Don Quixote, most would need an annotated version which points out and makes explicit all the references and riffing, basically destroying the jokes by explaining them upfront.
Not sure this is the best example. Mathematical notation evolved a lot in the last thousand years. We're not using roman numerals anymore, and the invention of 0 or of the equal sign were incredible new features.
Rust is indeed quite fast. I thought NodeJS would do much better tbh, although it's not bad. I'd be interested to learn what's holding it back, because I've seen many implementations where V8 can get C++-like performance (I mean, it's C++ after all). Perhaps there's a lot of overhead in creating/destroying temporary objects.
Edit: fixed a couple of typos.
I don’t think that follows. Python is written in C, but that doesn’t mean it can get C-like performance. The sticking point is in how much work the runtime has to do for each chunk of code it has to execute.
(Edit: sorry, that’s in reply to another child comment. I’m on my phone on a commute and tapped the wrong reply button.)
If you are writing that sort of code, then it does apply; the speed for that code is real. It's just that the performance is much more specific than people think it is. In general V8 tends to come in around the 10x-slower-than-C for general code, which means that in general it's a very fast scripting language, but in the landscape of programming languages as a whole that's middling single-thread performance and a generally bad multiprocessing story.
In dev cycles most code is short-running.
That said, of all the reasons stated here, that's why I don't primarily use PyPy (lots of libraries are still missing).
Python libraries used to brag about being pure Python and backwards compatible, but during the push to get everyone on 3.x that went away, and I think it is a shame.
The project is moving into maintenance mode; if some folks want to get Python-famous, go support PyPy.
For public projects I default the shebang to use `env python3` but with a comment on the next line that people can use if they have pypy. People seem to rarely have it installed but they always have Python3 (often already shipped with the OS, but otherwise manually installed). I don't get it. Just a popularity / brand awareness thing I guess?
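To be concrete, something like this minimal sketch (the comment convention is just my own habit, nothing standard):

    #!/usr/bin/env python3
    # If you have PyPy installed, swap the line above for: #!/usr/bin/env pypy3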
In most cases where you do care about CPU performance, you're using numpy or scikit-learn or pandas or pytorch or tensorflow or nltk or some other Python library that's more or less just a wrapper around fast C, C++ or Fortran code. The performance of the interpreter almost doesn't matter for these use cases.
Also, those native libraries are a hassle to get to work with PyPy in my experience. So if any part of your program uses those libraries, it's way easier to just use CPython.
There are cases where the Python interpreter's bad performance does matter and where PyPy is a practical choice, and PyPy is absolutely excellent in those cases. They just sadly aren't common and convenient enough for PyPy to be that popular. (Though it's still not exactly unpopular.)
Also: there are some libraries that just don't work on pypy.
With PyPy not so much.
I say this because I think the teams working on free-threaded and JIT python maybe could have done a better job publicly setting expectations.
[0] Github slide deck https://github.com/faster-cpython/ideas/blob/main/FasterCPyt...
Didn't help that Microsoft axed several folks on that team too...
Only after the four-year period was over, during which they only delivered a 1.5x - 2x speedup instead of the projected 5x.
(mini unrelated rant. I think pi should equal 6.28 and tau should equal 3.14, because pi looks like two taus)
Ha. Undeniable proof that we had them backwards all along!
> Framework laptop running Ubuntu Linux 24.04 (Intel Core i5 CPU)
> Mac laptop running macOS Sequoia (M2 CPU)
For fun, I tried this in Raku:
(0, 1, *+* ... *)[40] #0.10s user 0.03s system 63% cpu 0.214 total
lol

Seriously, Python is doing great stuff to squeeze performance out of a scripting language. Realistically, Raku has fewer native libraries (although there is Inline::Python) and the compiler still has a lot of work to do to reach the same degree of optimisation (although one day it could compare).
EDIT: for those who have commented, yes you are correct … this is a “cheat” and does not seek to state that Raku is faster than Python - as I said Raku still has a lot of work to do to catch up.
Do you have the same hardware as the author or should one of you run the other's variant to make this directly comparable?
    def fibonacci():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a+b
And taking the 40th element. It's not comparable at all to the benchmark; that's deliberately an extremely slow method of calculating Fibonacci numbers for the purpose of the benchmark. For this version, it's so fast that the time is dominated by the time needed to start up and tear down the interpreter.

    $ python -m timeit "x = (1, 0); [x[0] for _ in range(40) if (x := (x[0] + x[1], x[0]))][-1]"
    50000 loops, best of 5: 4 usec per loop
(Or a "lazy iterator" approach:) $ python -m timeit --setup 'from itertools import islice, count' 'x = (1, 0); next(islice((x[0] for _ in count() if (x := (x[0] + x[1], x[0]))), 40, None))'
50000 loops, best of 5: 5.26 usec per loop
So for the majority of us folks: use what you love - the performance will come.
When doing bioinformatics we had someone update/rewrite a tool in java and it was so much faster. Went from a couple days to some like 4 hours of runtime.
Python certainly can be used in production (my experience maintaining some web applications in Java would make me reach for python/php/ruby to create a web backend speed be dammed). Python has some great libraries.
Reasons why Sun and Java failed:
Strategy over product. McNealy cast Java as a weapon of mass destruction to fight Microsoft, urging developers to "evangelize Java to fight Microsoft." That fight-first framing made anti-Microsoft positioning the goal line, not developer throughput.
Purity over pragmatism. Sun’s "100% Pure Java" program explicitly banned native methods and dependencies outside the core APIs. In practice, that discouraged bridges to real-world stacks and punished teams that needed COM/OS integration to ship. (Rule 1: "Use no native methods.")
"100% Pure Java" has got to be one of the worst marketing slogans in the history of programming languages, signaling absolutism, exclusion, and gatekeeping. And it was technically just as terrible and destructive an idea that held Java back from its potential as an inclusive integration, extension, and scripting language (especially in the web browser context, since it was so difficult to integrate, that JavaScript happened instead and in spite of Java).
Lua, Python, and even TCL were so much better and successful at embedding and extending applications than Java ever was (or still is), largely because they EMBRACED integration and REJECTED "purity".
Java's extremist ideological quest for 100% purity made it less inclusive and resilient than "mongrel" languages and frameworks like Lua, Python, TCL, SWIG, and Microsoft COM (which Mozilla even cloned as "XP/COM"), that all purposefully enabled easy miscegenation with existing platforms and libraries and APIs instead of insanely insisting everyone in the world rewrite all their code in "100% Pure Java".
That horrible historically troubling slogan was not just a terrible idea technically and pragmatically, but it it also evoked U.S. nativist/KKK's "100% Americanism", Nazi's "rassische Reinheit", "Reinhaltung des Blutes", and "Rassenhygiene", Fascist Italy's "La Difesa della Razza", and white supremacist's "white purity". It's no wonder Scott McNealy is such a huge Trump supporter!
While Microsoft courted integrators. Redmond pushed J/Direct / Java-COM paths, signaling "use Windows features from Java if that helps you deliver." That practicality siphoned off devs who valued getting stuff done over ideological portability.
Community as militia. The rhetoric ("fight," "evangelize") enlisted developers as a political army to defend portability, instead of equipping them with first-rate tooling and sanctioned interop. The result: cultural gatekeeping around "purity" rather than unblocking use cases.
Ecosystem costs. Tooling leadership slid to IBM’s aptly named Eclipse (a ~$40M code drop that became the default IDE), while Sun’s own tools never matched Eclipse’s pull: classic opportunity cost of campaigning instead of productizing.
IBM's Eclipse cast a dark shadow over Sun's "shining" IDE efforts, which could not even hold a candle to Microsoft's Visual Studio IDE that Sun reflexively criticized so much without actually bothering to use and understand the enemy.
At least Microsoft and IBM had the humility to use and learn from their competitor's tools, in the pursuit of improving their own. Sun just proudly banned them from the building, cock-sure there was nothing to learn from them. And now we are all using polyglot VSCode and Cursor, thanks to Microsoft, instead of anything "100% Pure" from Sun!
Litigation drain. Years of legal trench warfare (1997 suit and 2001 settlement; then the 2004 $1.6B peace deal) defended "100% Pure Java" but soaked time, money, and mindshare that could have gone to developer-facing capabilities.
Optics that aged poorly. The very language of "purity" in "100% Pure Java" read as ideological and exclusionary to many -- whatever Sun's presumed intent -- especially when it meant "rewrite in Java, don’t integrate." The cookbook literally codified "no native methods," "no external libraries," and even flagged Runtime.exec as generally impure.
McNealy’s self-aggrandizing war posture did promote Java’s cross-platform ideal, but it de-prioritized developer pragmatism -- stigmatizing interop, slow-rolling mixed-language workflows, and ceding tools leadership -- while burning years on lawsuits. If your priority was "ship value fast," Sun’s purity line often put you on the wrong side of the border wall.
And now finally, all of Java's remaining technical, ideological, and entrenched legacy enterprise advantages don't matter any more, alas, because they are all overshadowed by the unanthropomorphizable lawnmower that now owns it and drives it towards the singular goal of extracting as much profit from it as possible.
Go, Kotlin and Rust are just significantly more modern and better designed, incorporating the lessons from 90s languages like Python, Ruby and Java.
I couldn't find any note of it, so I would assume not.
It would be interesting to see how the tail call interpreter compares to the other variants.
fib(n-1) + fib(n-2)
isn’t a tail call—there’s work left after the recursive calls, so the tail call interpreter can’t optimize it.

And numpy is a) written in C, not Python, and b) not part of Python, so it didn't change when 3.14 was released. The goal was to evaluate the Python 3.14 interpreter. Not to say that it wouldn't be interesting to evaluate the performance of other things as well, but that is not what I set out to do here.
Fundamentally for example, if you're doing some operations on numpy arrays like: c = a + b * c, interpreted numpy will be slower than compiled numba or C++ just because an eager interpreter will never fuse those operations into an FMA.
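A minimal sketch of that contrast, assuming numba is installed (array sizes arbitrary): the NumPy line materializes a temporary for b * c before the add, while the jitted loop can be compiled into a single fused pass:

    import numpy as np
    from numba import njit

    a, b, c = (np.random.rand(1_000_000) for _ in range(3))

    c_numpy = a + b * c               # eager: a temporary array for b * c, then the add

    @njit
    def fused(a, b, c):
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            out[i] = a[i] + b[i] * c[i]   # one pass; the compiler is free to emit an FMA
        return out

    c_fused = fused(a, b, c)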
But then I think in some ways it's a much more accurate depiction of my use case. I mainly write monte-carlo simulations or simple scientific calculations for a diverse set of problems every day. And I'm not going to write a fast algorithm or use an unfamiliar library for a one-off simulation, even if the sim is going to take 10 minutes to run (yes I use scipy and numpy, but often those aren't the bottlenecks). This is for the sake of simplicity as I might iterate over the assumptions a few times, and optimized algorithms or library impls are not as trivial to work on or modify on the go. My code often looks super ugly, and is as laughably unoptimized as the bubble sort or fib(40) examples (tail calls and nested for loops). And then if I really need the speed I will take my time to write some clean cpp with zmq or pybind or numba.
If your actual load is 1% Python and 99% offloaded, the effect of a faster Python might not matter a lot to you, but to measure Python you kinda have to look at Python.
Interestingly I might have only ever used the time (shell) builtin command. GNU's time measuring command prints a bunch of other performance stats as well.
Quoting bypasses aliases and shell keywords (like `time`), so you get the external command:
$ 'time' -v -- echo hi
hi
Command being timed: "echo hi"
[...]
And that it gets within spitting distance of the standard build so fast is really promising too. That hopefully means the parts not compatible with it get flushed out soon-ish.
The big issue is what about all those C extension modules, some of them might require a lot of changes to work properly in a no-GIL world.
What about Lua and LuaJIT
Years ago, I even found Ruby to be faster than Python. This was back in the Ruby 2.0 / Python 3.5 days - I'd be interested to know if it's still the case.
There are other languages you can use to make stuff go fast. Python isn't for making stuff go fast. It's for rapid dev, and that advantage matters way more when you're already going to be slow due to waiting for network responses.
It isn't.
Not only that, it is a lot easier to hack on. I might be biased, but the whole implementation idea of PyPy seems a lot more sane.
This is the problem!
You have answered your own question.
Seriously, though. PyPy is 2-3 versions behind CPython (3.11 vs 3.14) and it's not even 100% compatible with 3.11. Libraries such as psycopg and lxml are not fully supported. It's a hard sell.
So why not move all the resources from CPython over to PyPy to close the feature gap faster and replace CPython entirely?
Since this is not happening I expect there to be serious reasons, but I fail to see them. This is what I ask for.
Yes.
First is startup time. A fast REPL cycle is a big advantage for development. From a business perspective, dev time is more expensive than compute time by orders of magnitude. Every time you make a change, you have to recompile the program. Meanwhile with regular Python, you can literally develop during execution.
Second is compatibility. Numpy and pytorch are ever evolving, and those are written as C extensions.
Third is LLMs. If you really want speed, Gemma 27B QAT running on a single 3090 can translate a Python codebase into C/C++ pretty easily. No need to have any additional execution layer. My friend at Amazon pretty much writes Java code this way - prototypes a bunch of stuff in Python, and then has an LLM write the Java code that's compatible with existing intra-Amazon Java templates.
If you for some reason do this, please keep the Python around so I can at least look at whatever the human was aiming at. It's probably also wrong, given they picked this workflow, but there's a chance it has something useful.
However, if the output quality is crap, then, well, maybe his creativity should not be rewarded. I've seen a hefty amount of Map<Object, Object> in Java, written primarily by JS developers.
Any kind of code generation that provides incredible productivity in the writing of software is kind of like saying you have a lot of money by maxing out your credit card. Maybe you can pay it back, maybe you can't. The fact that there is no mention of future debt is exactly the kind of thing that old men get suspicious about.
I'm not saying the old men are correct. I'm just pointing out the reason for the yelling.
If the result is great and maintainable code, great. I imagine it won't be, as no one has actually understood it even once.
170M python-3.6.15
183M python-3.7.17
197M python-3.8.20
206M python-3.9.24
218M python-3.10.19
331M python-3.11.14
362M python-3.12.12
377M python-3.13.8
406M python-3.14.0
Python 3.11 on Debian is around 21 MB installed size (python3.11-minimal + libpython3.11-minimal + libpython3.11-stdlib), not counting common shared dependencies like libc, ncurses, liblzma, libsqlite3, etc.
Looking at the embeddable distribution for Windows (32-bit), Python 3.11 is 17.5 MB unpacked, 3.13 is slightly smaller at 17.2 MB and 3.14 is 18.4 MB (and adds the _zstd and _remote_debugging modules).
See `docker run -it --rm -w /store ghcr.io/spack/all-pythons:2025-10-10`.
To be fair, the main contributors are tests and the static library.
Just looking at libpython.so
10M libpython3.6m.so.1.0
11M libpython3.7m.so.1.0
13M libpython3.8.so.1.0
14M libpython3.9.so.1.0
17M libpython3.10.so.1.0
24M libpython3.11.so.1.0
30M libpython3.12.so.1.0
30M libpython3.13.so.1.0
34M libpython3.14.so.1.0
The static library is likely large because of `--with-optimizations` enabling LTO (so smaller shared libs, but larger static libs).

Its problem is, IMO, compatibility. Long ago I wanted to run it on Yocto but something or other didn't work. I think this problem is gradually disappearing, but it could probably be solved far more rapidly with a bit of money and effort.
However, the JIT does make things much faster