To be clear, this seems like a cool project and I don't want to be too negative about it, but I just think this was an entirely foreseeable outcome, and the number of people excited about this JIT project when it was announced shows how poorly a lot of people understand what goes into making a language fast.
Is this in the article? I don't see Python's semantics mentioned anywhere as a symptom (but I only skimmed).
> shows how poorly a lot of people understand what goes into making a language fast.
...I'm sorry but are you sure you're not one of these people? Some facts:
1. JS is just as dynamic and spaghetti as Python, and I hope we're all aware that it has some of the best JITs out there;
2. Conversely, C++ has many "optimizing compiler[s]" and they're not all magically great by virtue of compiling a statically typed, rigid language like C++.
Anything can change at any time in Smalltalk.
But you’re not wrong in general. Even for Python there’s PyPy, with a JIT ~3x faster than CPython.
Also note that even in that regard, Java happens to be more dynamic than people think: while the syntax is C++-like, the platform semantics are more akin to Smalltalk/Objective-C, which is why a JIT with such a background was a great addition.
    ifFalse: alternativeBlock
        "Answer the value of alternativeBlock. Execution does not actually
        reach here because the expression is compiled in-line."
        ^alternativeBlock value
people really don't know enough about this to be talking about it with such confidence...
There's lots of Python code out there that relies on not using slots. If you're making a JIT, you can't assume that all code is using slots.
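For example (a hypothetical sketch of the kind of pattern that has to keep working; the class names are made up):

    class Config:
        # No __slots__, so every instance carries a __dict__ and can grow
        # arbitrary attributes at runtime.
        def __init__(self, name):
            self.name = name

    cfg = Config("prod")
    cfg.retries = 3                  # attribute invented on the fly; very common in the wild
    cfg.__dict__["debug"] = True     # direct __dict__ manipulation also has to keep working

    class SlottedConfig:
        __slots__ = ("name",)        # fixed layout: no per-instance __dict__
        def __init__(self, name):
            self.name = name

    slotted = SlottedConfig("prod")
    try:
        slotted.retries = 3
    except AttributeError:
        # A JIT can exploit this rigidity when it's present, but it can't assume it.
        print("no __dict__ to grow")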
It was not universal. But it was very common and at least plausibly a majority view, so this idea wasn't just some tiny minority view either.
I consider this idea falsified now, pending someone actually coming up with a JIT/compiler/whatever that achieves this goal. We've poured millions upon millions of dollars into the task and the scripting languages still are not as fast as C or static languages in general. These millions were not wasted; there were real speedups worth having, even if they are somewhat hard on RAM. But they have clearly plateaued well below "C speed" and there is currently no realistic chance of that happening anytime soon.
Some people still have not noticed that the idea has been falsified and I even occasionally run into someone who thinks Javascript actually is as fast as C in general usage. But it's not and it's not going to be.
Instead we got parentheses around print.
I think what always ends up failing here is that, as others have stated, they won't make breaking API changes. In particular, those in charge of driving Python forward are extremely hesitant to break the C API for fear of losing the packages that have made Python so popular.
I would imagine that if the leadership were willing to put in the elbow grease to help those key packages through the changes when they happen, they could do it, but I understand that it's not always that simple.
To be very pedantic, the problem is not that these are dynamic languages _per se_, but that they were designed with semantics unconcerned with performance. As such, retrofitting performance can be extremely challenging.
As a counterexample of fast and dynamic: https://julialang.org/ (of course, you pay the price in other places)
I agree with your comment overall, though.
JITs are really only ideal for request-processing systems, in which a) memory is abundant b) the same code paths run over and over and over again, and c) good p99 latency is usually the bar.
In contrast, in user facing apps, you usually find that a) memory is constrained b) lots of code runs rarely or in some cases once (e.g. the whole start-up path) c) what would be considered good p99 latency for a server can translate to pretty bad levels of jank.
JITs can't do anything if the code you care about runs rarely and causes a frame skip every time you hit it, either because the JIT hasn't triggered yet due to too-few samples, or the generated code has been evicted from the JIT cache because you don't have memory to spare. And if you have code that needs to run fast _every_ time it runs, the easiest way to do that is to start with fast code already compiled and ready to execute.
We saw this play out when Android moved from Dalvik (JIT) to ART (AOT compilation). Apple figured this out years earlier.
Of course it's not that there are no highly performant apps built on JIT runtimes. But it's a significant headwind.
(Most of the above applies equally to tracing GC, btw)
All those languages are just as dynamic as Python, more so given the dynamic loading of code with image systems, across the network, with break-into-debugger/condition points and redo workflows.
Something else is going on.
Everyone knows Python is hard to optimize, that's why Mojo also gave up on generality. These claimed 20-30% speedups, apparently made by one of the chief liars who canceled Tim Peters, are not worth it. Please leave Python alone.
I don't remember the Faster CPython Team claiming JIT with a >50% speedup should have happened two years ago, can you provide a source?
I do remember Mark Shannon proposed an aggressive timeline for improving performance, but I don't remember him attributing it to a JIT, and also the Faster CPython Team didn't exist when that was proposed.
> apparently made by one of the chief liars who canceled Tim Peters
Tim Peters still regularly posts on DPO so calling him "cancelled" is a choice: https://discuss.python.org/u/tim.one/activity.
Also, I really cannot think who you would be referring to on the Faster CPython Team; all the former members I am aware of largely stayed out of the discussions on DPO.
Seems like the development was funded by Shopify and they got a ~20% performance improvement. https://shopify.engineering/ruby-yjit-is-production-ready
A similar experience in the Python community is that Microsoft funded "Faster CPython" and they made Python 20-40% faster.
I may not be completely accurate on this because there's not a whole lot of information on how Python is doing their thing so...
The way (I believe) Python is doing it is to take code templates and stitch them together (copy & patch compilation) to create an executable chunk of code. If, for example, one were to take the Python bytecode and just stitch all the code chunks together, all you can realistically expect to save is the instruction dispatch, which the compiler should make really fast anyway. That leaves you at parity with the interpreter, since each code chunk is inherently independent and the compiler can't do its magic across the entire sequence. Basically this is just inlining the bytecode operations.
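As a rough illustration of what those chunks look like (my reading of the approach, not the actual CPython machinery): disassembling a trivial function shows the per-opcode units that get templated; copy & patch pastes one precompiled machine-code template per opcode back to back, so only the dispatch between them disappears.

    import dis

    def f(a, b):
        return a * b + 1

    # Each line of output is one bytecode op. An interpreter dispatches between
    # them; a copy-and-patch JIT pastes a precompiled machine-code template for
    # each op in sequence, so the work inside each template is unchanged and
    # only the jump to the next handler goes away.
    dis.dis(f)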
To make a JIT compiler really excel you'd need to do something like take all the individual operations of each individual opcode and lower that to an IR and then optimize over the entire method using all the bells and whistles of modern compilers. As you can imagine this is a lot more work than 'hacking' the compiler into producing code fragments which can be patched together. Modern compilers are really good at these sorts of things and people have been trying to make the Python interpreter loop as efficient as possible for a long time so there's a big hurdle to overcome here.
I (or more accurately, Claude) have been writing a bytecode VM, and the dispatch loop is basically just a pointer dereference and a function call, which is about as fast as you can get. OK, theoretically that's how it works; there's also a check to make sure the opcode is within range, since the compiler part is still being worked on and it's good for debugging, but foundationally that's how it works.
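In sketch form (a toy Python version of that loop, with made-up opcodes, not the actual VM):

    OP_PUSH, OP_ADD, OP_PRINT = 0, 1, 2

    def op_push(vm, arg): vm["stack"].append(arg)
    def op_add(vm, _):    vm["stack"].append(vm["stack"].pop() + vm["stack"].pop())
    def op_print(vm, _):  print(vm["stack"].pop())

    HANDLERS = [op_push, op_add, op_print]

    def run(code):
        vm = {"stack": []}
        for opcode, arg in code:
            if not 0 <= opcode < len(HANDLERS):   # the debugging range check
                raise ValueError(f"bad opcode {opcode}")
            HANDLERS[opcode](vm, arg)             # dereference + call: the whole dispatch

    run([(OP_PUSH, 2), (OP_PUSH, 3), (OP_ADD, None), (OP_PRINT, None)])  # prints 5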
From what I've gleaned from the literature, the real key to making something like copy & patch work is super-instructions. You take common patterns, like MULT+ADD, and mash them together so the C compiler can do its magic. This was maybe mentioned in the copy & patch paper, or perhaps they only talked about specialization based on types; I don't actually remember.
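Something like this, as a toy stack-machine sketch (my own illustration, not anything from the paper):

    # Two separate ops: two dispatches plus pushing/popping an intermediate value.
    def op_mult(stack): stack.append(stack.pop() * stack.pop())
    def op_add(stack):  stack.append(stack.pop() + stack.pop())

    # Super-instruction: one dispatch, no intermediate stack traffic, and one
    # handler body the C compiler can optimize as a unit (e.g. into a fused
    # multiply-add for floats).
    def op_mult_add(stack):
        b, a = stack.pop(), stack.pop()
        stack.append(stack.pop() + a * b)

    stack = [2.0, 3.0, 4.0]
    op_mult_add(stack)
    print(stack)            # [14.0], same result as running op_mult then op_add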
So, yeah, if you were just competing against a basic tree-walking interpreter, then copy & patch would blow it out of the water, but C compilers and the Python interpreter have both had millions of person-hours put into them, so that's really tough competition.
I haven't checked, but I wouldn't be surprised if more Python versions contained breaking changes than not.
Aside from the `async` keyword (experience with which seems like it may have driven the design of "soft keywords" for `match` etc.), what do you have in mind that's a language feature as opposed to a standard library deprecation or removal?
Yes, the bytecode changes with every minor version, but that's part of their attempts to improve performance, not a hindrance.
Smalltalk, Self, and Lisp are highly dynamic; their JIT research is the genesis of modern JIT engines.
For some strange reason, the Python community would rather learn C and call it "Python", instead of focusing on how languages that are just as dynamic managed this a few decades ago.
It's easy to dismiss our efforts, but Ruby is just as dynamic as Python, if not more so. It's also a very difficult language to optimize. I think we could have done the same for Python. In fact, the Python JIT people reached out to me when they were starting this project. They probably felt encouraged seeing our success. However, they decided to ignore my advice and go with their own unproven approach.
This is probably going to be an unpopular take, but building a good JIT compiler is hard and leadership matters. I started the YJIT project with 10+ years of JIT compiler experience and a team of skilled engineers, whereas AFAIK the Python JIT project was led by a student. It was an uphill battle getting YJIT to work well at first. We needed grit, and I pushed for a very data-driven approach so we could learn from our early failures and make informed decisions. Make of that what you will.
Yes, Python is hard to optimize. I still believe that a good JIT for CPython is very possible, but it needs to be done right. Hire me if you want that done :)
Several talks about YJIT on YouTube for those who want to know more: https://youtu.be/X0JRhh8w_4I
At first I thought their solution was really elegant. I have an appreciation for their approach, and I could have been captivated enough to choose it myself. But at this point I think this is a sunk-cost fallacy. The JIT is not close to providing significant improvements, and no one in the Faster CPython community seems willing to call the shot that the foundational approach may not be able to give optimal results.
I either hope to be wrong or hope that Faster CPython management has a better vision for the JIT than I do.
- Most of the work has just been plumbing. Int/float unboxing, smarter register allocation, free-threaded safety land in 3.15+.
- Most JIT optimizations are currently off by default or only trigger after a few thousand hits, and the JIT skips any bytecodes that look risky (profiling hooks, rare ops, etc.).
I really recommend this talk with one of the Microsoft Faster CPython developers for more details: https://www.youtube.com/watch?v=abNY_RcO-BU
I think if I were being paid to make CPython faster I'd spend at least a year changing how objects work internally. The object model's innards are simply too heavy as they stand. So eliminating the kinds of overheads that JITs eliminate (opcode dispatch, mainly) won't help much, since that isn't what the CPU spends its time on when running CPython (or so I would bet).
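To make "heavy" concrete (a small illustration of the object overhead, not a profile of where the time actually goes):

    import sys

    # Every value is a heap object carrying at least a refcount and a type
    # pointer, even trivial ones.
    print(sys.getsizeof(1))      # ~28 bytes on 64-bit CPython, vs 8 for a C long
    print(sys.getsizeof(1.5))    # ~24 bytes, vs 8 for a C double

    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    # Attribute access conceptually goes through dict machinery on the instance
    # and the class MRO -- much more work than a fixed-offset load in a C struct.
    print(p.__dict__)            # {'x': 1, 'y': 2}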
ggm•2d ago
At this point it's a great didactic tool and a passion project, surely? Or it has advantages in other dimensions like runtime size, debugging, and .pyc coverage, or in thread-safe code, or ...
teruakohatu•4h ago
Unoptimised JIT < optimised interpreter (at least in this instance)
They are working on it presumably because they think there will eventually be speedups in general, or at least for certain popular workloads.
taeric•4h ago
Still, to directly answer the first question: even if there aren't obvious performance improvements immediately, if folks want to work on this, I see no reason not to explore it. If we are lucky, we find improvements we didn't expect.
adrian17•3h ago
My understanding is that the basic copy-and-patch approach without any other optimizations doesn't actually give that much. The difference between an interpreter running opcodes A,B,C and a JIT emitting machine code for the opcode sequence A,B,C is very little: the CPU will execute roughly the same instructions either way, and the only difference is that the JIT avoids doing an op dispatch between each op - which is already not that expensive thanks to jump threading in the interpreter. Meanwhile the JIT adds a possible extra cost if you ever need to jump from the JIT back to the fallback interpreter.
But what the JIT allows is to codegen machine code corresponding to more specialized ops that wouldn’t be that beneficial in the interpreter (as more and smaller ops make it much worse for icaches and branch predictors). For example standard CPython interpreter ops do very frequent refcount updates, while the JIT can relatively easily remove some sequences of refcount increments followed by immediate decrements in the next op.
Or maybe I misunderstood the question, then in other words: in principle copy-and-patch’s code generation is quite simple, and the true benefits come from the optimized opcode stream that you feed it that wouldn’t have been as good for the interpreter.
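A toy version of that kind of refcount peephole (made-up micro-op names, nothing from CPython's actual optimizer):

    def elide_adjacent_refcounts(ops):
        """Drop an INCREF immediately followed by a DECREF of the same value.
        (Toy rule: the real pass must prove nothing in between can observe or
        change the count.)"""
        out, i = [], 0
        while i < len(ops):
            if (i + 1 < len(ops)
                    and ops[i][0] == "INCREF"
                    and ops[i + 1] == ("DECREF", ops[i][1])):
                i += 2                    # cancel the pair
            else:
                out.append(ops[i])
                i += 1
        return out

    trace = [
        ("LOAD", "tmp"),
        ("INCREF", "tmp"),   # produced by one op...
        ("DECREF", "tmp"),   # ...and immediately consumed by the next
        ("ADD", None),
        ("STORE", "z"),
    ]
    print(elide_adjacent_refcounts(trace))   # the INCREF/DECREF pair is gone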
taeric•2h ago
That my intuition is wrong here doesn't shock me, I should add. It was still a surprise and it will get me to update my idea on what the interpreter is doing.
moregrist•3h ago
This will almost certainly outperform a straight translation to poorly optimized machine code.
Compilers are structured in conceptual (and sometimes distinct) layers. For a classic statically-typed language with only compile-time optimizations, the compiler front-end will parse the language into an abstract syntax tree (AST), either via a parse tree or directly, and then convert the AST into the first of what may be several intermediate representations (IRs). This is where a lot of optimization is done.
Finally, the last IR is lowered to assembly, which includes register allocation and some other (peephole) optimization techniques. This is separate from the IR manipulation so you don't have to write separate optimizers for different architectures.
There are aspects of a tracing JIT compiler that are quite different, but it will still use IR layers to optimize and have architecture-dependent layers for generating machine code.
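As a toy illustration of those layers (not modeled on any particular compiler): a small expression goes from AST to a linear IR, gets one mid-level optimization, and only then would be lowered to machine code.

    # AST for (2 * 3) + x, as nested tuples.
    ast = ("add", ("mul", ("const", 2), ("const", 3)), ("var", "x"))

    def to_ir(node, ir):
        """Front-end: flatten the AST into a linear, stack-style IR."""
        kind = node[0]
        if kind == "const":
            ir.append(("loadc", node[1]))
        elif kind == "var":
            ir.append(("loadv", node[1]))
        else:
            to_ir(node[1], ir)
            to_ir(node[2], ir)
            ir.append((kind,))

    def fold_constants(ir):
        """Middle-end: evaluate ops whose operands are both constants."""
        out = []
        for op in ir:
            if op[0] in ("add", "mul") and len(out) >= 2 \
                    and out[-1][0] == "loadc" and out[-2][0] == "loadc":
                b, a = out.pop()[1], out.pop()[1]
                out.append(("loadc", a * b if op[0] == "mul" else a + b))
            else:
                out.append(op)
        return out

    ir = []
    to_ir(ast, ir)
    print(ir)                    # unoptimized IR
    print(fold_constants(ir))    # [('loadc', 6), ('loadv', 'x'), ('add',)]
    # A back-end would then lower this to machine code: pick registers,
    # emit instructions, run peephole passes.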
MobiusHorizons•2h ago
I don't know if that's exactly how it works for this particular effort, but that would be my expectation.
pizlonator•28m ago
Adding more optimizations improves things from there.
But the point is, a JIT can be a speedup just because it isn’t an interpreter (it doesn’t dynamically dispatch ops).