People typically live only once, so I want to make the best use of my time. Thus I prefer to write (prototype) in Ruby or Python before considering a move to a faster language. Often the move is not worth it: at home, if a Java executable takes 0.2 seconds to delete 1000 files and the Ruby script takes 2.3 seconds, I really don't care, even more so as I may be multitasking with tons of tabs open in KDE Konsole anyway. For a company doing business, though, speed may matter much more.
It is a great skill to be able to maximize for speed. Ideally I'd love to have that in the same language. I haven't found one that really manages to bridge the "scripting" world with the compiled world. Every time someone tries it, the language is just awful in its design. I am beginning to think it is just not possible.
But why not simply write the code that needs to be fast in C and then call it from Ruby?
But there is stuff where you immediately know you want a real program, and it is likely to involve things like pure algorithms, protocols, and binary file formats.
Because often that's a can of worms and because people are not as good with C as they think they are, as evidenced by plenty of CVEs and the famous example of quadratic performance degradation in parsing a JSON file when the GTA V game starts -- something that a fan had to decompile and fix themselves.
For scripting these days I tend to author small Golang programs if bash gets unwieldy (which it quickly does; get to 100+ lines of script and you start hitting plenty of annoyances). Seems to work really well, plus Golang has community libraries that emulate various UNIX tools and I found them quite adequate.
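To give a sense of scale, here's a sketch of the kind of small, throwaway Go program I mean (a hypothetical example, standard library only; the file name and the `.tmp` pattern are made up):

```go
// cleantmp.go: delete every *.tmp file under a directory.
// The kind of job I'd otherwise write as a bash one-liner that slowly grows warts.
package main

import (
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

func main() {
	root := "."
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	// Walk the tree and remove anything ending in .tmp.
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() && filepath.Ext(path) == ".tmp" {
			fmt.Println("removing", path)
			return os.Remove(path)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```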
But back to the previous comments, IMO both bash and Ruby are quite fine for basic scripting... if you don't care about startup time. I do care in some of my workflows, hence I made scripts that pipe various Golang / Rust programs to empower my flow. But again, for many tasks this is not needed.
I actually hadn't heard this story. Is the gamedev world still averse to using proper modern C++ with basic tools like std::map and std::vector (never mind whatever else they've added since C++11 or so when I stopped paying attention)? Or else how exactly did they manage to mess it up? (And how big are the JSON files that this really matters, anyway?)
> IMO both bash and Ruby are quite fine for basic scripting... if you don't care about startup time.
`time bash -c ''` is only a couple of milliseconds on my system. The slow thing about a bash script, usually, is how much of the work you do by shelling out to other commands.
There were HN discussions at the time as well.
In this case I believe they should have just vendored a small JSON parser in the game. It's a big blob of .exe and .dll files, why not? Found it weird but then again, we have all made mistakes that we later cringed at.
Meanwhile I got fired because I could not keep up with impossible demands (which was acknowledged by multiple people in the company, but they did not want to burn political capital by going against the decision maker, a petty tyrant who could not handle disagreement and fired me the first time I said what he was doing was not okay).
Anyway, I got a bit bitter. :)
But in general, to me that story is embarrassing.
From the things that have been coming out since YJIT has been in development, and that the core team members have been showing, that's not necessary. Methods written in pure Ruby outperform C libraries called from Ruby due to a variety of factors.
What is your definition of "everything"? It seems to not include computation on a thing known as a computer.
Yes, thank you! Worth emulating.
By comparison:
> A characteristic of these systems spanning so many orders of magnitude is that it is very frequently the case that one of the things your system will be doing is in fact head-and-shoulders completely above everything else your system should be doing, and if you have a good sense of your rough orders of magnitudes from experience, it should be generally obvious to you where you need to focus at least a bit of thought about optimization, and where you can neglect it until it becomes an actual problem.
If you hit a button that's supposed to do something (e.g. "send email" or "commit changes") and the page loads too fast, say in 20ms, a lot of users panic because they think something is broken.
So if the dialog closes in 20ms it likely means the message was queued internally by the email client, and then I would be worried that the queue will not be processed for whatever reason.
The file copy dialog in modern Windows versions also has (had) this weird disconnect between the progress it's reporting and what it's actually doing. It seems very clear that one thread is copying and one is updating the UI, and the communication between the two seems oddly delayed and inaccurate.
The progress reporting is very bizarre, and sometimes the copying doesn't seem to start immediately. It feels markedly flaky.
For example, having a faster-spinning progress wheel makes users feel like the task is completed faster even if the elapsed time is the same.
I disagree with that: the choice of framework doesn't impact just the request/response lifecycle, it is crucial to the overall efficiency of the system, because frameworks lead their users down a more or less performant path. Frameworks are not just HTTP servers.
Choosing a web framework also marries you to a language, hence the performance ceiling of your application will be tied to how performant that language is. Taking the article's example, as your application grows and more and more code is in the hot path, you can very easily get into a space where requests that took 50ms now take 500ms.
Though there are a lot of unfortunate "truths" in Java programming that seem to encourage malignant abstraction growth, such as "abstractions are free" and "C2 will optimize that for you". It's technically kinda mostly true, but you write better code if you think polymorphism is terribly slow, like the C++ guys tend to do.
When functions are first-class objects, so much of the GoF book just dissipates into the ether. You don't have a "Command pattern", you just have a function that you pass as an argument. You don't have a "Factory pattern", you just have a function that you call to create an instance of some other thing. (And you don't even think about whether your "factory" is "abstract".) And so on and so forth. There is value in naming what you're doing, but that kind of formalism is an architectural smell — a warning that what you're doing could be more obvious, to the point where you don't notice doing it for long enough to think of a name.
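As a sketch (Go here, but any language with first-class functions reads about the same; all names are hypothetical):

```go
package main

import "fmt"

// A "command" is just a function value; no Command interface,
// no ConcreteCommand classes.
type command func()

type buffer struct{ lines []string }

// A "factory" is just a function that returns the thing you want.
func newBuffer() *buffer { return &buffer{} }

func main() {
	makeBuffer := newBuffer // hand this around wherever a "factory" is wanted

	b := makeBuffer()
	b.lines = append(b.lines, "hello")

	// Queue up some "commands": closures capturing whatever state they need.
	queue := []command{
		func() { fmt.Println("line count:", len(b.lines)) },
		func() { fmt.Println("first line:", b.lines[0]) },
	}
	for _, cmd := range queue {
		cmd()
	}
}
```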
And Mel thinks the same thing about opcodes.
Beware.
That is not the benchmarks game website.
Here's the current benchmarks game website — https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Here are some startup warmup measurements —
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
This sort of complicated analysis doubles as another example of the difficulty of context-free "fast" and "slow" labels. Is Go "fast"? For a general programming language, yes, though not the fastest. If you reserve "fast" for C/C++/Rust, then no it is not fast. Is it fast compared to Python, though? Yes, it'll knock your socks off if you're a Python programmer even with just a single thread, let alone what you can do if you can get some useful parallel processing going.
Be specific.
Ask: Faster than what … to do what?
I hate having to mull over the pros and cons of Rust for the 89th time when I know that if I make a service in Golang I'll be called in 3 months to optimise it. But multiple times now I have swallowed the higher complexity and slower initial pace of Rust just so I don't have to go debug the mess that a few juniors left while trying to be clever in a Golang codebase.
In other contexts I'm a huge proponent of validating that your language is fast enough. There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution. Exceptions include "we were a startup at the time and experienced rather substantial growth" and the rare cases where technology X is just that much faster for some particular reason... though probably not being a "scripting language" as nowadays I'm not particularly convinced they're all that much faster to develop with past the first week, but something more like "X had a high-level but slow library that did 90% of what we needed but when we really, really needed that last 10% we had no choice but to spend person-years more time getting that last 10% of functionality, so we went with Y anyhow for the speed".
> There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution.
The language X was probably a good solution at first. Then the company started to increase its product surface or acquired enterprise customers. Now you have new workloads that were not considered and the language is no longer suited for it.
Most likely a decision is made to not introduce a second language to the company just for these new workloads as that complicates the architecture, not to mention hiring and managing, so you stay with language X and try to make do. Now you have language X doing more than it is suited for and response times often increase due to that.
This isn't really a case of startup growing pains, just that you cannot know ahead of time every application your software will have. You can choose a "mostly fast for all use cases" language and bet that your applications will fit those general use cases; this means you win small but also lose small.
I would contest even that. Most of the time it's a fight or flight response by the devs, meaning that they just go with whatever they are most comfortable with.
In the previous years I made good money from switching companies away from Python and Rails, to Elixir and Golang. The gains were massive and maintainability also improved a lot.
Of course, this is not advocacy for these technologies in particular. "Use the right tool for the job" is a popular adage for good reasons. But my point is: people don't use the right tool for the job as often as many believe. Mostly it's gut feelings and familiarity.
And btw I am not trashing on overworked CTOs opting for Python because they never had the time to learn better web tech. I get their pain and sympathise with it a lot. But the failing of most startup CTOs that I observed was that they fell into control mania and micro-management as opposed to learning to delegate and trust. Sadly that too is a function of being busy and overworked so... I don't know. I feel for them but I still want better informed tech decisions being made. At my age and experience I am extremely tired and weary of seeing people make all the same mistakes every time.
It’s just a better fit when you’re not quite sure what you’re building. You just gain more on the 99% of projects that never go anywhere than you lose on the one that you end up trying to turn into a real product. So calling them better web tech assumes a lot about the development process that isn’t guaranteed.
As I said though, I don't judge people who go by familiarity. But one should keep an open mind, and learning some of the modern languages (like Elixir) is much less work than many believe.
A better web tech in this case refers to having the potential to scale far above what Python can offer, plus a very good developer experience. To me those two are paramount.
It's also true that many projects will never hit that point. For those, Python is just fine. But in recent years I prefer to cover my bases, and I have not been disappointed by any of the 3 PLs above.
RE: your edit, Elixir's REPL allows modifying the app in-place but I have not worked with Python in a long time and it might have that as well. Can't remember. Also you can temporarily change an app in production which made fixing certain elusive bugs almost trivial, many times. As much as I love Golang and Rust they got nothing on Elixir's ability to fix your app literally in real time. Then when you are confident in the fix, you make the actual code change, merge it and deploy.
Sure Elixir is fine to work with, the thing is in the back of my mind I’m thinking it’s way more likely to end up embedding Python libraries in Elixir code than the reverse. It’s those little bits of friction that I’m avoiding because the start of a project is play. Soon enough the perfectionist side of me may get involved, until then the goal is to maximize fun so I actually start.
Anyway I get your standpoint and even somewhat agree, it just doesn’t work for me.
In 1999 I'd agree with you completely, but static languages have gotten a lot better.
There are some cases where libraries may entirely dominate such a discussion, e.g., if you know Ruby on Rails and you have a problem right up its alley then that may blow away anything you can get in a static language. But for general tasks I find static languages get an advantage even in prototyping pretty quickly nowadays. What extra you pay generally comes back many times over in the compiler's instant feedback.
And for context, I would have called Python my favorite language from about 2000 to probably 2015 or so. This preference wasn't from lack of familiarity. Heck, I don't even regret it; if I had 2000 to 2015 to do all over again I'm not sure I'd do much differently. Static languages kind of sucked for a good long time.
> It is completely normal for web requests to need more than 5 milliseconds to run. If you’re in a still-normal range of needing 50 milliseconds to run, even these very slow frameworks are not going to be your problem.
The thing is that it apparently does make a huge difference. At least doing CRUD web stuff, my calibration for Scala (so pretty high-level functional code) is to expect under 1 ms of CPU time per request. The only time I've ever seen something in the 50-100 ms range was when working with Laravel. 5 ms is what I'd expect as the total response time for e.g. a paginated API returning 100+ records with a bunch of optional filters.
I'm unlikely to get bottlenecked on well written and idiomatic code in a slower framework. But I'm much more likely to accidentally do something very inefficient in such a framework and then hit a bottleneck.
I also think the difference in ergonomics and abstraction are not that huge between "slow" and "fast" frameworks. I don't think ASP.NET Core for example is significantly less productive than the web frameworks in dynamic languages if you know it.
Even if you find a slow function that constitutes 20% of the runtime and optimize the living hell out of it to cut out 20% of its execution time, guess what: your program is now only about 4% faster.
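(For the arithmetic, that's just Amdahl's law with a fraction p = 0.2 of the runtime reduced by 20%:

$$T_{\text{new}} = \bigl((1 - 0.2) + 0.2 \times 0.8\bigr)\, T_{\text{old}} = 0.96\, T_{\text{old}},$$

i.e. you shave roughly 4% off the total.)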
Pretty often you have a hot path that looks like a matmul routine that does X FMAs, a physics step that takes Y matmuls, a simulation that takes Z physics steps, and an optimizer that does K simulations. As a result, estimating performance across 10 orders of magnitude is just adding the logs of 4 numbers, which pretty well works out as "count up the digits in X·Y·Z·K, don't get to 10", and that is perfectly manageable to intuit.
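To make that concrete with made-up numbers: say X = 10^3 FMAs per matmul, Y = 10^2 matmuls per physics step, Z = 10^3 steps per simulation, and K = 10^1 simulations. Then

$$\log_{10}(X \cdot Y \cdot Z \cdot K) = 3 + 2 + 3 + 1 = 9,$$

so about 10^9 FMAs total: nine digits, within the heuristic. Add two more orders of magnitude anywhere in that chain and you're over it.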
This in a way highlights the knowledge gap that exists in American manufacturing. Physical parts are designed to work in terms of cycles, which can span both decades and milliseconds. Engines in particular need to work in terms of milliseconds and decades, but there are other vehicle parts such as airbags, pumps, and steering and suspension components that need to be designed for massive orders of magnitude.
But a lot of software engineering goes into building tools, libraries, frameworks, and systems, and even "application" code may be put to uses very distant from the originally envisioned one. And in these contexts, performance relative to the "speed of light" - the highest possible performance for a single operation - can be a very useful concept. Something "slow" that is 100x off the speed of light may be more than fast enough in some circumstances but a huge problem in others. Something "very fast" that is 1.01x the speed of light is very unlikely to be a big problem in any application. And this is true whether the speed of light for the operation in question is 1ns or 1min.
I honestly don't know if async makes this easier or harder. It makes it easier to write sections of code that may have to wait for several things. It seems to make it less likely to write code that will kick off several things that can then be acted on independently when they arrive.
Yes, because there's usually context. To use his cgo example, cgo is slow compared to C->C and Go->Go function calls.
In web development, arguing about Go-to-Go vs cgo call times is probably inconsequential.
Latency is not interchangeable with throughput because, if your single 8-core server needs to serve 200 HTTP requests per second, you need to spend less than 40 core-milliseconds per request on average, no matter whether the HTTP clients are 1ms away or 1000ms away.
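The arithmetic behind that 40: your 8 cores give you 8 × 1000 = 8000 core-milliseconds of compute per wall-clock second, and

$$\frac{8000\ \text{core-ms/s}}{200\ \text{requests/s}} = 40\ \text{core-ms per request}.$$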
For good, an example that perhaps I should lift into the essay itself is probably more useful than an explanation. A year or two or so back, there was some article about the details of CGo or something like that. In the comments there was someone who was being quite a jerk about how much faster and better Python's C integration was. This person made several comments and was doing the whole "reply to literally everyone who disagrees" thing, insulting the Go designers, etc., until finally someone put together the obvious microbenchmark and lo, Go was something like 25% faster than Python. Not blazingly faster, but being faster at all rather wrecked the thesis. Nor would it particularly matter that "this was a microbenchmark and those don't prove anything": clearly the belief was that CGo was something like an order of magnitude slower, if not more, so even a single microbenchmark where Go won was enough to prove the point.
While the being-a-jerk bit was uncalled for, I don't blame the poster for the original belief. Go programmers refer to CGo as "slow". Python programmers refer to their C integration as "fast". It is a plainly obvious conclusion from such characterizations that the Python integration is faster than Go's.
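For concreteness, the thing being timed in such a microbenchmark is just the cost of crossing the Go-to-C boundary over and over; a minimal sketch (hypothetical, not the actual benchmark from that thread) looks something like:

```go
package main

/*
// A trivial C function, so the loop measures almost nothing but call overhead.
static int add(int a, int b) { return a + b; }
*/
import "C"

import (
	"fmt"
	"time"
)

func main() {
	const n = 1_000_000
	start := time.Now()
	var sum int64
	for i := 0; i < n; i++ {
		// Each iteration crosses the Go->C boundary once; that per-call
		// overhead is what "CGo is slow" usually refers to.
		sum += int64(C.add(C.int(i), C.int(1)))
	}
	perCall := time.Since(start) / n
	fmt.Printf("sum=%d, ~%v per cgo call\n", sum, perCall)
}
```

The Python side of such a comparison would presumably be the equivalent loop calling a trivial C function through ctypes or a C extension; the point is only that the per-call overhead gets measured the same way on both sides.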
Only someone being far, far more careful with their uses of "fast" and "slow" than I am used to seeing in programming discussions would pick up on the mismatch in contexts there. As such, I don't think that's a particularly good context. People who use it do not seem to have a generally unified scale of "fast" and "slow" that is even internally consistent, but rather a mishmash of relatively inconsistent time scales (and that's not "relatively inconsistent" as in "sort of inconsistent" but "inconsistent relative to each other" [1]), thus making "fast" and "slow" observably useless to compare between any of them.
For useful, I would submit to you that unless you are one of the rare exceptions that we read about with those occasional posts where someone digs down to the very guts of Windows to issue Microsoft a super precise bug report about how it is handling semaphores or something, no user has ever come up to you and said that your software is fast or slow because it uses CGo, or any equivalent statement in any other language. That's not an acceptance criterion of any program at a user level. It doesn't matter if "CGo is slow" if your program uses it twice. The default context you are alluding to is a very low level engineering consideration at most but not something that is on its own fast or slow.
A good definition of fast or slow comes from somewhere else, and maybe after a series of other engineering decisions may work its way down to the speed of CGo in that specific context. 99%+ of the time, the performance of the code will not get worked down to that level. We are blinded by the exceptions when it happens but the vast majority of the time it does not.
By this, I mean it is an engineering mistake, albeit a very common one, to obsess in this "default context" about whether this technology or that technology is fast or slow. Programmers do this all the time. It is a serious, potentially project-crashing error. You need to start in the contexts that matter and work your way down as needed, only rarely reaching this level of detail at all. As such, this "default context" should be discarded out of your mind; it really only causes you trouble and failure as you ignore the contexts that really matter.
[1]: Of the various changes to English as she is spoken over the last couple of centuries, one of my least favorites is how a wide variety of useful words that used to have distinct meanings are now just intensifiers.