Except your browser taking 180% of available ram maybe.
By the way, the world could also have some bug free software, if anyone could afford to pay for it.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently in view is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still in full effect. Everything he did to get the most out of CPUs was super relevant and cool, but it also became dated in a matter of a few years. One moment we were running Doom on simple 386 computers and the next we were running Quake and Unreal on a Pentium II with shiny new Voodoo GPUs. I actually had the Riva 128 as my first GPU, one of the first products Nvidia shipped, and it ran Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but most software these days has drifted too far toward slow and bloated.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
Ask the nice product owner to stop crushing me with their deadlines and I'll happily oblige.
It's not, because you multiply that 100% extra CPU time by all of an application's users, and only then do you arrive at the real extra cost.
And if you want to pick on "application", think of the widely used libraries and how much any missed optimization costs once they get into everything...
Maybe to you.
Meanwhile plenty of people are living paycheck-to-paycheck and literally cannot afford a phone, let alone a new phone and computer every few years.
How new do you think the CPU in your bank ATM or car's ECU is?
https://www.eetimes.com/comparing-tech-used-for-apollo-artem...
One of the tradeoffs of radiation hardening is increased transistor size.
Cost-wise it also makes sense - it’s a specialized, certified and low-volume part.
And to be clear, I love power chips. I remain very bullish about the architecture. But as a taxpayer reading this shit just pisses me off. Pork-fat designed to look pro-humanity.
Citation needed
> much less hiring fucking IBM
It's an IBM designed processor, what are you talking about?!
Ha! What's special about rad-hard chips is that they're old designs. You need big geometries to survive cosmic rays, and new chips all have tiny geometries.
So there are two solutions:
1. Find a warehouse full of 20-year old chips.
2. Build a fab to produce 20-year old designs.
Both approaches are used, and both approaches are expensive. (Approach 1 is expensive because as you eventually run out of chips they become very, very valuable and you end up having to build a fab anyway.)
There's more to it than just big geometries but that's a major part of the solution.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
Can you watch H.265 videos? That's the one limitation I regularly hit on my computer (that I got for free from some company, is pretty old, but is otherwise good enough that I don't think I'll replace it until it breaks). I don't think I can play videos recorded on modern iPhones.
The chips in everyones pockets do a lot of compute and are relatively new though.
I still don't see how one can classify a smartphone as a general-purpose computing device, even though they have as much computing power as a laptop.
Some of the specific embedded systems (like the sensors that feed back into the main avionics systems) may still be using older CPUs if you squint, but it's more likely a modern version of those older designs.
Doom on the Amiga, for example (many consider its absence a main factor in the Amiga's demise). Thirty years and a lot of optimization later, it finally arrived.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure the CPU cycles of their code, so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because all of them are using local disk inefficiently and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and no longer making money.
And older hardware consumes more power. That's the main driving factor behind server hardware upgrades, because you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that make my old hardware crawl. Or the software I use only supports the two most recent versions of macOS. Or Microsoft decides that your brand new CPU is no good for Win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized Electron app on your 5 year old computer.
All those situations you describe are also a choice made so that companies can make sales.
I'm not so sure they're that different though. I do think that in the end most boil down to the same problem: no emphasis or care about performance.
Picking a programming paradigm that all but incentivizes N+1 selects is stupid. An N+1 select is not an I/O problem, it's a design problem.
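To make the shape concrete, here is a minimal Rust sketch of an N+1 loop versus a batched fetch. The `load_posts` / `load_comments_for*` helpers are hypothetical stand-ins for whatever ORM or query layer is in use; the point is the loop structure, not the API.

    use std::collections::HashMap;

    struct Post { id: u64, title: String }
    struct Comment { post_id: u64, body: String }

    // N+1 shape: one query for the posts, then one more query per post.
    fn render_feed_n_plus_1(
        load_posts: impl Fn() -> Vec<Post>,
        load_comments_for: impl Fn(u64) -> Vec<Comment>,
    ) {
        for post in load_posts() {                      // 1 round trip
            let comments = load_comments_for(post.id);  // +N round trips
            println!("{}: {} comments", post.title, comments.len());
        }
    }

    // The fix is a design change, not faster I/O: fetch what the page
    // needs in one or two round trips and join in memory.
    fn render_feed_batched(
        load_posts: impl Fn() -> Vec<Post>,
        load_comments_for_posts: impl Fn(&[u64]) -> Vec<Comment>,
    ) {
        let posts = load_posts();
        let ids: Vec<u64> = posts.iter().map(|p| p.id).collect();
        let mut counts: HashMap<u64, usize> = HashMap::new();
        for c in load_comments_for_posts(&ids) {
            *counts.entry(c.post_id).or_insert(0) += 1;
        }
        for post in &posts {
            println!("{}: {} comments", post.title, counts.get(&post.id).unwrap_or(&0));
        }
    }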
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
That could end up being Electron (VS Code), though that would be a bit sad.
I'd bet on maybe ad hoc, AI-designed UIs that you click through, with a voice search for when you're confused about something.
I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.
Today a website is easier. But just like there's a very large percentage of people doing a great many things from their phone instead of tying themselves to a full-blown personal computer, there will be an increasing number of people who send their agents off to get things done. In that scenario, the user interface is further up the stack than a browser, if there's a browser as typically understood in the stack at all.
Of course, that would be suicide for the industry. But I'm not sure investors see that.
Cost of cyberattacks globally[1]: O($trillions)
Cost of average data breach[2][3]: ~$4 million
Cost of lost developer productivity: unknown
We're really bad at measuring the secondary effects of our short-sightedness.
[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...
[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day-to-day basis that are dog slow and should have been optimized (e.g. Slack, Jira), the core problem isn't really a lack of engineering capability in the industry to speed things up; it's just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.
No user actually wants abundance. They use few programs and would benefit if those programs were optimized.
Established apps could be optimized to the hilt.
But they seldom are.
No, all users just want the few programs which they themselves need. The market is not one user, though. It's all of them.
Yes but it's a different 'few programs' than 99% of all other users, so we're back to square one.
Really? Because while abstractions like that exist (e.g. web server frameworks, reactivity, SQL and ORMs, etc.), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. Those are usually in the domain/business application, and often not something that made anything quicker to develop, but instead created by a developer who just couldn't help themselves
Edit: and probably writing backends in Python or Ruby or JavaScript.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is a yet another abstraction.
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote calls. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- I'm not entirely sold on blaming the developers.
What, after all, is the difference between an `/etc/hosts` set of loop'd records vs. an ISP's dns -- as far as the software goes?
Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.
Why not log them to a file and cron a script to upload the data? Even if the feature request is nonsensical, you can architect a solution that respect the platform's constraints. It's kinda like when people drag in React and Next.js just to have a static website.
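A rough sketch of the local half of that idea (the file path and record format here are made up, and the batch upload is left to a cron job):

    use std::fs::OpenOptions;
    use std::io::Write;
    use std::time::{SystemTime, UNIX_EPOCH};

    // Append one locally-logged command per line; a cron/launchd job can
    // batch-upload and truncate this file on whatever schedule the
    // product people want. The path is illustrative.
    fn log_command(action: &str, value: i32) -> std::io::Result<()> {
        let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
        let mut f = OpenOptions::new()
            .create(true)
            .append(true)
            .open("/var/tmp/speaker-commands.log")?;
        writeln!(f, "{} {} {}", ts, action, value)
    }

    fn main() -> std::io::Result<()> {
        // The actual volume change goes straight to the speakers on the
        // LAN; only the record of it waits for the batch upload.
        log_command("set_volume", 42)
    }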
Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software, because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.
These kinds of myths get perpetuated by people who repeat them without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to JavaScript or Python because that's what they learned first.
Why stop at C++? Is that what you happen to be comfortable with? Couldn't you create even faster software if you went down another level? Why don't you?
No, and if you understood what makes software fast you would know that. Most software is allocating memory inside hot loops, and taking that out is extremely easy and can easily be a 7x speedup. Looping through contiguous memory instead of chasing pointers through heap-allocated variables is another 25x-100x speed improvement at least. This is all after switching from a scripting language, which is about 100x in itself if the language is Python.
It isn't about the instructions it is about memory allocation and prefetching.
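For the skeptical, a toy Rust sketch of the two changes being described; the exact multipliers above are workload-dependent, but the shape generalizes:

    // Two of the cheapest wins: hoist allocations out of the hot loop and
    // keep data contiguous instead of behind per-element heap pointers.

    struct Particle { x: f32, y: f32, vx: f32, vy: f32 }

    // Slow shape: a fresh Vec every frame, and boxed elements scattered
    // across the heap, so every access is a pointer chase / likely cache miss.
    fn step_slow(world: &Vec<Box<Particle>>) -> Vec<Box<Particle>> {
        let mut next = Vec::new();                   // allocation inside the hot path
        for p in world {
            next.push(Box::new(Particle {            // one heap allocation per particle
                x: p.x + p.vx, y: p.y + p.vy, vx: p.vx, vy: p.vy,
            }));
        }
        next
    }

    // Faster shape: one contiguous buffer, mutated in place, no per-frame
    // allocations. The hardware prefetcher can stream this; it mostly
    // can't with the boxed version.
    fn step_fast(world: &mut [Particle]) {
        for p in world.iter_mut() {
            p.x += p.vx;
            p.y += p.vy;
        }
    }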
You are probably a lazy or inexperienced engineer if you choose to work in C++.
In fact, there are optimizations available at the silicon level that are not available in assembly.
You are probably a lazy or inexperienced engineer if you choose to work in assembly.
The only slow (local) software I know of is LLVM and C++ compilers
Others are pretty fast
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than the 14-year-old Skyrim.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
The art direction, modelling and animation work is mostly fine; the worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting. That results in completely flat lighting for the vegetation, but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all).
Digital Foundry has an excellent video about the issues:
https://www.youtube.com/watch?v=p0rCA1vpgSw
TL;DR: the 'ideal hardware' for the Oblivion Remaster doesn't exist, even if you get the best gaming rig money can buy.
This also happens to many other UE5 games like S.T.A.L.K.E.R. 2: they try to push the graphics envelope with expensive techniques, and most people without expensive hardware have to turn the settings way down (even resorting to upscaling and framegen, which degrade the experience further when the starting point is already bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.
Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or rather, silliness aside, an attempt by studios to save resources by having the artists spend less time faking and optimizing effects and detail that can just be brute-forced by the engine.
In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.
I remember that UE4 also hyped a realtime GI solution which then was hardly used in real-world games because its performance hit was too big.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
Life on an entry or even mid level windows laptop is a very different world.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
Same thing happens with UI & Website design. When the designers and front-end devs all have top-spec MacBooks, with 4k+ displays, they design to look good in that environment.
Then you ship to the rest of the world which are still for the most part on 16:9 1920x1080 (or god forbid, 1366x768), low spec windows laptops and the UI looks like shit and is borderline unstable.
Now I don't necessarily think things should be designed for the lowest common denominator, but at the very least we should be taking into consideration that the majority of users probably don't have super high end machines or displays. Even today you can buy a brand new "budget" windows laptop that'll come with 8GB of RAM, and a tiny 1920x1080 display, with poor color reproduction and crazy low brightness - and that's what the majority of people are using, if they are using a computer at all and not a phone or tablet.
If you're on Mac, go install Network Link Conditioner and crank that download an upload speed way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.
I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
And FWIW this stuff is then cached. I hadn't clicked that setting area in a while (maybe the first time this boot?) and did get a brief gray box that then a second later populated with all the buttons and settings. Now every time I click it again it appears instantly.
And even if every piece of information takes a while to figure out, it doesn't excuse taking a second to even draw the UI. If checking Bluetooth takes a second, then draw the button immediately but disable interaction and show a loading icon, and when you get the Bluetooth information, update the button, and so on for everything else.
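A minimal sketch of that pattern, with a fake 800ms "Bluetooth query" and println! standing in for real UI rendering:

    use std::sync::mpsc;
    use std::thread;
    use std::time::Duration;

    enum Status { Loading, Ready(bool) }

    fn render(s: &Status) {
        match s {
            Status::Loading => println!("[ Bluetooth: ... ] (disabled)"),
            Status::Ready(on) => println!("[ Bluetooth: {} ]", if *on { "on" } else { "off" }),
        }
    }

    fn main() {
        // Draw the panel immediately with the control in a disabled
        // "loading" state, then patch in the real status when it arrives.
        let (tx, rx) = mpsc::channel();

        // Hypothetical slow query (e.g. asking the Bluetooth stack).
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(800));
            tx.send(Status::Ready(true)).unwrap();
        });

        let mut bluetooth = Status::Loading;
        render(&bluetooth);              // instant: greyed-out button
        if let Ok(s) = rx.recv() {
            bluetooth = s;
            render(&bluetooth);          // update once the answer lands
        }
    }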
And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?
And if we bothered keeping all that in memory, and kept using the CPU cycles to make sure it was actually accurate and up to date on the click six hours later, wouldn't people then complain about how obviously bloated it was? How is this not a constant battle of being unable to appease any critics until we're back at the Win 3.1 state of things with no Bluetooth devices, no WiFi networks, no dynamic changing or audio devices, etc?
And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
Clearly better. Most of the buttons should also work instantly, most of the information should also be available instantly. The button layout is rendered instantly, so I can already figure out where I want to click without having to wait one second even if the button is not enabled yet, and by the time my mouse reaches it it will probably be enabled.
> And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.
I've never seen the volume slider in Windows 98 take one second to render. Not even the start menu, which is much more complex, and which in Windows 11 often takes a second, and search results also show up after a random amount of time and shuffle the results around a few times, leading to many misclicks.
And if you don't remember the volume slider taking several seconds to render on XP you must be much wealthier than me or have some extremely rose colored glasses. I play around with old hardware all the time and get frustrated with the unresponsiveness of old equipment with period accurate software, and had a lot of decent hardware (to me at least) in the 90s and 00s. I've definitely experienced lots of times of the start menu painting one entry after the other at launch, taking a second to roll out, seeking on disk for that third level menu in 98, etc.
Rose colored glasses, the lot of you. Go use an old 386 for a month. Tell me how much more productive you are after.
I have to stay connected to VPN to work, and if I see VPN is not connected I click to reconnect.
If the VPN button hasn't loaded you end up turning on Airplane mode. Ouch.
I see this all the time with people who have old computers.
“My computer is really fast. I have no need to upgrade”
I press cmd+tab and watch it take 5 seconds to switch to the next window.
That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.
Maybe if you're in a purely text console doing purely text things 100% in memory it can feel snappy. But the moment you do anything graphical or start working on large datasets it's so incredibly slow.
I still remember trying to do photo editing on a Pentium II with a massive 64MB of RAM. Or trying to get decent resolutions scans off a scanner with a Pentium III and 128MB of RAM.
My older computers would completely lock up when given a large task to do, often for many seconds. Scanning an image would take over the whole machine for something like a minute per page! Applying a filter to an image would lock up the machine for several seconds, even for a much smaller image and a much simpler filter. The computer couldn't even play MP3s and keep a word processor responsive; if you really wanted to listen to music while writing a paper, you'd better pass the audio through from a CD, never mind streaming it from some remote location over an encrypted TCP stream with decompression.
These days I can have lots of large tasks running at the same time and still have more responsiveness.
I have fun playing around with retro hardware and old applications, but "fast" and "responsive" are not adjectives I'd use to describe them.
So the quality has gone backwards in the process of rewriting the app in the touch-friendly style. A lot of core Windows apps are like that.
Note that the Windows file system is much slower than Linux ext4; I don't know about Mac filesystems.
What is frustrating though that until relatively recently these devices would work fine with JS heavy apps and work really well with anything that is using a native toolkit.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack.
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on Teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen", but there's probably a real name out there for waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant; it should not take 300-500ms of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
Running the latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th gen i5, while also on a Teams call and with a ton of other things active on my machine.
This is also a machine with a lot of corporate security tools sapping a lot of cycles.
(This might also be a "new Outlook" vs "old Outlook" thing?)
I don't doubt it's happening to you, but I've never experienced it. And I'm not exactly using bleeding edge hardware here. A several year old i5 and a Ryzen 3 3200U (a cheap 2019 processor in a cheap Walmart laptop).
Maybe your IT team has something scanning every email on open. I don't know what to tell you, but it's not the experience out of the box on any machine I've used.
On the extreme, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened the laptop the closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.
On the website front - Facebook, Twitter, Airbnb, Reddit, most news sites - all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I'm not talking about features that I prefer, but as an example, if you load two links in Reddit in two different tabs, my experience is that it's 50/50 whether they'll both actually load or whether one gets stuck partway on the loading skeletons.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.
You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.
Do any of those do the indexing that cause the slowness? If not it's comparing apples to oranges.
My corporate vpn app is a disaster on so many levels, it’s an internally developed app as opposed to Okta or anything like that.
I would likewise say that your experience is not universal, and that in many circumstances the situation is much worse. My wife is running an i5 laptop from 2020 and her work intranet takes 60 seconds to load. Outlook startup and sync are measured in minutes, including mailbox fetching. You can say this is all not the app developers' fault, but the cruft that's installed on her machine is slowing things down by 5 or 10x, and that slowdown wouldn't be a big deal if the apps had reasonable load times in the first place.
There's native apps just as, if not more, complicated than VSCode that open faster.
The real problem is electron. There's still good, performant native software out there. We've just settled on shipping a web browser with every app instead.
The problem is when you load it and then react and all its friends, and design your software for everything to be asynchronous and develop it on a 0 latency connection over localhost with a team of 70 people where nobody is holistically considering “how long does it take from clicking the button to doing the thing I want it to do”
I suddenly remembered some old Corel Draw version circa 2005, which had a loading screen enumerating random things it was loading and computing, until a final message: "Less than a minute now...". It most often indeed took less than a minute to show the interface :).
I'm so dumbfounded. Maybe non-MacOS, non-Apple silicon stuff is complete crap at that point? Maybe the complete dominance of Apple performance is understated?
I don't use Slack, but I don't think anything takes 20 seconds for me. Maybe XCode, but I don't use it often enough to be annoyed.
It's a mix of better CPUs, better OS design (e.g. much less need for aggressive virus scanners), a faster filesystem, less corporate meddling, high end SSDs by default... a lot of things.
I have no corp antivirus or MDM on this machine, just windows 11 and windows defender.
> On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional
I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.
I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.
I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
... and so on.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
This is absolutely remarkable inefficiency considering the application's core functionality (media players) was perfected a quarter century ago.
It's electron. Electron was a mistake.
Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.
A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.
If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from awhile ago that did a good job showing it off that's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960
Sorry if I'm too presumptuous, however; you might be completely correct and instant is instant in your case.
The eye perceives at about 10 Hz. That's 100ms per capture. As for the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
It takes effectively no effort to conduct such a study yourself. Just try re-encoding a video at different frame rates up to your monitor refresh rate. Or try looking at a monitor that has a higher refresh rate than the one you normally use.
Even your average movie is captured at 24 Hz. Again, very likely you've never actually compared these things for yourself back to back, as I mentioned originally.
I'm with the person you're responding. I use the regular suite of applications and websites on my 2021 M1 Macbook. Things seem to load just fine.
Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.
I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.
The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.
How long did it take the last time you had to use an HDD rather than SSD for your primary drive?
How long did it take the first time you got to use an SSD?
How long does it take today?
Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?
> Almost everything loads instantly on my 2021 MacBook
Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.
> 5 year old mobile CPUs load modern SPA web apps with no problems.
"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".
This should sound like clearing a very low bar, but it doesn't seem to.
I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.
I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.
The software desktop users have to put up with is slow.
1000x referred to the hardware capability, and that's not a rarity - it's already here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
Animations is part of it of course. A lot of old software just updates the screen immediately, like in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster for instance, but it does not magically fix all apps unfortunately.
Fairly sure that was OP's point.
For comparison: https://www.cpubenchmark.net/compare/1075vs5852/Intel-Pentiu...
That's about a 168x difference. That was from before Moore's law started petering out.
For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.
It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.
You're going to have to cite a source for that.
Bounds checking is one mechanism that addresses memory safety vulnerabilities. According to MSFT and CISA[1], nearly 70% of CVEs are due to memory safety problems.
You're saying that we shouldn't solve one (very large) part of the (very large) problem because there are other parts of the problem that the solution wouldn't address?
[1] https://www.cisa.gov/news-events/news/urgent-need-memory-saf...
Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.
But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
Anyway that's a form of saying "I know by reasoning that none of these will be outside the bounds, so let's not check".
Reminds me of when Node.js came out and bridged client- and server-side coding. And apparently its package repos can be a bit of a security nightmare nowadays - so minimalist languages with a limited ecosystem do have their pros.
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
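To the question upthread about a "gimmick": the usual trick is to hoist the reasoning into a single up-front check. A toy Rust sketch (how much the optimizer actually drops depends on it seeing through the code):

    // Writing a bunch of ints with a check on every individual access...
    fn double_into_checked(dst: &mut [i32], src: &[i32]) {
        for i in 0..src.len() {
            dst[i] = src[i] * 2;   // each index is bounds-checked in principle
        }
    }

    // ...versus hoisting the reasoning "none of these can be out of
    // bounds" into one up-front check the optimizer can usually use to
    // elide the per-element ones.
    fn double_into_hoisted(dst: &mut [i32], src: &[i32]) {
        assert!(dst.len() >= src.len());  // single check, outside the loop
        for i in 0..src.len() {
            dst[i] = src[i] * 2;          // checks now provably redundant
        }
    }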
This is a theoretical argument. It depends on the compiler being able to see that’s what you’re doing and prove that there is no other mutation.
> abominations of C and C++
Sounds like you don’t understand the design choices that made this languages successful.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
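For instance, the idiomatic way to write that per-pixel loop never indexes at all, so there is nothing left to check per element (a sketch, assuming an 8-bit grayscale buffer):

    // Brighten an 8-bit grayscale image. No indexing, no per-pixel bounds
    // checks: the iterator owns the traversal, and this kind of loop
    // typically gets auto-vectorized as well.
    fn brighten(pixels: &mut [u8], amount: u8) {
        for px in pixels.iter_mut() {
            *px = px.saturating_add(amount);
        }
    }

    // The indexed version is what people have in mind when they report
    // big bounds-checking overheads; even here the compiler can usually
    // prove i < len and drop the check entirely.
    fn brighten_indexed(pixels: &mut [u8], amount: u8) {
        for i in 0..pixels.len() {
            pixels[i] = pixels[i].saturating_add(amount);
        }
    }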
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
One I am familiar with is Swift - which does exactly this because it’s a library feature of Array.
Which languages will always be able to determine through function calls, indirect addressing, etc whether it needs to bounds check or not?
And how will I know if it succeeded or whether something silently failed?
> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong
I agree. And note this is an example of a scenario you can encounter in other forms.
> Do you have any examples at all? Or is this just speculation?
Yes. Java and Python are not competitive for graphics and audio processing.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that are constantly allocating memory for every small operation and pointer-chasing every variable because the type is dynamic are part of the problem; then you have people writing extremely inefficient programs on top of an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
This has kind of been a disappointment to me about AI when I've tried it. LLMs should be able to port things. They should be able to rewrite things with the same interface. They should be able to translate from inefficient languages to more efficient ones.
It should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.
Heck, I remember PowerBuilder in the mid 90s running pretty well on 200 MHz CPUs. And that was mostly interpreted stuff, too. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?
My limited experience was trying to take the Unix sed source code and have AI port it to a JVM language; it could do the most basic operations but utterly failed at even the intermediate sed capabilities. And then optimize? Nope.
Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.
Robert Barton (of Burroughs 5000 fame) once referred to these people as “high priests of a low cult.”
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
https://news.ycombinator.com/item?id=43967208 https://threadreaderapp.com/thread/1922015999118680495.html
The ability to hire and have people be productive in a less complicated language expands the market for workers and lowers cost.
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
Of course innovation is always in bits and spurts.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomena is innovative, and all of these objectively are.
2007 is the year we did get video streaming services: https://en.wikipedia.org/wiki/BBC_iPlayer
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
And of course, Vim and Emacs were out long before that.
Netflix video streaming launched in 2007.
> * VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
I used VS2005 a little bit in the past few years, and I was surprised to see that it contains most of the features that I want from an IDE. Honestly, I wouldn't mind working on a C# project in VS2005 - both C# 2.0 and VS2005 were complete enough that they'd only be a mild annoyance compared to something more modern.
> partial autopilot features aside from 1970's cruise control
Radar cruise control was a fairly common option on mid-range to high-end cars by 2007. It's still not standard in all cars today (even though it _is_ standard on multiple economy brands). Lane departure warning was also available in several cars. I will hand it to you that L2 ADAS didn't really exist the way it does today though.
> Video streaming services
We watched a stream of the 1994 World Cup. There was a machine at MIT which forwarded the incoming video to an X display window
xhost +machine.mit.edu
and we could watch it from several states away. (The internet was so trusting in those days.) To be sure, it was only a couple of frames per second, but it was video, and an audience collected to watch it.
> EV Cars (that anyone wanted to buy)
People wanted to buy the General Motors EV1 in the 1990s. Quoting Wikipedia, "Despite favorable customer reception, GM believed that electric cars occupied an unprofitable niche of the automobile market. The company ultimately crushed most of the cars, and in 2001 GM terminated the EV1 program, disregarding protests from customers."
I know someone who managed to buy one. It was one of the few which had been sold rather than leased.
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
Single core performance has improved, but at a much slower rate than I experienced as a kid.
Over the last 10 years, we've seen something like a 120% improvement in single-core performance.
And, not for nothing, efficiency has become much more important. More CPU performance hasn't been a major driving factor vs having a laptop that runs for 12 hours. It's simply easier to add a bunch of cores and turn them all off (or slow them down) to gain power efficiency.
That's not to say the performance story would be vastly different with more focus on performance over efficiency, but I'd say it does have an effect on design choices.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
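To illustrate that point (purely a toy sketch in Python, not how any real exchange's engine is written), here is the shape of a price-time priority matching loop: every incoming order has to observe the book state left by the one before it, which is exactly why throwing 31 more cores at it doesn't help.

    import heapq

    # A resting order is stored as (price_key, seq, [remaining_qty], price).
    # price_key is -price for bids and +price for asks, so heapq's min-heap
    # keeps the best price (and, on ties, the earliest arrival) on top.

    def match(side, price, qty, seq, bids, asks):
        # Match one incoming order against the book, strictly in arrival order.
        book, opposite = (bids, asks) if side == "buy" else (asks, bids)
        while qty > 0 and opposite:
            _, _, rest_qty, rest_price = opposite[0]
            crosses = price >= rest_price if side == "buy" else price <= rest_price
            if not crosses:
                break
            traded = min(qty, rest_qty[0])
            qty -= traded
            rest_qty[0] -= traded
            if rest_qty[0] == 0:
                heapq.heappop(opposite)
        if qty > 0:  # whatever didn't trade rests on the book
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, seq, [qty], price))

    bids, asks = [], []
    orders = [("buy", 10.0, 5), ("sell", 9.9, 3), ("sell", 10.1, 4)]
    for seq, (side, price, qty) in enumerate(orders):
        # Each order must see the effects of the previous one, so this loop
        # cannot be spread across cores without reintroducing the ordering problem.
        match(side, price, qty, seq, bids, asks)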
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
It is rarely the case that high volume transaction processing facilities also need to deal with deeply complex transactions.
I can't think of many domains of business wherein each transaction is so compute intensive that waiting for I/O doesn't typically dominate.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
Linux on 10-15 year old laptops runs well. If you beef up the RAM and SSD, then it's actually really good.
So for everyday stuff we can and do run on older hardware.
Just throw in Slack chat, a VSCode editor in Electron, a Next.js stack, 1-2 Docker containers, and one browser, and you need top-notch hardware to run it fluidly (Apple Silicon is amazing though). I'm doing no fancy stuff.
Chat, an editor in a browser, and Docker don't seem like the most efficient things when put all together.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
That an alternate tool might perform better is compatible with the claim that performance alone is never the only difference between software.
Podman might be faster than Docker, but since it's a different tool, migrating to it would involve figuring out any number of breakage in my toolchain that doesn't feel worth it to me since performance isn't the only thing that matters.
Another example is that I use oh-my-zsh, which adds a weirdly long startup time to a shell session, but it lets me use plugins that add things like git status and kubectl context to my prompt instead of fiddling with that myself.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
In my experiences, companies can afford to care about good software if they have extreme demands (e.g. military, finance) or amortize over very long timeframes (e.g. privately owned). It's rare for consumer products to fall into either of these categories.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
It is easy to get information on features. It is hard to get information on reliability or security.
The result is worsened because vendors compete on features, so they all make the same trade-off of more features for lower quality.
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? -- to me not really, because I have it come up on boot and forget about it until my next system update forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? -- I use Word/Excel because I've run into annoying compatibility issues enough times with LO to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
That’s irrelevant here, the fully featured product can also be fast. The overwhelming majority of software is slow because the company simply doesn’t care about efficiency. Google actively penalized slow websites and many companies still didn’t make it a priority.
So why is it so rarely the case? If it's so simple, why hasn't anyone recognized that Teams, Zoom, etc are all bloated and slow and made a hyper-optimized, feature-complete competitor, dominating the market?
Software costs money to build, and performance optimization doesn't come for free.
> The overwhelming majority of software is slow because the company simply doesn’t care about efficiency.
Don't care about efficiency at all, or don't consider it as important as other features and functionality?
Zoom’s got 7,412 employees; a small team of, say, 7 could make a noticeable difference here, and the investment wouldn't disappear; it would help drive further profits.
> Don't care about efficiency at all
Doesn’t care beyond basic functionality. Obviously they care if something takes an hour to load, but rarely do you see consideration for people running on lower-end hardware than the kind of machines you see at a major software company, etc.
What would those 7 engineers specifically be working on? How did you pick 7? What part of the infrastructure would they be working on, and what kind of performance gains, in which part of the system, would be the result of their work?
7 people was roughly chosen to be able to cover the relevant skills while also being a tiny fraction of the workforce. Such efforts run into diminishing returns, but the company is going to keep creating low hanging fruit.
Disagree, the main reason so many apps are using "slow" languages/frameworks is precisely that it allows them to develop way more features way quicker than more efficient and harder languages/frameworks.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
At least when talking about software that has any real-world use case, and not development for development's sake.
In software, the regulations could be boiled down to 'lol lmao' in the pre-GDPR era. And even now I see GDPR violations daily.
The AI BS bothered me, but the price was good and the machine works fine.
Also Microsoft has educated now several generations to accept that software fails and crashes.
Because "all software is the same", customers may not appreciate good software when they're used to live with bad software.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
If it was just because it was cheap, we'd also see similar fraud from Mexican or Vietnamese sellers, but I don't really see that.
The issue is that you have to be able to distinguish a good mechanic from a bad mechanic cuz they all get to charge a lot because of the shortage. Same thing for plumbing, electrical, HVAC, etc etc etc
But I understand your point.
Why? Because they are on a different incentive structure: non-comissioned payments for employees. They buy OEM parts, give a good warranty, charge fair prices, and they are always busy.
If this computer fad goes away, I'm going to open my own Toyota-only auto shop, trying to emulate them. They have 30 years of lead time on my hypothetical business, but the point stands: when people discover that high quality in this market, they stick to it closely.
There are laws about what goes into a car, strict regulation. Software, not so much.
Until my boss can be prosecuted for selling untested, bug-ridden software, that is what I am instructed to produce.
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:
let usEmployees = from x in Employees where x.Country == "US";
func byFemale(Query<Employees> q) =>
from x in q where x.Sex == "Female";
let femaleUsEmployees = byFemale(usEmployees);
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
var set = new HashSet<Employee>();
Done. Don't need any fancy support for that. Or if you want to load from a database, using the repository pattern and Kotlin this time instead of Java:
@JdbcRepository(dialect = ANSI)
interface EmployeeQueries : CrudRepository<Employee, String> {
    fun findByCountryAndGender(country: String, gender: String): List<Employee>
}
val femaleUSEmployees = employees.findByCountryAndGender("US", "Female")
That would turn into an efficient SQL query that does a WHERE ... AND ... clause. But you can also compose queries in a type safe way client side using something like jOOQ or Criteria API.
But now you've hard-coded this selection, why can't the performance characteristics also be easily parameterized and combined, eg. insert is O(1), delete is O(log(n)), or by defining indexes in SQL which can be changed at any time at runtime? Or maybe the performance characteristics can be inferred from the types of queries run on a collection elsewhere in the code.
> That would turn into an efficient SQL query that does a WHERE ... AND ... clause.
For a database you have to manually construct, with a schema you have to manually (and poorly) map to an object model, using a library or framework you have to painstakingly select from how many options?
You're still stuck in this mentality that you have to assemble a set of distinct tools to get a viable development environment for most general purpose programming, which is not what I'm talking about. Imagine the relational model built-in to the language, where you could parametrically specify whether collections need certain efficient operations, whether collections need to be durable, or atomically updatable, etc.
There's a whole space of possible languages that have relational or other data models built-in that would eliminate a lot of problems we have with standard programming.
A language fully integrated with the relational model exists, that's PL/SQL and it's got features like classes and packages along with 'natural' SQL integration. You can do all the things you ask for: specify what operations on a collection need to be efficient (indexes), whether they're durable (temporary tables), atomically updatable (LOCK TABLE IN EXCLUSIVE MODE) and so on. It even has a visual GUI builder (APEX). And people do build whole apps in it.
Obviously, this approach is not universal. There are downsides. One can imagine a next-gen attempt at such a language that combined the strengths of something like Java/.NET with the strengths of PL/SQL.
case class Person(name: String, age: Int)
inline def onlyJoes(p: Person) = p.name == "Joe"
// run a SQL query
run( query[Person].filter(p => onlyJoes(p)) )
// Use the same function with a Scala list
val people: List[Person] = ...
val joes = people.filter(p => onlyJoes(p))
// Or, after defining some typeclasses/extension methods
val joesFromDb = query[Person].onlyJoes.run
val joesFromList = people.onlyJoes
This integrates with a high-performance functional programming framework/library that has a bunch of other stuff like concurrent data structures, streams, an async runtime, and a webserver[1][2]. The tools already exist. People just need to use them.
[0] https://github.com/zio/zio-protoquill?tab=readme-ov-file#sha...
Has she tried raising prices? To signal that her product is high quality and thus more expensive than her competition?
Some customers WANT to pay a premium just so they know they’re getting the best product.
It's great to say your software is higher quality, but the question I have is whether or not it is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
I assume all software is shit in some fashion because every single software license includes a "no fitness for any particular purpose" clause. Meaning, if your word processor doesn't process words, you can't sue them.
When we get consumer protection laws that require software to do what it says on the tin, quality will start mattering.
All of those were marketed as just-barely-affordable consumer luxury goods. The physical design and the marketing were more important than the specs.
Apple's aesthetic is more important than the quality (which has been deteriorating lately)
Their software quality itself is about average for the tech industry. It's not bad, but not amazing either. It's sufficient for the task and better than their primary competitor (Windows). But, their UI quality is much higher, and that's what people can check quickly with their own eyes and fingers in a shop.
Race to the bottom
Capitalism? Marx's core belief was that capitalists would always lean towards paying the absolute lowest price they could for labor and raw materials that would allow them to stay in production. If there's more profit in manufacturing mediocrity at scale than quality at a smaller scale, mediocrity it is.
Not all commerce is capitalistic. If a commercial venture is dedicated to quality, or maximizing value for its customers, or the wellbeing of its employees, then it's not solely driven by the goal of maximizing capital. This is easier for a private than a public company, in part because of a misplaced belief that maximizing shareholder return is the only legally valid business objective. I think it's the corporate equivalent of diabetes.
But that shifted later, with Milton Friedman, who pushed the idea of shareholder capitalism in the 70s. Where companies switched to thinking the only goal is to maximize shareholder value.
In his theory, government would provide regulation and policies to address stakeholders' needs, and companies therefore needed to focus on shareholders.
In practice, lobbying, propaganda and corruption made it so governments dropped the ball and also sided to maximize shareholder value, along with companies.
Do you drive the cheapest car, eat the cheapest food, wear the cheapest clothes, etc.?
I'm thinking out loud but it seems like there's some other factors at play. There's a lower threshold of quality that needs to happen (the thing needs to work) so there's at least two big factors, functionality and cost. In the extreme, all other things being equal, if two products were presented at the exact same cost but one was of superior quality, the expectation is that the better quality item would win.
There's always the "good, fast, cheap" triangle but with Moore's law (or Wright's law), cheap things get cheaper, things iterate faster and good things get better. Maybe there's an argument that when something provides an order of magnitude quality difference at nominal price difference, that's when disruption happens?
So, if the environment remains stable, then mediocrity wins as the price of superior quality can't justify the added expense. If the environment is growing (exponentially) then, at any given snapshot, mediocrity might win but will eventually be usurped by quality when the price to produce it drops below a critical threshold.
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
In most cases the company making the inferior product didn't spend less. But they did spend differently. As in, they spent a lot on marketing.
You were focused on quality, and hoped for viral word of mouth marketing. Your competitors spent the same as you, but half their budget went to marketing. Since people buy what they know, they won.
Back in the day MS made Windows 95. IBM made OS/2. MS spend a billion $ on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. It does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
The key point to understand is the only effort that matters is that which makes the sale. Business is a series of transactions, and each individual transaction is binary: it either happens or it doesn't. Sometimes, you can make the sale by having a product which is so much better than alternatives that it's a complete no-brainer to use it, and then makes people so excited that they tell all their friends. Sometimes you make the sale by reaching out seven times to a prospect that's initially cold but warms up in the face of your persistence. Sometimes, you make the sale by associating your product with other experiences that your customers want to have, like showing a pretty woman drinking your beer on a beach. Sometimes, you make the sale by offering your product 80% off to people who will switch from competitors and then jacking up the price once they've become dependent on it.
You should know which category your product fits into, and how and why customers will buy it, because that's the only way you can make smart decisions about how to allocate your resources. Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that." But if you are sitting on one of those gold mines, capitalizing on it effectively is orders of magnitude more efficient than trying to market a product that doesn't really work.
This. Per your example, this is exactly what it was like when most of us first used Google after having used AltaVista for a few years. Or Google Maps after having used MapQuest for a few years. Google invested their resources correctly in building a product that was head and shoulders above the competition.
And yes, if you are planning to sell beer, you are going to need the help of scantily clad women on the beach much more than anything else.
We're still trying to figure out the marketing. I'm convinced the high failure rate of restaurants is due largely to founders who know how to make good food and think their culinary skills plus word-of-mouth will get them sales.
Doesn't that depend on your audience? Also, what do you mean by quality?
Where I live, the best food can lead to big success. New tiny restaurants open, they have great food, eventually they open their big successor (or their second restaurant, third restaurant, etc.).
Also, one should not confuse the quality of the final product and the quality of the process.
You cannot make a cheap product with high margins and get away with it. Motorola tried with the RAZR. They had about five or six good quarters from it and then within three years of initial launch were hemorrhaging over a billion dollars a year.
You have to make premium products if you want high margins. And premium means you’re going for 10% market share, not dominant market share. And if you guess wrong and a recession happens, you might be fucked.
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the superior relevant software previously available, even if they create demand for it. Therefore, we would say a market fault has developed as the market could not organize resources to meet this demand. Then, the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
https://en.m.wikipedia.org/wiki/The_Market_for_Lemons
The Market for Lemons idea seems like it has merit in general but is too strong and too binary to apply broadly, that’s where I was headed with the suggestion for another name. It’s not that people want low quality. Nobody actually wants defective products. People are just price sensitive, and often don’t know what high quality is or how to find it (or how to price it), so obviously market forces will find a balance somewhere. And that balance is extremely likely to be lower on the quality scale than what people who care about high quality prefer. This is why I think you’re right about the software market tolerating low quality; it’s because market forces push everything toward low quality.
A differentiator would be having the ability to have a higher than average quality per cost. Then maybe you're onto something.
It's really hard to reconcile your comment with Silicon Valley, which was built by often expensive innovation, not by cutting costs. Were Apple, Meta, Alphabet, Microsoft successful because they cut costs? The AI companies?
In fact, the realization is that the market buys support.
And that includes Google and other companies that lack much human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There are people that help me ('look ma, this is how you use google')
* There is support for the thing I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel lets me do any app there...')
* And finally, actual people (that is the #1 thing that keeps alive even the worst ERP on earth). This also includes marketing, sales people, etc. These are signals of having support, even if it is not exactly the best. If I go to an enterprise and they only have engineers, that will be a bad signal, because, well, developers tend to be terrible at other stuff, and that other stuff is the support that matters.
If you have a good product but there is no support, it's dead.
And if you wanna fight a worse product, it's smart to reduce the need for support ('bugs, performance issues, platforms, ...') for YOUR TEAM, because you wanna reduce YOUR COSTS, but you NEED to add support in other dimensions!
The easiest way for a small team is to just add humans (that is the MOST scarce source of support). After that, you need to get creative.
(Also, this means you need to communicate your advantages well, because there are people that value some kinds of support more than others; 'have the code vs proprietary' is a good example. A lot prefer the proprietary option with support over having the code, I mean.)
Starting an OSS product? Write good docs. Got a few enterprise people interested? A “customer success person” is the most important marketing you can do …
I'd take this one step further: 99% of the software written isn't being done with performance in mind. Even here on HN, you'll find people that advocate for poor performance because even considering performance has become a faux pas.
That means your L4/5 and beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware if possible).
Developers do care about performance up to a point. If the software looks to be running fine on a majority of computers why continue to spend resources to optimize further? Principle of diminishing returns.
Software is always sold new. Software can increase in quality the same way cars have generally increased in quality over the decades. Creating standards that software must meet before it can be sold. Recalling software that has serious bugs in it. Punishing companies that knowingly sell shoddy software. This is not some deep insight. This is how every other industry operates.
Just through pure Darwinism, bad software dominates the population :)
The user cannot, but a good AI might itself allow the average user to bridge the information asymmetry. So as long as we have a way to select a good AI assistant for ourselves...
In the end it all hinges on the user's ability to assess the quality of the product. Otherwise, the user cannot judge whether an assistant recommends quality products, and the assistant has an incentive to make poor suggestions (e.g. sell out to product producers).
The AI can use tools to extract various key metrics from the product that is analysed. Even if we limit such metrics down to those that can be verified in various "dumb" ways we should be able to verify products much further than today.
> And one of them is the cheapest software you could make.
I actually disagree a bit. Sloppy software is cheap when you're a startup, but it's quite expensive when you're big. You have all the costs of transmission and instances you need to account for. If airlines are going to cut an olive from the salad, why wouldn't we pay programmers to optimize? This stuff compounds, too.
We currently operate in a world where new features are pushed that don't interest consumers. While they can't tell the difference between slop and not at purchase, they sure can between updates. People constantly complain about stuff getting slower. But they also do get excited when things get faster.
Imo it's in part because we turned engineers into MBAs. Whenever I ask why we can't solve a problem, some engineer always responds "well, it's not that valuable". The bug fix is valuable to the user, but they always clarify that they mean money. Let's be honest, all those values are made up. It's not the job of the engineer to figure out how much profit a bug fix will result in; it's their job to fix bugs.
Famously Coke doesn't advertise to make you aware of Coke. They advertise to associate good feelings. Similarly, car companies advertise to get their cars associated with class. Which is why sometimes they will advertise to people who have no chance of buying the car. What I'm saying is that brand matters. The problem right now is that all major brands have decided brand doesn't matter or brand decisions are always set in stone. Maybe they're right, how often do people switch? But maybe they're wrong, switching seems to just have the same features but a new UI that you got to learn from scratch (yes, even Apple devices aren't intuitive)
Right now, the market buys bug-filled, inefficient software because you can always count on being able to buy hardware that is good enough to run it. The software expands to fill the processing specs of the machine it is running on - "What Andy giveth, Bill taketh away" [1]. So there is no economic incentive to produce leaner, higher-quality software that does only the core functionality and does it well.
But imagine a world where you suddenly cannot get top-of-the-line chips anymore. Maybe China invaded Taiwan and blockaded the whole island, or WW3 broke out and all the modern fabs were bombed, or the POTUS instituted 500% tariffs on all electronics. Regardless of cause, you're now reduced to salvaging microchips from key fobs and toaster ovens and pregnancy tests [2] to fulfill your computing needs. In this world, there is quite a lot of economic value to being able to write tight, resource-constrained software, because the bloated stuff simply won't run anymore.
Carmack is saying that in this scenario, we would be fine (after an initial period of adjustment), because there is enough headroom in optimizing our existing software that we can make things work on orders-of-magnitude less powerful chips.
[1] https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
[2] https://www.popularmechanics.com/science/a33957256/this-prog...
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
Edit: 1. Assuming open source doesn't persevere
Rapid development is creating a race towards faster hardware.
https://en.wikipedia.org/wiki/2_nm_process
https://en.wikipedia.org/wiki/International_Roadmap_for_Devi...
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's a M1 Macbook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
It's obvious for both cases where the real priorities of humanity lie.
But surely, with burgeoning AI use, efficiency savings are being gobbled up by the brute-force nature of it.
Maybe shared model training and the likes of Hugging Face can keep different groups from reinventing the same AI wheel, using more resources than a cursory search of an existing resource would have taken.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
But right now, 8-9/10 game developers and publishers are deeply concerned with cash and rather unconcerned by technical excellence or games as a form of interactive art (where, once again, Guerrilla and many other Sony studios are).
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU vs a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If Intel fired all their R&D engineers who were not focused on reducing manufacturing costs, they could greatly reduce the price of their CPUs - until AMD released a next generation that is much better. (This is more or less what Henry Ford did with the Model T: he reduced costs every year until his competition, by adding features, were enough better that he couldn't sell his cars.)
Yes, it's possible and very simple. Lower the frequency (which dramatically lowers power usage), use fewer cores, fewer threads, etc. The problem is, we don't know what we need. What if a great new app comes out (think LLMs); you'll be complaining your phone is too slow to run it.
If Cadence, for example, releases every feature 5 years later because they spend more time optimizing them (it's software, after all), how much will that delay semiconductor innovation?
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is because it's the most likely path to be able to _keep_ adding features to it as to not be the less feature rich offering.
People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
Although, I do wonder if there's an additional tradeoff here. Existing users can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but that something new must not be so earth-shattering, because they didn't seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple of little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, the “features vs performance” decision is also a signal about where the developers’ priorities lie: adding new users or keeping old ones happy. So, it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case. I.e. businesses purchase software (or has bespoke software developed). Then they pay for fixes/features/improvements. There is often a direct communication between the buyer and the developer (whether it's off-the shelf, inhouse or made to spec). I'm in this business and the dialog is very short "great work adding feature A. We want feature B too now. And oh the users say the software is also a bit slow can you make it go faster? Me: do you want feature B or faster first? Them (always) oh feature B. That saves us man-weeks every month". Then that goes on for feature C, D, E, ...Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
It's not even a trade off a lot of the time, simpler architectures perform better but are also vastly easier and cheaper to maintain.
We just lack expertise I think, and pass on cargo cult "best practices" much of the time.
I don't work there any more, but I'm convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can that hardware even "run" MS Windows 10 or 11 today? No idea; I've since moved on to Linux), then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable at 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if the internet goes down, then the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
Perhaps a classic case where a guideline, intended to help, ends up causing ill effects by being religiously stuck to at all times, instead of fully understanding its meaning and when to use it.
A simple example comes to mind, of a time I was talking to a junior developer who thought nothing of putting his SQL query inside a loop. He argued it didn't matter because he couldn't see how it would make any difference in that (admittedly simple) case, to run many queries instead of one. To me, it betrays a manner of thinking. It would never have occurred to me to write it the slower way, because the faster way is no more difficult or time-consuming to write. But no, they'll just point to the mantra of "premature optimisation" and keep doing it the slow way, including all the cases where it unequivocally does make a difference.
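For anyone who hasn't hit this, the difference looks roughly like this (a toy sqlite3 sketch in Python; the table and column names are made up): the first version makes one database round trip per id, the second fetches everything in a single query.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(i, f"user{i}") for i in range(1000)])

    wanted_ids = list(range(500))

    # Slow way: one query per id, i.e. N round trips to the database.
    names_slow = []
    for user_id in wanted_ids:
        row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        names_slow.append(row[0])

    # Faster way: one query for all ids, a single round trip.
    placeholders = ",".join("?" * len(wanted_ids))
    rows = conn.execute(
        f"SELECT name FROM users WHERE id IN ({placeholders})", wanted_ids
    ).fetchall()
    names_fast = [r[0] for r in rows]

    assert sorted(names_slow) == sorted(names_fast)

With an in-memory database the gap is small; against a real server over the network, the per-query latency multiplies by N, which is exactly the case where it unequivocally matters.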
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
You're right that the crux of it is that the only thing that matters is pure user value and that it comes in many forms. We're here because development cost and feature set provide the most obvious value.
Very, very little.
If engineers handled the Citicorp center the same way software engineers did, the fix would have been to update the documentation in Confluence to not expose the building to winds and then later on shrug when it collapsed.
I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.
He seems to have a perpetual view on the state of software. Interpreted stuff is slow, networks are slow, databases are slow. Everyone is working with Pentium 1s and 2MB of ram.
None of these are what he thinks they are. CPUs are wicked fast. Interpreted languages are now within a single digit multiple of natively compiled languages. Ram is cheap and plentiful. Databases and networks are insanely fast.
Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.
I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs fall into: he's overestimating what things cost because his mental model of computing is dated.
You have to be either clueless or delusional if you really believe that.
People simply do not care about the rest. So there will be as little money spent on optimization as possible.
Also HN: Check this new AI tool that consumes 1000x more energy to do the exact same thing we could already do, but worse and with no reproducibility
I don't want the crap Intel has been producing for the last 20 years, I want the ARM, RISC-V and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates, I want the open source GPU that someone is bound to make in the next decade. I'm happy 10Gb switches are becoming a thing in the home; I don't want the 100Mb hubs from the early 2000s.
The Electron application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into an MS Teams meeting without installing anything.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Evidence: DeepSeek
2. The consumers that have the money to buy software/pay for subscriptions have the newer hardware.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "milli-SWEs", meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people, or hire fewer people but get more hardware, within their current budgets.
I don't remember the exact number of CPU cores that amounted to a single SWE but IIRC it was in the thousands. So if you spend 1 SWE-year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
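To make that arithmetic concrete, here's a toy version of the break-even check. Both dollar figures are assumptions I've made up for illustration, not Google's internal numbers:

    # Toy break-even check: is a SWE-year of optimization worth the cores it saves?
    SWE_YEAR_COST = 300_000      # fully loaded cost of one engineer-year, USD (assumed)
    CORE_YEAR_COST = 60          # amortized cost of one CPU core for a year, USD (assumed)

    break_even_cores = SWE_YEAR_COST / CORE_YEAR_COST
    print(f"break-even: ~{break_even_cores:,.0f} cores saved per SWE-year")

    cores_saved = 1_200          # what profiling suggests the optimization saves (assumed)
    print("worth doing" if cores_saved >= break_even_cores else "net loss")

With those made-up numbers the break-even point lands around 5000 cores, which is the kind of threshold being described.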
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
The other side of this is that there is (IMHO) a general usability problem with the Web, in that it simply shouldn't take the resources it does. If you know people who had to, or still do, data entry for their jobs, you'll know that the mouse is pretty inefficient. The old text-based terminals from 30-40+ years ago had some incredibly efficient interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
The evaluation needs to happen at the margins: even if it only saves pennies per year on the dollar, it's better to have those engineers doing that than have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economic calculus behind them; most people just do "what Google does", which explains a lot of the dysfunction.
> The evaluation needs to happen at the margins: even if it only saves pennies per year on the dollar, it's better to have those engineers doing that than have them idling.
That's debatable. Performance optimization almost always leads to a complexity increase: doubled performance can easily mean quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
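As a contrived sketch of the kind of trade I mean (the shopping-cart example is entirely made up): caching a computed value speeds up reads, but now every mutation path has to keep the cache consistent, and a missed path is a silent bug.

    # Naive version: recompute on every read. Slower, but trivially correct.
    class NaiveCart:
        def __init__(self):
            self.items = []                    # list of (name, price)

        def total(self) -> float:
            return sum(price for _, price in self.items)

    # "Optimized" version: cache a running total. Faster reads, more ways to go wrong.
    class CachedCart:
        def __init__(self):
            self.items = []
            self._total = 0.0                  # extra state to maintain

        def add(self, name: str, price: float) -> None:
            self.items.append((name, price))
            self._total += price               # must update the cache here...

        def remove(self, name: str) -> None:
            for i, (n, price) in enumerate(self.items):
                if n == name:
                    del self.items[i]
                    self._total -= price       # ...and here, and in every future mutation
                    return

        def total(self) -> float:
            return self._total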
And with client side software, compute costs approach 0 (as the company isn’t paying for it).
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization", because you often decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That can double or triple the effort.
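A minimal sketch of that duplicate-RPC ("hedged request") idea, with a fake replica call standing in for a real RPC and simulated latencies:

    import asyncio, random

    # Stand-in for a real RPC; latency is simulated and variable.
    async def fetch_replica(name: str) -> str:
        await asyncio.sleep(random.uniform(0.01, 0.2))
        return f"response from {name}"

    # Fire the same request at two replicas, take whichever answers first,
    # cancel the loser. Lower tail latency, but up to 2x the work.
    async def hedged_fetch() -> str:
        tasks = [asyncio.create_task(fetch_replica(n)) for n in ("replica-a", "replica-b")]
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for t in pending:
            t.cancel()
        return done.pop().result()

    print(asyncio.run(hedged_fetch()))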
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
https://nitter.poast.org/ID_AA_Carmack/status/19221007713925...
Most of it?
Software bloat > Causes: https://en.wikipedia.org/wiki/Software_bloat#Causes
Program optimization > Automated and manual optimization: https://en.wikipedia.org/wiki/Program_optimization#Automated...
> software is getting slower more rapidly than hardware is becoming faster.
You always optimise FOR something at the expense of something else.
And that can, and frequently should, be lean resource consumption, but it can come at a price.
Which might be one or more of: Accessibility. Full internationalisation. Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties). Readability/maintainability. Displays that can actually represent text correctly at any size without relying on font hinting hacks. All sorts of subtle points around UX. Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.) Etc.
Unfortunately, in our current society, a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction and a paranoid delusion that keeps normal human love and compassion beyond their grasp were able to shape society to their dreadful imagination.
Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.
We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap and your first-to-market competitor needs to make some horrendous screwups in order to overtake them.
The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time, but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software.
For example "polluting the air/water, requiring end-users to fill landfills with packaging and planned obscolescence" allows a company to more cheaply offer more products to you as a consumer.. but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills and environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "Fell off the back of a truck" is in a position to offer you lower costs and greater variety, as well. Aren't they?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined by pollution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it becomes theft of a shared resource and a tragedy of the commons.
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, where for many users useful new features became few and far between some time around 15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would almost certainly have been happier had 80-90% of the feature work done in that time gone into bug fixes and optimization instead.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that cost onto their customers' devices instead? The extra couple of seconds, the extra megabytes of bandwidth, and the shittery of the whole ecosystem have been externalized to customers in search of ill-gotten profits.
I hear arguments like this fairly often. I don't believe it's true.
Instead of having a job writing a pointless rewrite, you might have a job optimizing software. You might have a different career altogether. Having a job won't go away: what you do for your job will simply change.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
For me this is a clear case of negative externalities inflicted by software companies against the population at large.
Most software companies don't care about optimization because they're not paying the real costs of that energy, lost time, or additional e-waste.