I would agree that faster compile times can significantly improve developer productivity. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.
The critical thing we’re missing here is how increasing CPU speed will decrease compile time. What if the compiler is I/O bound? Or memory bound? Removing one bottleneck gets you to the next bottleneck; it doesn’t necessarily get you all the performance gains you want.
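One rough way to check this before buying anything, as a minimal sketch (the build command and flags are placeholders for your own project; Unix-only because of the resource module): compare the CPU time the build consumes to wall-clock time times core count.

```python
# Rough CPU-boundedness check for a build (Unix-only: uses the resource
# module). "make -j8" is a placeholder for your actual build command.
import os
import resource
import subprocess
import time

cmd = ["make", "-j8"]  # placeholder build command
cores = os.cpu_count()

start = time.monotonic()
subprocess.run(cmd, check=True)
wall = time.monotonic() - start

# CPU time consumed by the build and all its (waited-for) children.
usage = resource.getrusage(resource.RUSAGE_CHILDREN)
cpu = usage.ru_utime + usage.ru_stime

# Near 1.0: cores were busy the whole time, so a faster CPU helps.
# Well below 1.0: the build is waiting on I/O, memory, or serial steps.
print(f"core utilization: {cpu / (wall * cores):.2f}")
```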
I think just having LSP give you answers 2x faster would be great for staying in flow.
Applies to git operations as well.
Now in TFA they compare a laptop to a desktop, so I guess the title should be “you should buy two computers”.
The days when 30-second pauses for the compiler were the slowest part are long over.
It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.
Then your actual physical computer is just a dumb terminal.
Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.
In which movie? "Microsoft fried movie"? Cloud sucks big time. Not all engineers are web developers.
I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.
Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.
I wonder if it holds for ARM in general?
* Apple: 32 cores (M3 Ultra)
* AMD: 96 cores (Threadripper PRO 9995WX)
* Intel: 60 cores (Xeon w9-3595X)
I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.
For AMD/Intel, laptop, desktop and server CPUs are usually based on different architectures and don’t have that much overlap.
Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.
To be fair, I know plenty of people who would order the highest-spec MacBook just to do web development and open 500 Chrome tabs. There is abuse. But that abuse really caps out at a few thousand dollars in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.
I don't think Google and Facebook are cheap toward developers; I can speak firsthand from my past Google experience. You have to note that the company has something like 200k employees, there need to be some controls, and not all of them are engineers.
Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.
Trips -> Google has this interesting model where you have a soft cap for trips, and if you don't hit the cap, you pocket half of the leftover trip credit in your account, which you can choose to spend later when you are over the cap or want something slightly nicer next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not the company policy.
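If it helps, the soft-cap mechanic boils down to something like this (a sketch; the 50/50 split is from my description above, the dollar amounts are invented for illustration):

```python
# Sketch of the soft-cap travel model: come in under the cap and half
# the savings is banked toward future trips. Numbers are made up.
def banked_credit(cap: float, spent: float) -> float:
    """Credit pocketed when a trip comes in under its soft cap."""
    return max(0.0, (cap - spent) / 2)

print(banked_credit(cap=1500.0, spent=1100.0))  # banks 200.0 for later trips
```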
Don’t worry, they’ll tell you
I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.
I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.
If you have a bad machine get a good machine, but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny
I wish that were true, but the current Ryzen 9 9950X is maybe 50% faster than the two-generations-older 5950X at compilation workloads.
It doesn't have to be the cloud, but having a couple of ginormous machines in a rack where the fans can run at jet engine levels seems like a no-brainer.
* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.
* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.
* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.
* If, as a developer, you're repeatedly compiling a large codebase - what you may really want is a build farm rather than the latest-gen CPU on each developer's individual PC/laptop.
Even though I haven't compiled a Linux kernel for over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half-hour compiles, mostly when I'm forced to change base header files in a massive project.
This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.
I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3 years old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks.
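To make that back-of-envelope payback concrete, a minimal sketch (the compile counts are mine from above; the billing rate and CPU price are assumptions purely for illustration):

```python
# Back-of-envelope payback for the 5800X -> 5950X upgrade.
compiles_per_week = 6
compile_hours = 0.5
speedup = 2.0            # optimistic: double the cores, ignores contention
hourly_rate = 100.0      # assumed rate, USD
cpu_price = 300.0        # assumed 5950X street price, USD

hours_saved = compiles_per_week * compile_hours * (1 - 1 / speedup)
print(f"payback: {cpu_price / (hours_saved * hourly_rate):.1f} weeks")  # 2.0
```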
The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.
The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resources are poor.
This is an "office" CPU. Workstation CPUs are called Epyc.
The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.
> If you can justify an AI coding subscription, you can justify buying the best tool for the job.
I personally can justify neither, but I'm not seeing how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large, closed models, and that GPUs would be more cost-effective as a replacement anyway. And if it is not a replacement, it is quite a stretch to assume that everyone who sufficiently benefits from a subscription would benefit at least as much from a faster CPU.
Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also mean a new motherboard, a new CPU cooler, and likely new memory, which is basically a new computer.
"Public whipping for companies who don’t parallelize their code base" would probably help more. ;)
Anyway, how many seconds does MS Teams need to boot on a top-of-the-line CPU?
Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.
But I would never say no to a faster CPU!
Considering ‘Geekbench 6’ scores, at least.
So if it’s not a task massively benefiting from parallelization, buying used is still the best value for money.
The clock speed of a core is important and we are hitting physical limits there, but we're also getting more done with each clock cycle than ever before.
Care to provide some data?
Single-thread performance of the 16-core AMD Ryzen 9 9950X is only 1.8x that of my poor old laptop's 4-core i5. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...
I'm waiting for >1024-core ARM desktops with >1TB of unified GPU memory, to be able to run some large LLMs.
Ping me when someone builds this :)
Also got a Mac Mini M4 recently and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (I only use the M4 for Xcode) than being down to raw CPU performance.
[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...
If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.
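One hedged way to start that audit: time each translation unit on its own to find the expensive files. A minimal sketch, where the compiler, flags, and glob pattern are placeholders for your own project:

```python
# Find the expensive translation units by timing each one separately.
import glob
import subprocess
import time

for src in sorted(glob.glob("src/**/*.cpp", recursive=True)):
    t0 = time.monotonic()
    # Compile only (-c), discard the object file; flags are placeholders.
    subprocess.run(["c++", "-c", src, "-o", "/dev/null", "-O2"], check=True)
    print(f"{time.monotonic() - t0:7.2f}s  {src}")
```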
And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and lower-speed data connections, too.
Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.
Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.
Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?
Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060S iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty for a normal workload, and some decent gaming or equivalent if needed.
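As a sketch of what running a big local LLM on such a box can look like (assuming the llama-cpp-python package and a GGUF model you've already downloaded; the model path is hypothetical):

```python
# Minimal local-LLM sketch. n_gpu_layers=-1 offloads all layers to the
# iGPU, provided your llama.cpp build has GPU support (e.g. Vulkan/ROCm).
from llama_cpp import Llama

llm = Llama(model_path="models/your-model.gguf",  # hypothetical path
            n_ctx=8192, n_gpu_layers=-1)
out = llm("Why do faster compiles matter?", max_tokens=128)
print(out["choices"][0]["text"])
```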
There are also these which look similar https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...
The absolute best would be a 9005-series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X, but despite being technically a mobile SoC, the 395 gets you 90% of the real-world performance of a 9950X in a much smaller and more power-efficient computer:
https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/...
As an aside, being on my old laptop with its hard drive, I can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away, and I am surprised to realize that I missed it.
Certainly not ahead of the curve when considering server hardware.
Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour; that would equal the cost of an MBP M3 64GB in about two years.
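The break-even arithmetic, spelled out (the instance price is as stated above; the laptop price is an assumption for illustration):

```python
# Break-even: always-on cloud instance vs. buying the laptop outright.
hourly = 0.14                  # USD/hour, c7i.large (as stated above)
mbp_price = 2500.0             # assumed MBP M3 64GB price, USD
years = mbp_price / hourly / 24 / 365
print(f"break-even after {years:.1f} years")  # about two years
```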
Apple has made a killer development machine; I say this as a person who does not like Apple or macOS.
It's not like objective benchmarks disproving these sorts of statements don't exist.