I would agree with the idea that faster compile times can significantly improve developer productivity. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.
The critical thing we’re missing here is whether increasing the CPU speed will actually decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck will get you to the next bottleneck, not necessarily get you all the performance gains you want.
I think just having LSP give you answers 2x faster would be great for staying in flow.
Applies to git operations as well.
Now in TFA they compare a laptop to a desktop, so I guess the title should be “you should buy two computers”
The days when 30-second pauses for the compiler were the slowest part are long over.
It gets ridiculous quickly, really.
And don't get me started on the cloud ERP software the rest of the company uses...
https://github.com/rui314/mold?tab=readme-ov-file#why-is-mol...
The larger point is that the fastest option on paper may not be faster for your workload, so benchmark before spending money. Your workload may be different.
I've seen a test environment which has most assets local but a few shared services and databases accessed over a VPN which is evidently a VIC-20 connected over dialup.
The dev environment can take 20 seconds to render a page that takes under 1 second on prod. Going to a newer machine with twice the RAM bought no meaningful improvement.
They need a rearchitecture of their dev system far more than faster laptops.
There’s your problem. If your expectation was double-digit milliseconds in prod, then non-prod and its VPN also wouldn’t be an issue.
You do need a good SSD though. There's a new generation of PCIe 5.0 SSDs out now that seems like it might be quite a bit faster.
It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.
Then your actual physical computer is just a dumb terminal.
Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.
In which movie? "Microsoft fried movie"? Cloud sucks big time. Not all engineers are web developers.
I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.
The desktop latency has gotten way better over the years and the VMs have enough network bandwidth to do builds on a shared network drive. I've also found it easier to request hardware upgrades for VDIs if I need more vCPUs or memory, and some places let you dispatch jobs to more powerful hosts without loading up your machine.
Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.
I wonder if it holds for ARM in general?
* Apple: 32 cores (M3 Ultra)
* AMD: 96 cores (Threadripper PRO 9995WX)
* Intel: 60 cores (Xeon w9-3595X)
I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.
For AMD/Intel, laptop, desktop and server CPUs are usually based on different architectures and don’t have that much overlap.
Core count used to be a big difference, but the ARM processors in the Apple machines certainly match the lower-end workstation parts now. To exceed them you're spending big money to get high core counts in the x86 space.
Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.
M2 Ultra, the current closest workstation processor in Apple's lineup (at least in a comparable form factor in the Mac Pro) has 32 lanes of PCI-E 4.0 connectivity that's enhanced by being slotted into a PCI-E Switch fabric on their Mac Pro. (this I suspect is actually why there hasn't been a rework of the Mac Pro to use M3 Ultra - that they'll ditch the switch fabric for direct wiring on their next one)
Memory bandwidth is a closer thing to call here. Using the Threadripper Pro 9000 series as an example, we have 8 channels of 6400 MT/s DDR5 ECC. According to Kingston the bus width of DDR5 is 64b, so that'll get us ((6400 * 64)/8) = 51,200 MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.
On the M4 Max the reported bandwidth is 546 GB/s, but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a bus width of 64b seem to point towards 68,264 MB/s per channel; the reported speed doesn't neatly slot into those numbers).
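For what it's worth, the numbers do line up if the M4 Max bus is treated as 512 bits wide, i.e. the equivalent of eight 64-bit channels; that bus width is my assumption, not something stated above. A quick sketch:

```python
# Back-of-envelope peak bandwidth = transfers/s * bytes per transfer.
def bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

# Threadripper Pro 9000: 8 channels of DDR5-6400, 64 bits each
print(bandwidth_gbs(6400, 64) * 8)   # ~409.6 GB/s

# M4 Max: LPDDR5X-8533 on an assumed 512-bit bus (8 x 64-bit equivalent)
print(bandwidth_gbs(8533, 512))      # ~546.1 GB/s
```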
In short the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but PCI-E Extensibility is not.
In the Mac world though that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can however load your machine with some high-bandwidth NICs or HBAs, I suppose (but I've not seen what's available for this platform).
It’s not that it’s worse than a “real” desktop chip. In a way it’s better: you get almost comparable performance with way lower power usage.
Also, the M4 Max has worse MT performance than e.g. the 14900K, which is architecturally ancient in relative terms and also costs a fraction of the price.
So, in a way, slow computers are always a software problem, not a hardware problem. If we always wrote software to be as performant as possible, and if we only ran things that were within the capability of the machine, we’d never have to wait. But we don’t do that; good optimization takes a lot of developer time, and being willing to wait a few minutes nets me computations that are a couple orders of magnitude larger than what the machine can do in real time.
To be fair, things have improved on average. Wait times are reduced for most things. Not as fast as hardware has sped up, but it is getting better over time.
Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.
To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.
I don't think Google and Facebook are cheap with developers. I can speak firsthand about my past Google experience. You have to note that the company has like 200k employees, there need to be some controls, and not all of the company are engineers.
Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.
Trips -> Google has this interesting model: you have a soft cap for trips, and if you don't hit the cap, you pocket half of the trip credit in your account, which you can choose to spend later when you are over the cap or you want to get something slightly nicer the next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not the company policy.
Don’t worry, they’ll tell you
I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.
Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.
1 or 2 bed gamer things
1) Em-dashes
2) "It's not X, it's Y" sentence structure
3) Comma-separated list that's exactly 3 items long
>3) Comma-separated list that's exactly 3 items long
Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.
>2) "It's not X, it's Y" sentence structure
This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").
I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.
"AI" (Plus) Chromebooks?
They eventually became so cheap they blanket paused refreshing developer laptops...
Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance --- claims are paid from company money.
All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).
Apple have long thought that 8GB of RAM is good enough for anything, and will continue to think so for some time yet.
So people started slacking off, because "you have to love your employees"?
Equality doesn't have to mean uniformity.
At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.
Turns out I get paid the same either way.
Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.
So you’d have to deal with someone whose 8GB RAM cheap computer couldn’t run the complicated integration tests but they were typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.
I've been on teams where corporate hardware is all max spec, 4-5 years ahead of common user hardware, and provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow the market in low-income countries.
The developer integration tests don’t need to run on a low spec machine. That is not needed.
Where did this idea about spiting your fellow worker come from?
That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).
It depends what you're working on. My work laptop is 5 years old, and it takes ~4 minutes to do a clean compile of a codebase I work on regularly. The laptop I had before that (which would now be around 10 years old) would take ~40 minutes to compile the same codebase. It would be completely untenable for me to do the job I do with that laptop (and indeed I only started working in the area I do once I got this one).
You're underestimating the scope of time lost by losing a few percent in productivity per employee across hundreds of thousands of employees.
You want speed limits not speed bumps. And they should be pretty high limits...
After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.
Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.
It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted from people swapping machines would have easily reached into the person-years. And of course myself and my team took the blame for not preparing ahead of time.
Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.
> And of course myself and my team took the blame for not preparing ahead of time.
If your initial request was not logged and then able to be retrieved by yourself in defence, then I would say something is very wrong at your company.
But regardless, I already left there a few years back.
You are suggesting a level of due process that is wildly optimistic for most companies. If you are an IC, such blame games are entirely resolved behind closed doors by various managers and maybe PMs. Your manager may or may not ask you for supporting documentation, and may or may not be able to present it before the "retrospective" is concluded.
For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.
XKCD: https://xkcd.com/1205/
Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.
Then some period of time later they start looking at spending in detail and can’t believe how much is being spent by the 25% or so who abuse the possibility. Then the controls come.
> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,
You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1500 iPads with $100 Apple pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery services for “special circumstances” that turns into a regular occurrence it was common to see individuals incurring expenses in the tens of thousands range. It’s hard to believe if you’re a person who moderates their own expenditures.
Some people see a company policy as something meant to be exploited until a hidden limit is reached.
There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.
This is the soft fraud mentality: If a company offers meal delivery for people who are working late who need to eat at the office and then people start staying late (without working) and then taking the food home to eat, that’s not consistent with the policies.
It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.
Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.
This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).
This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.
Managers didn’t care. It didn’t come out of their budget.
It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.
When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.
If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.
> So if you are astonished that people optimize for their financial gain, that’s concerning.
I’m not “surprised” nor “astonished” nor do you need to be “concerned” for me. That’s unnecessarily condescending.
I’m simply explaining how these generous policies come to and end through abuse.
You are making a point in favor of these policies: Many will see an opportunity for abuse and take it, so employers become more strict.
The idea that a company offering food in some capacity can be seen as generous is, at best, confusing and possibly naïve. A company does this because it expects such a policy will extract more work for less pay. There is no benevolence in the relationship between a company and an individual — only pure, raw self-interest.
In my opinion, the best solution is not to offer benefits at all, but simply to overpay everyone. That’s far more effective, since individuals then spend their own money as they choose, and thus take appropriate care of it.
Yes, but some also have a moral conscience and were brought up to not take more than they need.
If you are not one of these types of people, then not taking complete advantage of an offer like free meals probably seems like an alien concept.
I try to hire more people like this; it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and look out for each other's interests more.
As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.
I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.
If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.
There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.
I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.
Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.
If a company policy says you can expense meals when taking clients out, but sales people started expensing their lunches when eating alone, it’s clearly expense fraud. I think this is obvious to everyone.
Yet when engineers are allowed to expense meals when they’re working late and eating at the office, and people who are neither working late nor eating at the office start expensing their meals, that’s expense fraud too.
These things are really not gray area. It seems more obvious when we talk about sales people abusing budgets, but there’s a blind spot when we start talking about engineers doing it.
Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during the time they stay late you get the equivalent of 20 minutes of work out of them- including in intangibles like team bonding via just chatting with coworkers or chatting about some bug- then the company comes out ahead.
That you present the above as considered "expense fraud" is fundamentally a penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature not a bug.
Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.
It can be a win for both sides for the employees to work an extra 30-90 minutes and have some team bonding and to feel like they’re getting a good deal. (Source: I did this for years at a place that comp’d dinner if you worked more than 8 hours AND past 6 PM; we’d usually get more than half the team staying for the “free” food.)
I have worked in places where the exact opposite of what you describe happens. As OP says, people just stop working at 6 and just start reading reddit or scrolling their phones. No team bonding and chat because everyone is wiped out from a hard day. Just people hanging around, grabbing their food when it arrives, and leaving.
We too had more than half the team staying for the “free” food, but they definitely didn't do much work whilst they were there.
I'm making the case that mandatory unpaid overtime is effectively wage theft. It is legal in the US because half of jobs there are "exempt" from the usual overtime protections. There's no ethical reason for that, just political ones.
At any rate, I think people who want to crack down on meal expenses out of a sense of justice should get at least as annoyed by employers taking advantage of their employees in technically allowed ways.
If an employee or team is not putting in the effort desired, that's a separate issue and there are other administrative processes for dealing with that.
A better option is for leadership to enforce culture by reinforcing expectations and removing offending employees if need be to make sure that the culture remains intact. This is a time sink, without a doubt. For leadership to take this on it has to believe that the unmeasurable benefit of a good company culture outweighs the drag on leadership's efficiency.
Company culture will always be actively eroded in any company, and part of the job of leadership is to enforce culture so that it can be a defining factor in the company's success for as long as possible.
peanuts compared to their 500k TC
I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.
Exactly. I personally have never been in a meeting which I thought was absolutely necessary. Except maybe new fire regs.
Two, several tens of thousands are in the 5%-10% range. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?
I paid a premium for my home height-adjustable desk because the frame and top are made in America, the veneer is much thicker than competitors, the motors and worm gears are reliable, and the same company makes coordinating office furniture.
The same company sells cheap imported desks too. Since my work area is next to the dining table in my open-plan apartment, I considered the better looks worth the extra money.
If someone's unstable motorized desk tips over and injures someone at the office, it's a big problem for the company.
A cheap desk might have more electrical problems. Potential fire risk.
Facilities has to manage furniture. If furniture is a random collection of different cheap desks people bought over the years they can't plan space without measuring them all. If something breaks they have to learn how to repair each unique desk.
Buying the cheapest motorized desk risks more time lost to fixing or replacing it. Saving a couple hundred dollars but then having the engineer lose part of a day to moving to a new desk and running new cables every 6 months while having facilities deal with disposal and installation of a new desk is not a good trade.
It’s like your friend group and the time it takes to choose a place to eat. It’s not your friends, it’s the law of averages.
But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.
Firstly, I should give clear enough instructions that they know whether they should be spending around $600, $1500, or $6000.
Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1000+ region should require my approval.
Thirdly, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company Amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to be answered.
Alex St. John, Microsoft Windows 95 era, created DirectX annnnd also built an alien spaceship.
I dimly recalled it as a friend in the games division telling me about someone getting a 5 and a 1 review score in close succession.
Facts I could find (yes, I asked an LLM):
* 5.0 review: Moderately supported. St. John himself hosted a copy of his Jan 10, 1996 Microsoft performance review on his blog (the file listing still exists in archives). It reportedly shows a 5.0 rating, which in that era was the rare top-box mark.
* Fired a year later: Factual. In an open letter (published via GameSpot) he states he was escorted out of Microsoft on June 24, 1997, about 18 months after the 5.0 review.
* Judgment Day II alien spaceship party: Well documented as a plan. St. John’s own account (quoted in Neowin, Gizmodo, and others) describes an H.R. Giger–designed alien-ship interior in an Alameda air hangar, complete with X-Files cast involvement and a Gates “head reveal” gag.
* Sunk cost before cancellation: Supported. St. John says the shutdown came “a couple of weeks” before the 1996 event date, after ~$4.3M had already been spent/committed (≈$1.2M MS budget + ≈$1.1M sponsors + additional sunk costs). Independent summaries repeat this figure (“in excess of $4 million”).
So:
* 5.0 review — moderate evidence
* Fired 1997 — factual
* Alien spaceship build planned — factual
* ≈$4M sunk costs — supported by St. John’s own retrospective and secondary reporting
Nor how either translates to being a bad hire.
I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?
I've been at startups where there's sometimes late night food served.
I've never been at a startup where there was an epidemic about lying about stolen hardware.
Staying just late enough to order dinner on the company, and theft by the employee of computer hardware plus lying about it, are not in the same category and do not happen with equal frequency. I cannot believe the parent comment presented these as the same, and is being taken seriously.
You can steal $2000 by lying about a stolen laptop or lying about working late. The latter method just takes a few months.
> people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
That doesn't sound like actually working late?
(I still agree with you, though, that this isn't the equivalent of stealing a laptop, even if you do it enough to take home $2,000 worth of dinner.)
Well, it was stolen. The only lie is by whom.
none of it is good lol
GP was talking about salaried employees, who are legally exempt from overtime pay. There is no rigid 40-hour ceiling for salary pay.
Salary compensation is typical for white-collar employees such as analysts in investment banking and private equity, associates at law firms, developers at tech startups, etc.
> legally required maximum working hours
Neither of these apply in the context of full-time salaried US investment banking jobs that the parent comment is referring to.
People work these jobs and hours because the compensation and career advancement can be extremely lucrative.
People who worry about things like limiting their work hours do not take these jobs.
Negotiate for better conditions. If agreement cannot be reached, find another job.
Not an expert here, but from what I heard, that would be a bargain for a good office chair. And having a good chair or not - you literally feel the difference.
Honestly, they aren't any better than my IKEA office chair I stole from my first house when I was a student (and that's been with me for the last 15 years). It probably cost less than 100 €/$.
Ikea stuff is really underrated in this sense.
(I'm not saying you're wrong. I think the real solution is that people should take better care of their physical selves. Certainly there are also people with particular conditions and do need the more ergonomic setup, but I expect that's a small percentage of the total.)
A chair isn't any answer to poor posture. The answer is exercising your core muscles, being aware of your posture, and constantly correcting it.
Just like with "policing", I'd only focus on uncovering and dealing with abusers after the fact, not on everyone — giving most people "benefits" that instead makes them feel valued.
So, if other engineers get their equipment for $6k (beefed-up laptop, 32" or 30" 5k widescreen screen, ergonomic chair, standing desk — in theory amortized over 3-10 years, but really, on the retention period which is usually <3 years in software), we are talking about an increase of $200 on that.
Maybe not peanuts, but the cost of administration to oversee spending and the cost to employees to provide proof and follow due process (in their hourly rate for time used) will quickly add up and usually negate any "savings" from stopping abuse altogether — since now everybody needs to shoulder the cost.
Any type of cap based on average means that those who needed something more special-cased (more powerful machine, more RAM vs CPU/storage, more expensive ergonomic setup due to their anatomy [eg. significantly taller than average]...) can't really get it anymore.
Obviously, having no cap and requiring manager approval is usually enough to get rid of almost all abuse, though it is sometimes important to be able to predict expenses throughout the year.
Essentially, you pay a lot for fancy design.
1. My brothers (I have a number of them) mostly work in construction somehow. It feels most of them drive a VW Transporter, a large pickup or something, each carrying at least $30 000 in equipment.
Seeing people I work with get laptops that take multiple minutes to connect to a Postgres database that I connect to in seconds feels really stupid. (I'm old enough that I get what I need; they'd usually rather pay for a decent laptop than start a hiring process.)
2. My previous employer did something really smart:
They used to have a policy that you got a basic laptop and an inexpensive phone, but you could ask for more if you needed. Which of course meant some people got nothing and some people got custom keyboards and what not.
That was replaced with a $1000 budget on your first day and $800 every year, meant to cover phones and everything you needed. You could also borrow from next year. So if someone felt they needed the newest iPhone or Samsung? Fine, save up one year (or borrow from next year) and you have it.
Others like me who don't care that much about phones could get a reasonably priced one + a good monitor for my upstairs office at home + some more gear.
And now the rules are the same for everyone, so even I get what I need (I feel I'm hopeless when it comes to arguing my case with IT, but now it was a simple: do you have money for it? yes/no).
Yeah, it's hard to convey to people who've never been responsible for setting (or proposing) policy that it's not a game of optimizing the average result, but of minimizing the worst-case result.
You and I and most people are not out to arbitrage the company's resources but you and I and most people are also not the reason policy exists.
It was depressing to run into that reality myself as policy controls really do interfere sometimes in allowing people to access benefits the organization wants them to have, but the alternative is that the entire budget for perks ends up in the hands of a very few people until the benefit goes away completely.
Plus, not to mention the return on investment you get from retaining the talent and the value they add to your product and organization.
If you walk into a mechanic shop, just the Snap On primary tool kit is like 50k.
It always amazes me that companies go cheap on basic tools for their employees, yet waste millions in pointless endeavors.
I think you wanted to say "especially". You're exchanging clearly measurable amounts of money for something extremely nebulous like "developer productivity". As long as the person responsible for spend has a clear line of view on what devs report, buying hardware is (relatively) easy to justify.
Once the hardware comes out of a completely different cost center - a 1% savings for that cost center is promotion-worthy, and you'll never be able to measure a 1% productivity drop in devs. It'll look like free money.
I have a whisper transcription module running at all times on my Mac. Often, I'll have a local telemetry service (langfuse) to monitor the 100s of LLM calls being made by all these models. With AI development it isn't uncommon to have multiple background agents hogging compute. I want each of them to be able to independently build + host and test their changes. The compute load adds up quickly. And I would never push agent code to a cloud env (not even a preview env) because I don't trust them like that and neither should you.
Anything below an M4 Pro with 64GB would be too weak for my workflow. On that point, the Mac's unified VRAM is the right approach in 2025. I used Windows/WSL devices for my entire life, but their time is up.
This workflow is the first time I have needed multiple screens. Pre-agentic coding, I was happy to work on a 14 inch single screen machine with standard thinkpad x1 specs. But, the world has changed.
AMD's Strix Halo can have up to 128GB of unified RAM, I think. The bandwidth is less than half the Mac one, but it's probably going to accelerate.
Windows doesn't inherently care about this part of the hardware architecture.
I am 100x more expensive than the laptop. Anything the laptop can do instead of me is something the laptop should be doing instead of me.
We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.
At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.
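As a rough sanity check on that math, a minimal sketch; the upgrade price below is a made-up assumption, only the $100/hr figure and the roughly one hour a day of waiting come from above:

```python
# Back-of-envelope payback period for a RAM upgrade.
hourly_rate = 100          # $/hr, the lower bound mentioned above
hours_lost_per_day = 1.0   # worst-affected devs were waiting about an hour a day
upgrade_cost = 800         # assumed price delta for the bigger-RAM model (illustrative)

breakeven_days = upgrade_cost / (hourly_rate * hours_lost_per_day)
print(f"Upgrade pays for itself in ~{breakeven_days:.0f} working days")  # ~8 days
```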
This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
Anyway, your choices of what to do about idiocy like this are pretty limited.
Why is that abuse? Having many open browser tabs is perfectly legitimate.
Arguably they should switch from Chrome to Safari / lobby Google to care about client-side resource use, but getting as much RAM as possible also seems fine.
P.S. you can buy a satellite monitor often for $10 from the thrift store. The one I bought was $10.
I don't buy used keyboards because they are dirty and impossible to clean.
The outliers will likely be two kinds:
1) People with poor judgement or just an outright fraudulent or entitled attitude. These people should be watched for performance issues and managed out as needed. And their hardware reclaimed.
2) People that genuinely make use of high end hardware, and likely have a paper trail of trying to use lower-end hardware and showing that it is inefficient.
This doesn't stop the people that overspend slightly so that they are not outliers, but those people are probably not doing substantial damage.
> But that abuse is really capped out at a few thousand
That abuse easily goes into the tens of thousands of dollars, even several hundred thousand, even at a relatively small shop. I just took a quick look at Apple's store, and wow! The most expensive 14" MacBook Pro I could configure (minus extra software) tops out at a little over $7,000! The cheapest is at $1,600, and a more reasonably-specced, mid-range machine (that is probably perfectly sufficient for dev work), can be had for $2,600.
Let's even round that up to $3,000. That's $4,000 less than the high end. Even just one crazy-specced laptop purchase would max out your "capped out at a few thousand" figure.
And we're maybe not even talking about abuse all the time. An employee might fully earnestly believe that they will be significantly more productive with a spec list that costs $4,000, when in reality that $3,000 will be more or less identical for them.
Multiply these individual choices out to a 20 or 40 or 60 person team, and that's real money, especially for a small startup. And we haven't even started talking about monitors and fancy ergonomic chairs and stuff. 60 people spending on average $2,000 each more than they truly need to spend will cost $120k. (And I've worked at a place that didn't eliminate their "buy whatever you think you'll need" policies until they had more than 150 employees!)
If you have a bad machine get a good machine, but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny
I wish that were true, but the current Ryzen 9950 is maybe 50% faster than the two generations older 5950, at compilation workloads.
It's not 3x, but it's most certainly above 1.3x. Average for compilation seems to be around 1.7-1.8x.
It doesn't have to be the cloud, but having a couple of ginormous machines in a rack where the fans can run at jet engine levels seems like a no-brainer.
* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.
* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.
* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.
* If, as a developer, you're repeatedly compiling a large codebase - what you may really want is a build farm rather than the latest-gen CPU on each developer's individual PC/laptop.
Even though I haven't compiled a Linux kernel for over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half hour compiles, mostly when I'm forced to change base header files in a massive project.
This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.
I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3 years old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks.
The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.
The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resources are poor.
The CI system is still slower than my laptop. We are not really concerned about it.
I'm waiting for something to fail because $3000 on a laptop won't make me gain $3000 from my customer.
This is an "office" CPU. Workstation CPUs are called Epyc.
I run 2x48GB ECC with my 9800x3d.
The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.
Their employers made it the culture so that working from home/vacation would be easy.
I've worked fully remotely in a couple of global remote-first companies since 2006, for a decade-plus — it was my choice how I wanted to set up my working conditions, with the company paying for a "laptop refresh" every 3 years which I did not have to use on a laptop.
But even in the few years before the pandemic, I was where you're talking about, working from home a lot more often, and replacing office-work hours with home-work hours, not just adding extra hours at home to my office-work hours.
I think this still depends a lot on the company, though. I know people who are expected to be in the office for 8+ hours a day, 5 days a week, but still bring their laptop home with them and do work in their off hours, because that's just what's expected of them. But fortunately I also know a lot of people with flexible hours, flexible home/office work, and aren't forced to work much more than 40 hrs/wk.
I hate having more than one machine to keep track of and maintain. Keeping files in sync, configuration in sync, everything updated, even just things like the same browser windows with the same browser tabs, organized on my desktop in the same way. It's annoying enough to have to keep track of all that for one machine. I do have several machines at home (self-built NAS, media center box, home automation box), and I don't love dealing with them, but fortunately I mainly just have to ensure they remain updated, not keep anything in sync with other things.
(I'm also one of those people who gets yelled at by the IT security team when they find out I've been using my personal laptop for work... and then ignores them and continues to do it, because my personal laptop is way nicer than the laptop they've given me, I'm way more productive on it, and I guarantee I know more about securing a Linux laptop and responsibly handling company data than the Windows/Mac folks in the company's IT department. Yes, I know all the reasons, both real and compliance-y, why this is still problematic, but I simply do not care, and won't work for a company that won't quietly look the other way on this.)
I also rarely do my work at a desk; I'm usually on a couch or a comfy chair, or out working in a public place. If all I had was a desktop, I'd never do any work. If I had a desktop in addition to my laptop, I'd never use the desktop. (This is why I sold my personal home desktop computer back in the late '00s: I hadn't even powered it on in over a year.)
> ...why so many devs seem to insist...
I actually wonder if this was originally driven by devs. At my first real job (2001-2004) I was issued a desktop machine (and a Sun Ray terminal!), and only did work at the office. I wouldn't even check work email from home. At my second job (2004-2009), I was given a Windows laptop, and was expected to be available to answer the odd email in my off hours, but not really do much in the way of real work. I also had to travel here and there, so having the laptop was useful. I often left the laptop in the office overnight, though. When I was doing programming at that company, I was using a desktop machine running Linux, so I was definitely not coding at home for work.
At the following job, in 2009, I was given a MacBook Pro that I installed Linux on. I didn't have a choice in this, that's just what I was given. But now I was taking my work laptop home with me every day, and doing work on my off hours, even on weekends. Sneaky, right? I thought it was very cool that they gave me a really nice laptop to do work on, and in return, I "accidentally" started working when I wasn't even in the office!
So by giving me a laptop instead of a desktop, they turned me from a 9-5 worker into something a lot more than that. Pretty good deal for the company! It wasn't all bad, though. By the end of the '10s I was working from home most days, enjoying a flexible work schedule where I could put in my hours whenever it was most convenient for me. As long as I was available for meetings, spent at least some time in the office, and produced solid work in a timely manner, no one cared specifically when I did it. For me, the pandemic just formalized what I'd already been doing work-wise. (Obviously it screwed up everything outside of work, but that's another story.)
> My best guess is this is mostly the apple crowd...
Linux user here, with a non-Apple laptop.
It is (maybe was) done by XMG and Schenker. Called Oasis IIRC. Yep
I work mostly remote, and also need to jump between office locations and customer sites as well.
Member of the Windows/UNIX crowd for several decades.
> If you can justify an AI coding subscription, you can justify buying the best tool for the job.
I personally can justify neither, but I'm not seeing how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large and closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.
Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.
„Public whipping for companies who don’t parallelize their code base“ would probably help more. ;)
Anyway, how many seconds does MS Teams need to boot on a top of the line CPU?
Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.
SharePoint sucked from its inception though.
But I would never say no to a faster CPU!
Considering ‘Geekbench 6’ scores, at least.
So if it’s not a task massively benefiting from parallelization, buying used is still the best value for money.
The clock speed of a core is important and we are hitting physical limits there, but we're also getting more done with each clock cycle than ever before.
Especially when, in that same time frame, your code editor / mail client / instant messaging / conference call / container management / source code control / password manager software all migrate to Electron...
* Intel Core i5-5287U: Single-Core Maximum Wattage ~7-12W, Process Node 14nm, GB6 Single Core ~950
* Apple M4: Single-Core Maximum Wattage ~4-6W, Process Node 3nm, GB6 Single Core ~3600
Intel 14nm = TSMC 10nm > 7nm > 5nm > 3nm
In 10 years, we got ~3.5x single-core performance at ~50% of the wattage, i.e. ~7x performance per watt, with 3 node-generation improvements.
In terms of multi-core, we got 20x performance per watt.
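Plugging the figures above in (taking midpoints of the stated wattage ranges, which is my own assumption), a quick sketch of the single-core side:

```python
# Single-core perf and perf-per-watt ratios from the numbers quoted above.
i5_gb6, i5_watts = 950, (7 + 12) / 2    # ~9.5 W midpoint
m4_gb6, m4_watts = 3600, (4 + 6) / 2    # ~5 W midpoint

perf_ratio = m4_gb6 / i5_gb6                              # ~3.8x raw single-core
perf_per_watt_ratio = perf_ratio / (m4_watts / i5_watts)  # ~7.2x perf/W
print(round(perf_ratio, 1), round(perf_per_watt_ratio, 1))
```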
I guess that is not too bad, depending on how you look at it. Had we compared it to x86 Intel or AMD it would have been worse. I hope the M5 has something new.
Care to provide some data?
Single thread performance of 16-core AMD Ryzen 9 9950X is only 1.8x of my poor and old laptop's 4-core i5 performance. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...
I'm waiting for >1024 core ARM desktops, with >1TB of unified gpu memory to be able to run some large LLMs with
Ping me when someone builds this :)
Also got a Mac Mini M4 recently and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (only use M4 for xcode) than being down to raw CPU performance.
[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...
Even better: if you want to automate the whole notarization thing, you don't have a "nice" notarize-this-thing command that blocks until it's notarized and fails if there's an issue. You send a notarization request... and wait, and then you can write a nice for/sleep/check loop in a shell script to figure out whether the notarization finished and whether it did so successfully. Of course, from time to time the error/success message changes, so that script will break every so often; have to keep things interesting.
Xcode does most of this as part of the project build - when it feels like it, that is. But if you want to run this in CI it's a ton of additional fun.
Compilation works fine without notarization. It isn't called by default for the vast majority of compilations. It is only called if you submit to an App Store, or manually trigger notarization.
The notarization command definitely does have the wait feature you claim it doesn't: `xcrun notarytool ... --wait`.
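For what it's worth, a minimal sketch of scripting that; the archive name and keychain profile name are made-up placeholders, and this assumes the Xcode command line tools plus previously stored notarization credentials:

```python
import subprocess

# Submit the archive and block until Apple finishes processing it.
# --wait makes notarytool poll for you, so no hand-rolled sleep/check loop is needed.
result = subprocess.run(
    ["xcrun", "notarytool", "submit", "MyApp.zip",
     "--keychain-profile", "notary-profile",   # placeholder profile name
     "--wait"],
    capture_output=True, text=True,
)
print(result.stdout)
# Check the printed status ("Accepted"/"Invalid") as well as the exit code.
```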
Ugh. Disgusting. So glad I stopped using macOS years ago. (Even if this isn't actually true... still glad I stopped using Apple's we-know-better-than-you OS years ago.)
It is amazing to me that people put up with this garbage and don't use an OS that respects them more.
Your local dev builds don’t call it or require it.
It’s only needed for release builds, where you want it notarized (required on iOS, highly recommended for macOS). I make a Mac app and I call the notarization service once or twice a month.
If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.
And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and -speed data connections, too.
(If you're building with Ninja, CMake's job-pool settings, CMAKE_JOB_POOLS and CMAKE_JOB_POOL_LINK, can limit the parallelism of the link tasks to avoid this issue.)
Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.
Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.
Efficiency is a good product goal: Benchmarks and targets for improvement are easy to establish and measure, they make users happy, thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of just on new features (aka code that's not there yet)
However, they don't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to continue to look for backhanded ways of making things better.
But reading code, and re-reading code, is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code. It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.
When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.
Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...
> a much slower machine
Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.
I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.
[1]: https://news.ycombinator.com/item?id=44501119
Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.
Software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. The faster software builds and tests, the quicker solutions get delivered. If giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.
Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.
I feel like that's the wrong approach. Like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on speakers you expect others to hear it through, but no one sane recommends you default to using those for day-to-day work.
Same goes for programming, I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.
Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.
I get the sentiment but taken literally it's counter productive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.
For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.
Your average consumer is using an ultra-cheap LCD panel that has nowhere near the contrast ratio of the screen you're designing your mocks on, so all of your subtle tints get saturated out.
This is similar to good audio engineers back in the day wiring up a dirt cheap car speaker to mix albums.
Isn't that the opposite of what's happening?
I have decent audio equipment at home. I'd rather listen to releases that were mixed and mastered with professional grade gear.
Similarly, I'd like to get the most out of my high-end Apple display.
Optimizing your product for the lowest common denominator in music/image quality sounds like a terrible idea. The people with crappy gear probably don't care that much either way.
Nobody is writing slow code specifically to screw over users with old devices. They're doing it because it's the easiest way to get through their backlog of Other Things. As an example, performance is a priority for a lot of competitive games, and they perform really well on everything from the latest 5090 to a pretty old integrated laptop GPU. That's not because they only hired rockstar performance experts, but because it was a product priority.
My final compiled binary runs much faster than something written in, say, python or javascript, but my oh my is the rust compiler (and rust-analyzer) slow compared to the nonexistent compile steps in those other languages.
But for the most part the problem here isn't developers. It's product management and engineering managers. They just do not make performance a priority. Just like they often don't make bug-fixing and robustness a priority. It's all "features features features" and "time to market" and all that junk.
Maybe make the product managers use 5-year-old mid-range computers. Then when they test the stuff the developers have built, they'll freak out about the performance and prioritize it.
Usually software gets developed to be just fast enough that people will barely accept it on the computers of their time. You can do better by setting explicit targets, like Google's RAIL model. Optimizing any further is usually just a waste of resources.
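Writing those targets down as an explicit budget makes them hard to ignore in review or CI. A minimal sketch, with figures as RAIL is commonly cited (double-check the current guidance before adopting them):

    # Explicit performance budgets in the spirit of Google's RAIL model.
    # The millisecond figures are as commonly cited, not authoritative.
    BUDGETS_MS = {
        "response_to_input": 100,   # respond to a user action
        "animation_frame":    16,   # roughly a 60 fps frame budget
        "idle_task_chunk":    50,   # keep background work in small slices
        "initial_load":     5000,   # usable on a mid-range device
    }

    def check(metric: str, measured_ms: float) -> None:
        budget = BUDGETS_MS[metric]
        status = "OK" if measured_ms <= budget else "OVER BUDGET"
        print(f"{metric}: {measured_ms} ms (budget {budget} ms) -> {status}")

    check("response_to_input", 140)   # fails the budget
    check("animation_frame", 12)      # passes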
Might be down to the tech culture here, but we don't automatically write the most efficient code either. For a lot of simple projects, these "bad" machines are still capable enough, unfortunately.
Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?
Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty for a normal workload, and some decent gaming or equivalent if needed.
There are also these which look similar https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...
The absolute best would be a 9005 series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X but despite being technically a mobile SOC the 395 gets you 90% of the real world performance of a 9950X in a much smaller and power efficient computer:
https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/...
Similar single core performance, fewer cores, less GPU. Depends what you’re doing.
Also, I've got a gmktec here (cheaper one playing thin client) and it's going to be scrapped in the near future because the monitor connections keep dropping. Framework make a 395 max one, that's tempting as a small single machine.
I'm not very enamoured with distcc style build farms (never seem to be as fast as one hopes and fall over a lot) or ccache (picks up stale components) so tend to make the single dev machine about as fast as one can manage, but getting good results out of caching or distribution would be more cash-efficient.
You might be better off buying a mini pc if you’re happy with an integrated GPU. There are plenty of Ryzen mini pcs that end up cheaper than building around an itx motherboard.
As an aside, being on my old laptop with its hard drive, can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away and I am surprised to realize that I missed it.
I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.
Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.
Rotational disks usually top out at ~85 MB/sec, with seek times up to 12 ms for consumer drives and ~6 ms for enterprise drives (15k rpm).
An SSD could saturate the SATA bus and top out at 500-550 MB/sec, with essentially no seek time; latency would be anything between 50 and 250 microseconds (depending on the operation).
NVMe disks instead can fully utilise a PCIe link and reach multiple gigabytes per second in sequential reads (e.g. PCIe Gen4 NVMe disks can peak at ~7 GB/sec), with latencies as low as 10–30 microseconds for reads and 20–100 microseconds for writes.
As compiling the kernel meant/means doing a lot of I/O on small files, you can see why disk access is a huge factor.
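To make the small-file point concrete, here's a rough sketch using the latency figures above. The file count is an assumed ballpark for a kernel tree, and it ignores caching and readahead, so treat it as an upper bound on the penalty:

    # Rough access-latency arithmetic for a build touching many small files.
    # The file count is an assumption; per-device latencies follow the
    # figures quoted above. Caching and readahead are ignored.
    SMALL_FILES = 70_000

    devices = {
        "consumer HDD (~12 ms seek)": 12e-3,
        "15k rpm HDD (~6 ms seek)":    6e-3,
        "SATA SSD (~0.1 ms)":          100e-6,
        "NVMe (~0.02 ms)":             20e-6,
    }

    for name, seek_seconds in devices.items():
        total = SMALL_FILES * seek_seconds
        print(f"{name}: ~{total / 60:.1f} minutes of pure access latency")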
A friend of mine did work on LLVM for his PhD... The first thing he did when he got funding for his PhD was to get a laptop with as much memory as possible (I think it was 64GB on a Dell workstation at the time) and mount his work directory in tmpfs.
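A quick way to sanity-check that trick on your own machine is to time a burst of small-file writes against a tmpfs-backed directory and a disk-backed one. The paths below are assumptions (/dev/shm is usually tmpfs on Linux; /tmp may or may not be), so adjust them to your setup:

    # Minimal timing sketch: small-file I/O on tmpfs vs. a disk-backed dir.
    # Paths are assumptions; /dev/shm is typically tmpfs on Linux.
    import os
    import tempfile
    import time

    def time_small_files(base_dir: str, n: int = 2000, size: int = 4096) -> float:
        payload = b"x" * size
        with tempfile.TemporaryDirectory(dir=base_dir) as d:
            start = time.perf_counter()
            for i in range(n):
                with open(os.path.join(d, f"f{i}"), "wb") as f:
                    f.write(payload)
                    f.flush()
                    os.fsync(f.fileno())   # force the write to the backing store
            return time.perf_counter() - start

    for base in ("/dev/shm", "/tmp"):
        print(f"{base}: {time_small_files(base):.2f}s for 2000 x 4KiB files")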
Also, disabling building the _dbg package for the Debian build will significantly speed things up. Building that package takes a strangely long time.
Even my dual 7402 with 96 threads and 512 GiB of RAM can't compile a maximal Linux config x86 build in RAM in under 3 minutes.
What I find helps repeated builds is maintaining a memcache server instance for sccache.
ETA: After dropping caches it took 18m40s to build Ubuntu mainline 6.16.3 with gcc-14 on my 5950X, with packages for everything including linux-doc, linux-rust-lib, linux-source, etc. I'd expect the same operation to take about 11m on a 9950X.
It's also not clear whether OP used Clang or GCC to build the kernel. Clang would be faster.
I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.
6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.
It blew my mind. Truly this is more complicated than trading software.
That won't help Angular, because its design doesn't lend itself to such speedups. The compiler produces a change detector function for every reactive structure, such as a class field, so the final app is necessarily huge.
And an evergreen bit of advice. Nothing new to see here, kids, please move along!
There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop. Remote dedicated servers work, but the latency is killing your productivity, and it is pricey if you want a server with a lot of disk space.
This makes me remember so many years ago starting to program on a dual core plastic MacBook.
Also, I’m very impressed by one of my coworkers working on a 13-inch laptop only. Extremely smart. A bigger guy, so I worry about his posture and RSI on such a small laptop.
TL;DR: I think more screen space does not scale anywhere near linearly with productivity.
And I'm not particularly concerned with "abusing" my laptop. I paid for its chips and all of its cores, I'm not going to not use them...
Or the battery; it does not like high temperatures either.
You only regret it when it happens. And it will happen.
I dunno... I've been doing this a long time. And haven't had any of the failures you're talking about. You're aware these machines throttle performance as necessary to prevent from getting too hot?
I also like having things Just So; and keeping windows and browser tabs and terminals etc. all in the same place on several different machines would drive me nuts.
I have one 3:2 aspect ratio 13" laptop, and I do everything on it. My wife had two large external screens for work that she doesn't use anymore, but I never use them. I like the simplicity of staring at one single rectangle, and not having extraneous stuff around me.
> There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop.
Not sure I buy that. My laptop has 20 cores and 64GB of RAM, and sure, maybe a full build from scratch will peg all those cores, but for incremental builds (the 99% case), single core perf is what really matters, and a home server isn't going to give me meaningful gains over my laptop there.
And my laptop can very easily handle the "strain" of my code editor with whatever developer-assistance features I've set up.
Sure, CI jobs run elsewhere, but I don't need a home server for that; the VPS that hosts the git repository is fine.
Look at GPU purchasing. It's full of price games, stock problems, scalpers, 3rd party boards with varying levels of factory overclock, and unreasonable prices. CPU is a comparative cake walk: go to Amazon or w/e, and buy the one with the highest numbers in its name.
Almost all build guides will say ‘get midrange cpu X over high end chip Y and put the savings to a better GPU’.
Consoles in particular are just a decent GPU with a fairly low-end CPU these days. The Xbox One, with a 1.75GHz 8-core AMD CPU from a couple of generations ago, is still playing all the latest games.
I think currently, that build guide doesn't apply based on what's going on with GPUs. Was valid in the past, and will be valid in the future, I hope!
(As far as work goes, I realize this directly contradicts the OP's point, which is the intent. If you know your workflow involves lots of compiling and local compute, absolutely buy a recent Threadripper. I find that most of the time the money spent on extra cores would be better spent on a more modest CPU with more RAM and a faster SSD. And more thoughtful developer tooling that doesn't force me to recompile the entire Rust work tree and its dependencies with every git pull.)
I also do a lot of rust compiling (Which you hinted at), and molecular dynamics sims leveraging a mix of CUDA/GPU, and thread pools + SIMD.
Edit: Took a look at AMD's lineup and realized they did something I got conditioned not to expect: they've maintained AM5 socket compatibility for 3 generations in a row. This makes me far more likely to upgrade the CPU!
https://www.amd.com/en/products/processors/chipsets/am5.html
> all AMD Socket AM5 motherboards are compatible with all AMD Socket AM5 processors
I love this. Intel was known to change the socket every year or two basically purely out of spite, or some awful marketing strategy. So many wasted motherboards.
It completely depends on the game. The Civilization series, for example, is mostly CPU bound, which is why turns take longer and longer as the game progresses.
For Factorio it's an issue when you go way past the end game into the 1000+ hour megabases.
Stellaris is just poorly coded with lots of n^2 algorithms and can run slowly on anything once population and fleets grow a bit.
For Civilization, the AI does take turns faster with a higher-end CPU, but IMHO it’s also no big deal since you spend most of your time scrolling the map and taking actions (GPU-based perf).
I think it’s reasonable to state that the exceptions here are very exceptional.
Also, a language with a GC (not Java) would shine there; it's ideal for a turn-based game.
If you are gaming... then high core count chips like Epyc CPUs can actually perform worse in desktops, and are a waste of money compared to Ryzen 7/Ryzen 9 X3D CPUs. Better to budget for the best motherboard, RAM, and GPU combo supported by a specific application test-ranked CPU. In general, a value AMD GPU can perform well if you just play games, but Nvidia RTX cards are the only option for many CUDA applications.
Check your model numbers, as marketers ruined naming conventions:
https://www.cpubenchmark.net/multithread/
https://www.videocardbenchmark.net/high_end_gpus.html
Best of luck, we have been in the "good-enough" computing age for some time =3
Video processing, compression, games, etc. Anything computationally heavy directly benefits from it.
From dual Pentium Pros to my current desktop - a Xeon E3-1245 v3 @ 3.40GHz built with 32 GB of top-end RAM in late 2012, which has only recently started to feel a little pokey, I think largely due to CPU security mitigations added to Windows over the years.
So that extra few hundred up front gets me many years extra on the backend.
Higher power draw means it runs hotter, and it stresses the power supply and cooling systems more. I'd rather go a little more modest for a system that's likely to wear out much, much slower.
Anyway I've never regretted buying a faster CPU (GPU is a different story, burned some money there on short time window gains that were marginally relevant), but I did regret saving on it (going with M4 air vs M4 pro)
IIRC, running the base frequency at 3.9GHz instead of 3.5GHz yielded a very modest performance boost but added 20% more power consumption and heat.
I then underclocked it to 3.1GHz and the thing barely ran above 40°C under load, and power consumption was super low! The performance was more than a little mediocre, though...
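The rough intuition: dynamic power scales with frequency times voltage squared, and higher clocks usually need more voltage, so the last few hundred MHz are disproportionately expensive. A tiny sketch, with voltages that are purely illustrative guesses:

    # Dynamic CPU power scales roughly as P ~ C * V^2 * f.
    # The voltage figures below are illustrative guesses, not measurements.
    def relative_power(freq_ghz: float, vcore: float,
                       base_freq: float = 3.5, base_vcore: float = 1.20) -> float:
        return (freq_ghz / base_freq) * (vcore / base_vcore) ** 2

    for freq, vcore in [(3.1, 1.10), (3.5, 1.20), (3.9, 1.28)]:
        print(f"{freq} GHz @ {vcore:.2f} V -> "
              f"{relative_power(freq, vcore):.2f}x the power of 3.5 GHz @ 1.20 V")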
I still run a 6600 (65W peak) from 2016 as my daily driver. I have replaced the SSD once (MLC lasted 5 years, hopefully forever with SLC drive from 2011?), 2x 32GB DDR4 sticks (Kingston Micron lasted 8 years, with aliexpress "samsung" sticks for $50 a pop) and Monitor (Eizo FlexScan 1932 lasted 15! years RIP with Eizo RadiForce 191M, highly recommend with f.lux/redshift for exceptional quality of image without blue light)
It's still powerful enough to play any games released this year I throw at it at 60 FPS (with a low profile 3050 from 2024) let alone compile any bloat.
Keep your old CPU until it breaks, completely... or actually until the motherboard breaks; I have a Kaby Lake 35W replacement waiting for the 6600 to die.
Yes you can do everything, but not without added complexity, that will end up failing faster.
We have peaked in all tech. Nothing will ever get as good as the raw peak in longevity:
- SSDs ~2011 (pure SLC)
- RAM ~2013 (DDR3 fast low latency but low Hz = cooler = lasts longer)
- CPUs ~2018 (debatable but I think those will outlast everything else)
My guess is that the most long lived computer gen could be one that still uses through hole components. Not a very useful machine by any metric though I bet.
I don't know about most people, but how long a wafer of silicon keeps working past its obsolescence is just not that important
http://move.rupy.se/file/radxa_works.mp4
Or in a uConsole.
Also Risc-V tablet:
http://move.rupy.se/file/pinetab-v_2dmmo.png
Not as battle hardened as my Thinkpad X61s but maybe we'll get a GPU driver soon... 3 years later...
cf. the Pi 4: 2–3X CPU performance, full 5Gbps USB 3.0 ports, a PCIe Gen2 x1 connector, dual 4-lane MIPI connectors, support for A2-class SD cards, an integrated RTC...
A dud?? What's the issue? The price?
I can only stream 720p out at 20 FPS from my 2711 though, so it only seems to decode well = watching/consuming media. (the future is producing)
The 2712 can stream out 720p at 40 FPS. (CPU)
The 3588 can stream out 720p at 60+ FPS. (CPU)
Edit: HL2 runs at the "same" FPS on each (sometimes 300+ on 3588)...
The 3588 is waaaay more performant per watt, close to Apple's M1.
The IO has been moved outside the SoC, which causes a lot of issues.
SD Card speeds are enough for client side use.
See PUBG that has bloated Unreal so far past what any 4-core computer can handle because of anti-cheats and other incremental changes.
Factorio could add some "how many chunks to simulate" config then? If that does not break gameplay completely.
I just "re-cycle" them.
Bought a 7700X two years ago. My 3600X went to my wife. The previous machine (forgot which one it was but some Intel CPU) went to my mother-in-law. The machine three before that, my trusty old Core i7-6700K from 2015 (I think 2015): it's now a little Proxmox server at home.
I'll probably buy a 9900X or something now: don't want to wait late 2026/2027 for Zen 6 to come out. 7700X shall go to the wife, 3600X to the kid.
My machines typically work for a very long time: I carefully pick the components and assemble them myself and then test them. Usually when I pull the plug for the final time, it's still working fine.
But yet I like to be not too far behind: my 7700X from 2022 is okay. But I'll still upgrade. Doesn't mean it's not worth keeping: I'll keep it, just not for me.
Thinkpad X61s(45nm) DDR2 / D512MO(45nm) DDR2 / 3770S(22nm) DDR3 / 4430S(22nm) DDR3
All still in client use.
All got new RAM this year and when the SSDs break (all have SLC) I have new SLC SSDs and will install headless Linux for server duty on 1Gb/s symmetric fiber until the motherboards break in a way I can't repair. Will probably resolder caps.
Also the 6600 can be passively cooled in the Streacom case I already have; the 5600 is too hot.
I actually run it ~10% underclocked, barely affects performance, but greatly reduces heat/noise. These cards are configured to deliver maximum performance at any cost (besides system instability).
My next GPU I am probably going mid-range to be honest, these beefy GPUs are not worth it anymore cost and performance-wise. You are better off buying the cheaper models and upgrading more often.
More VRAM, and NVLink (on some models). You can easily run them at lower power limits. I've run CUDA workloads with my dual 3090s set as low as 170W to hit that sweet spot on the efficiency curve. You can actually go all the way down to 100W!
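If you want to find that sweet spot for your own cards, the arithmetic is just energy per finished job at each power limit. A minimal sketch; the throughput scaling numbers are hypothetical placeholders, so measure your own workload:

    # Energy per job at different GPU power limits.
    # Throughput numbers are hypothetical placeholders; measure your own workload.
    scenarios = [
        # (power limit in watts, throughput relative to stock)
        (350, 1.00),   # stock limit
        (250, 0.90),
        (170, 0.75),
    ]

    stock_watts, stock_throughput = scenarios[0]
    for watts, rel_throughput in scenarios:
        energy_per_job = (watts / stock_watts) / rel_throughput
        print(f"{watts} W at {rel_throughput:.0%} throughput -> "
              f"{energy_per_job:.2f}x energy per job vs. stock")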
But I had to upgrade to the 3050 because 2GB VRAM is too little for modern titles.
Fun fact: One 6600 core can saturate the 1030 for skinned mesh animations.
But only saturate the 3050 50% = perfect because the world takes much less CPU (you upload it to the GPU and then it's almost one drawcall; more like one drawcall per chunk but I digress) = OpenGL (ES) can render on it at lower motion-to-photon latency without waste = one core for rendering, one for gameplay, one for physics and one for the OS, audio and networking.
So 14nm 6600 + 8nm 3050 is actually the combination I would use for ever.
HL:A runs at 90 FPS with that combo too on the low res Vive 1.
Not that VR is going anywhere, but still peak demanding application.
Check out the new AMD RX 7400 https://youtu.be/jNfmz0BowxM&t=997
And I don't want a desktop.
But I would agree that, when you are purchasing that new laptop (or mainboard), you get the best that you can afford. Pushing your upgrade cycle out even just one year can save you a lot of money, while still getting the performance you need.
Based on this, I strongly believe that if you're providing hardware for software engineers, it rarely if ever makes sense to buy anything but the top spec Macbook Pro available, and to upgrade every 2-3 years. I can't comment on non desktop / non-mac scenarios or other job families. YMMV.
a faster machine can get me to productive work faster.
You’re a grownup. You should know when to take a break and that’ll be getting away from the keyboard, not just frittering time waiting for a slow task to complete.
Tying this back to your point, those limited hours of focus time come in blocks, in my experience, and focus time is not easily "entered", either.
As an ISV I buy my own hardware so I do care about expenses. I can attest that to me waiting for computer to finish feels like a big irritant that can spoil my programming flow. I take my breaks whenever I feel like and do not need a computer to help me. So I pay for top notch desktops (within reason of course).
Configuring devices more generously often lets you get some extra life out of it for people who don’t care about performance. If the beancounters make the choice, you’ll buy last years hardware at a discount and get jammed up when there’s a Windows or application update. Saving money costs money because of the faster refresh cycle.
My standard for sizing this in huge orgs is: count how many distinct applications launch per day. If it’s greater than 5-7, go big. If it’s less, cost optimize with a cheaper config or get the function on RDS.
For me, personally, a better break is one I define on my calendar and helps me defragment my brain for a short period of time before re-engaging.
I recommend investigating the concept of 'deep work' and drawing your own conclusions.
In reality, you can’t even predict time to project completion accurately. Rarely is a fast computer a “time saver”.
Either it’s a binary “can this run that” or a work environment thing “will the dev get frustrated knowing he has to wait an extra 10 minutes a day when a measly $1k would make this go away”
On the laptop you need:
- low weight so you can easily take it with you to work elsewhere
- excellent screen/GPU
- multiple large connected screens
- plenty of memory
- great keyboard/pointer device
Also: great chair
Frankly, what would be really great is a Mac Vision Pro fully customised as a workstation.
God I wish my employers would stop buying me Macbook Pros and let me work on a proper Linux desktop. I'm sick of shitty thermally throttled slow-ass phone chips on serious work machines.
No, those are not the same. There's a reason one's the size of a pizza box and costs $5k and the other's the size of an iPad and costs $700.
And yes, I much prefer to build tower workstations with proper thermals and full-sized GPUs, that's the main machine at their desk, but sometimes they need a device they can take with them.
Sadly, the choice is usually between Mac and Windows, not a Linux desktop. In that case, I’d much prefer a Unix-like operating system like macOS.
To be clear, I am not a “fanboy” and Apple continues to make plenty of missteps. Not all criticisms against Apple are well founded though.
You seem like a reasonable person that can admit there’s some nice things about Apple Silicon even though it doesn’t meet everyone’s needs.
Oh, and the vastly superior desktop rig will also come out cheaper, even with a quality monitor and keyboard.
Nice assumptions though.
It’s not just my opinion that Apple silicon is pretty performant and efficient for the form factor; you can look up the stats yourself if you cared to. Yet, it seems you may be one of those people that is hostile towards Apple for less well-founded reasons. It’s not a product for everyone, and that’s ok.
Gave developers 16GB RAM and 512GB storage. Spent way too much time worrying about available disk space and needlessly redownloading docker images off the web.
But at least they saved money on hardware expenses!
I've had the misfortune of being in a phone signal dead spot at times in my life.
On slow connections sites are not simply slow, but unusable whatsoever.
So it wasn't uncommon to see people with a measly old 13" macbook pro doing the hard work on a 64cpu/256GB remote machine. Laptops were essentially machines used for reading/writing emails, writing documents and doing meetings. The IDEs had proprietary extensions to work with remote machines and the custom tooling.
I nearly went insane when I was forced to code using Citrix.
Both, depending on the case and how much you were inclined to fiddle with your setup. And on what kind of software you were writing (most software had a lot of linux-specific code, so running that on a macbook was not really an option).
A lot of colleagues were using either IntelliJ or VScode with proprietary extensions.
A lot of my work revolved around writing scripts and automating stuff, so IntelliJ was absolute overkill for me, not to mention that the custom proprietary extensions created more issues than they solved ("I just need to change five lines in a script for christ's sake, I don't need 20GB of stuff to do that")... So I ended up investing some time in improving my GNU Emacs skills and reading the GNU Screen documentation, and did all of my work in Emacs running in screen for a few years.
It was very cool to almost never have to actually "stop working". Even if you had to reboot your laptop, your work session was still there uninterrupted. Most updates were applied automatically without needing a full system reboot. And I could still add my systemd units to the OS to start the things I needed.
Also, building onto that, I later integrated stuff like treemacs and eglot mode (along with the language servers for specific languages) and frankly I did not miss much from the usual IDEs.
> I nearly went insane when I was forced to code using Citrix.
Yeah I can see that.
In my case I was doing most of my work in a screen session, so I was using the shell for "actual work" (engineering) and the work macbook for everything else (email, meetings, web browsing etc).
I think that the ergonomics of gnu emacs are largely unchanged if you're using a gui program locally, remotely or a shell session (again, locally or remotely), so for me the user experience was largely unchanged.
Had I had to do my coding in some GUI IDE on a remote desktop session I would probably have gone insane as well.
However, VScode and Zed (my editor of choice) both have pretty decent inbuilt SSH/SFTP implementations so you can treat remote code as if it was local painlessly and just work on it.
Best money ever spent. Lasted years and years.
For CPUs - I wonder how the economics work out when you get into, say, 32 or 64 core Threadrippers? I think it still might be worth it.
That sounds horrible to work on.
The i7-4770 was one. It reliably outperformed later Intel CPUs until near 10th gen or so. I know shops that are still plugging away on them. The first comparable replacement for it is the i7-12700 (but the i5-12400 is a good buy).
At 13th gen, Intel swaps E for P cores. They have their place but I still prefer 12th gen for new desktops.
Past all that, the author is right about the AMD Ryzen 9950x. It's a phenomenal chip. I used one in a friend's custom build (biz, local llm) and it'll be in use in 2035.
Per which benchmarks?
> At 13th gen, Intel swaps E for P cores.
One nit, Intel started adding (not swapping) E-cores to desktop parts with 12th gen, but i3 parts and most i5 parts were spared. More desktop i5 parts got them starting with 13th gen.
Computers are like lightbulbs, and laptops are the extra fragile kind. They burn out. I've never had one more than five years, and after year three I just assume it could fail at any moment -- whether it's an SSD crash, a swollen battery, or drivers breaking after the next OS update.
If you replace machines every three years, like I do, you're not necessarily paying for performance -- you're really just paying for peace of mind.
As for Desktop vs Laptop, that is relevant too. Desktops are typically much faster than Laptops because they are allowed much larger power envelopes, which leads to more cores and higher clock speeds for sustained periods of time. However, there is always a question as to whether your use case will be able to use all 16/32 cores/threads in a 9950X CPU. If not, you may not notice much difference with a smaller processor.
Source for CPU benchmarks: https://www.cpubenchmark.net/compare/6211vs5031vs3862vs5717/...
Ryzen 9 7900x + 128GB ram + 6900XT + arch yet Cursor IDE still manages to be incredibly slow and freezing up after re-focusing. Even base VSCodium is many times faster than that. I can have AAA game + kernel build + 100 Firefox tabs active at the same time, yet a piece of JS makes the hardware seem garbage.
I always recommend people set an honest limit to what they're willing to spend and buy the best that limit can afford.
3 years ago, I bought a 12900k secondhand because it was near the fastest single core and I basically never max out on multi core.
Every time I think about upgrading, I check the benchmarks and decide it's not worth the hassle.
Even now, almost 4 years after the 12900k's release, it's only 13% slower than the very fastest $$$ processors on single core (9950X).
I'm ignoring the Intel Core Ultras because I don't need those problems in my life.
Certainly not ahead of the curve when considering server hardware.
Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour; this would equal the cost of an MBP M3 64GB in about two years.
Apple have made a killer development machine, I say this as a person who does not like Apple and macOS.
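For anyone who wants to redo that arithmetic with their own numbers, it's just cumulative hourly cost against a one-time purchase; the laptop price below is an assumption, not a quote:

    # Always-on cloud instance vs. a one-time laptop purchase.
    # The $0.14/hour figure comes from the comment above; the laptop
    # price is an assumed placeholder; check current pricing yourself.
    HOURLY_RATE = 0.14
    LAPTOP_PRICE = 3000.0   # assumed price of a 64GB MacBook Pro

    for years in (1, 2, 3):
        instance_cost = HOURLY_RATE * 24 * 365 * years
        marker = "<= laptop" if instance_cost <= LAPTOP_PRICE else "> laptop"
        print(f"{years} year(s) of always-on c7i.large: ${instance_cost:,.0f} ({marker})")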
On top of that when you look at price vs performance they are way behind.
Apple may have made good strides in single core cpu performance, but they have definitely not made killer development machines imo.
It's not like objective benchmarks disproving these sorts of statements don't exist.