It seems like you'd need some sort of fairly radical control structure (say, no board, just AI interacting directly with shareholders) to get around this. But even this ignores that the automation is not neutral; it is provided by actors with incentives.
Could be good, but could also be bad if it turns out the AI is able to be even more ruthless in how it treats its workforce.
The good news is that it doesn't need to be very accurate in order to beat the performance of most execs anyways.
Where "very often" means "almost never?"
Every time the LLM CEO gets caught doing a crime and goes to 'jail', the LLMs on the exec board can vote to replace it with another instance of the same LLM model.
Forget 'limited liability', this is 'no liability'.
Because building psychopathic AIs is - at the moment - still frowned upon.
Lots of people have legal obligations.
In this case, I assume you're referring to a fiduciary duty (i.e. to act in the best interests of the company), which is typically held not by the CEO but by the directors.
Ultimately the responsibility to assign daily operation of the company rests with the board, both legally and practically, as does the decision to use a human or AI CEO.
More practically, legal accountability would be placed on the individuals approving the LLM's actions and/or the entity providing the LLM service. The latter aspect is why many AI vendor deals fall through: everything is awesome until the contract comes and the vendor wants to take no responsibility for anything that results from their product.
The job is almost entirely human-to-human tasks: the salesmanship of selling a vision, networking within and without the company, leading the first- and second-line executives, collaborating with the board, etc.
What are people thinking CEOs do all day? The "work" work is done by their subordinates. Their job is basically nothing but social finesse.
So, writing emails?
"Hey, ChatGPT. Write a business strategy for our widget company. Then, draft emails to each department with instructions for implementing that strategy."
There, I just saved you $20 million.
I think that you don't appreciate that charismatic emails are one of the few things that modern AI can do better than humans.
I wouldn't trust ChatGPT to do my math homework, but I would trust it to write a great op-ed piece.
If it were this easy, you could have done it by now. Have you?
In order to save $20 million with this technique, the first step is to hire a CEO who gets paid $20 million. The second step is to replace the CEO with a bot.
I confess that I have not yet completed the first step.
Although I think it's more likely that we're going to enter an era of fully autonomous corporations, and the position of "CEO" will simply no longer exist except as a machine-to-machine protocol.
You can't think of a single difference in the nature of the job of artist/musician vs. lawyer vs. business executive?
Everyone is indispensable until they aren't.
That applies to every call to replace jobs with current-gen AI.
But I can't think of a difference between CEOs and other professions that works out in favor of keeping the CEOs over the rest.
I can think of plenty, but none that matter.
As the AI stans say, there is nothing special about being human. What is a "CEO?" Just a closed system of inputs and outputs, stimulus and response, encased in wetware. A physical system that like all physical systems can be automated and will be automated in time.
They both need social finesse, and CEOs don't need a body.
You might as well ask why people don’t use AI pickup coaches.
My dad had a manager (who was a VP) that he privately nicknamed "VPGPT", because despite being a very polite and personable guy he pretty much knew nothing about the engineering he was ostensibly managing, and basically just spoke in truisms that sounded kind of meaningful unless you do any kind of analysis on them.
I'm not saying that AI would necessarily be "better", but I do kind of hate how people who are utterly incapable of anything even approaching "technical" end up being the ones making technical decisions.
It is good that CEOs also get some of this "You will be replaced by AI!" flak that we hear from big-tech CEOs directed at developers. Do those CEOs think their job is more complex than the software developer job they are so eager to replace? Considering salaries, how many times more urgently should we want to replace the CEO? How about we put that many times the amount of money into it that we are putting into trying to replace developers?
In the end, neither will work out any time soon, judging by current "AI"'s actual intelligence. I think we still need some 2-3 architectural leaps forward for that, and by that I don't mean simply building bigger ANNs and ingesting more data; the returns on that already seem to be rapidly diminishing.
You'll easily find people preaching or selling that sort of thing on Twitter, and the sort of people who are still on Twitter are probably buying it.
(Probably mentally unhealthy people, but still it happens!)
Also, I think it misses the critical point. C-suite executives operate under immense pressure to deliver abstract business outcomes, but the lack of clear, immediate feedback loops and well-defined success metrics makes their roles resistant to automation. AI needs concrete reward functions that executive decision-making simply doesn't provide.
An even more interesting one is: What will we reward?
We've been rewarding labor quantity, as well as quality via higher wages - as motivation and as incentives for more education. This reflected the productivity primacy of knowledge work in modern economies, but that might not be the case down the road.
We've also been rewarding capital. Originally this was a way for the elites to keep themselves in place (a.k.a. economic rents), but in modern times it's been more of an entrepreneurial incentive (a.k.a. economic profits.)
Without the economic profit rationale, there's no reason to reward capital accumulation. Only pro-profit decisions are good for society, pro-rent decisions are awful. If there's no profit to incentivize, capitalism is just bad all around.
If AI becomes a better profit decision-maker than an entrepreneur, any humans left in the loop are nothing but high-rollers gambling with everyone else's money.
Don't steal this idea it's mine I'm going to sell it for a million dollars.
For the soft CEO skills, not so much.
Not that that's a deal-breaker. I have a vision of an AI CEO couched as a "strategic thought partner," which the wet-CEO just puppets to grease the skids of acceptance among the employees.
I'd fully trust an AI CEO's decision making, for a predictable business, at least. But some CEOs get paid a lot (deservedly so) because they can make the right decisions in the thick fog of war. Hard to get an AI to make the right decision on something that wasn't in the training corpus.
Still, business strategy isn't as complex as picking winners in the stock market.
I think an AI could be strong at a few skills, if appropriately chosen:
- being gaslightingly polite while firmly telling others no;
- doing a good job of compressing company wide news into short, layperson summaries for investors and the public;
- making PR statements, shareholder calls, etc; and,
- dealing with the deluge of meetings and emails to keep its subordinates rowing in the same direction.
Would it require that we have staff support some of the traditional soft skills? Absolutely. But there’s nothing fundamentally stopping an AI CEO from running the company.
If you've ever worked at a company that's a chaotic shitshow, you'll know how strong the effect of the CEO is - it always comes down to the guy at the top not being up to it.
The leverage of the role is enormous, and the strength of someone who can carry out this role well for a large company is sky high - not many such people in the world, and they only need one.
So the math all comes out very straightforward: even at obscene looking salaries, they're still a bargain.
It’s been tried before, it didn’t work out well.
But as someone who has the honor of working with a really good CEO, I can definitely say that you cannot automate them. Maybe in some streamlined corporate machine like IBM, but not in a living, growing company.
It would be an interesting experiment to promote an executive assistant to CEO though.
My business experience is that company culture is very important to a company’s success and I’m just doubtful that this can be created through AI.
This is how successful American propaganda is. 39% of people believed something that definitionally could never be true.
So you will find people who make average salaries defending the stratospheric salaries of CEOs because they believe they'll one day be the one benefitting or they've fallen for some sort of propaganda such as the myth of meritocracy or prosperity gospel.
Our entire economy is designed around exploiting working people and extracting all of their wealth to a tiny portion of the population. And we're reaching the point where the bottom 50% (if not more) have nothing left to exploit.
AI and automation could be used to improve all of our lives. It isn't and it won't be. It'll be used to suppress wages and displace workers so this massive wealth transfer can be accelerated.
I get the point of the article. But those with the wealth won't let themselves be replaced by AI, and seemingly the populace will never ask why they can't be replaced until economic conditions deteriorate even further.
It's not that difficult to get into the top 1%. Most Americans earn a top 1% income globally. Even the top 1% of America is only a salary of around $500k. It's possible 19% of survey takers were in the top 1%, or were on a path to make that in the future.
I don't see how it's definitionally untrue to believe you could make $500k a year at some point...Let alone $34,000 a year...
1% of Americans earn a top 1% income. They weren't being asked "do you make more than an amputee kid in Gaza?"
> It's possible 19% of survey takers were in the top 1%…
There's a whole field of math devoted to preventing this. Polling works quite well, all things considered.
But more relevant is the top 1% of net worth is currently ~$11.6M [1], which is vastly more unattainable.
Also, the net worth of the bottom 99% is skewed by house prices. You might be sitting on a house worth $1M but when every other house also costs $1M and you have to live somewhere, you don't really have a net worth of $1M.
[1]: https://finance.yahoo.com/news/among-wealthiest-heres-net-wo...
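For intuition on why a well-run poll can't really be off by 18 points, here is a minimal sketch of the standard margin-of-error calculation for a poll proportion (assuming simple random sampling and a hypothetical sample size of 1,000; the actual poll's methodology isn't given here):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p with sample size n,
    using the normal approximation: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical ~1,000-person poll reporting 19%:
moe = margin_of_error(0.19, 1000)
print(f"{moe:.1%}")  # roughly +/- 2.4 percentage points
```

So if 19% of respondents said they were in the top 1%, sampling error alone can't explain it; the gap has to come from how people interpret the question, not from polling math.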
I don't know how that particular poll was worded, but in general, if you're a politician who rails against the top 1%, you might suffer from the fact that people have widely varying conceptions of who the 1% are.
Whatever the merits of the argument here (and my bolshie side has also flippantly pushed it in the past) the motivation and thrust of the essay needs to be considered in that ideological grounding.
I swear there’s a joke or cautionary tale here somewhere about “first they came for..” or something along those lines. The phrasing escapes me.
Maybe the problem isn’t that you can’t automate a CEO, it’s that the actual tangible work just isn’t worth as much as some companies pay for it, and this thread is touching a few too many raw nerves.
Well, either way it’s hilarious.
Anything that removes the power of CEOs and gives it to the worker should be highly encouraged. Economic democracy is the final frontier of human empowerment and giving workers then means to have democratic control over the economy can only unlock more human potential, not less.
Except replacing CEOs with AIs will not do this.
It won't make the companies worse run - why would workers want to destroy their means to live? CEOs do this with no skin in the game; the workers should take that skin, as they will always be better stewards than a single tyrant.
Where is your evidence that companies won't be worse run? Workers could just vote to give themselves massive raises and hemorrhage the company - ironically, like how some private equity firms operate, but en masse. No one would start companies in this scenario, causing the economy to fall, especially in comparison to companies without this sort of voting system.
Pretty sure the moment you do this, the workers liquidate the company and distribute the assets among themselves, as evidenced by the acceptance rate of voluntary severance offers in many past downsizings, such as the Twitter one.
The investors can organize the government bailouts themselves. You don't need a CEO.
If anything, I would argue that the strategic decisions actually can be automated/performed via broader consensus. With that handled, all that's left is the cartel that CEOs have invented to justify their exorbitant pay packages.
A CEO's job is (roughly) to maximize a company's valuation. It is not to run the company themselves, not to be nice, not to improve the world. I'm not claiming this is what _should_ be, just how it _is_. By this metric, I think Musk has done really well in his role.
Edit: Tangentially related -- at the end of the musical "Hadestown", the cast raise their glasses to the audience and toast "to the world we dream about, and the one we live in today." I think about that a lot. It's so beautiful, helps enforce some realism on me, and makes me think about what I want to change with my life.
It's called "lying to customers and investors".
> And the bad-person strategy works well for him.
Worked. Tesla is not doing that well recently. Others are a bit better.
s/operates/operates on/
So they are a surgeon? Wouldn't be surprised at the damage they cause, considering the business results of so many companies.
CEO compensation is determined by board committees often made up of other CxOs. They write letters to each other's shareholders about how valuable CEOs are to build up the edifice.
I wish my compensation were determined by fellow engineers who "truly know my worth". I'd pay it forward if I were on a committee determining colleagues' pay packets.
The entire point of an MBA is networking for executive roles.
The secret sauce is execution.
Hired CEOs are there to execute a board vision.
Made CEOs are there to execute their vision.
Except, Elon has been largely successful at being the "CEO" of these companies because he attracts talent to them. So...either:
(1) Businesses still will require human talent and if so, I don't see how an AI bot will replace a CEO who is necessary to attract human talent.
OR
(2) Businesses don't require any human talent beneath the CEO and thus a CEO is still necessary, or at least some type of "orchestrator" to direct the AI bots beneath them.
Maybe he’s just that good at what he does?
Every year I feel a bit less crazy in my silly armchair speculation that the Second Renaissance from the Animatrix is a good documentary. If AI "takes over" it will be via economic means and people will go willingly until they have gradually relinquished control of the world to something alien. (Landian philosophers make the case that hyperstitional capitalism has already done this)
I would take the over that this will happen sooner than later -- when it's proven to make a lot of money to have an AI CEO, suddenly everyone will change their tune and jump on the bandwagon with dollar signs in their eyes, completely ignoring what they are giving up.
Except unlike e.g. the metaverse/cryptocurrency bandwagon of yesteryear, there's no getting off.
if (marketCrash) { sendEmailToGovernmentAskingForBailout(); }
The real job is done behind the curtain. Picking up key people based on their reputation, knowledge, agency, and loyalty. Firing and laying off people. Organizational design. Cutting the losses. Making morally ambiguous decisions. Decisions based on conversations that are unlikely to ever be put into bytes.