There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.
Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.
I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.
I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.
I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.
people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
you can fix overwork with a vacation. burnout is a deeper existential wound.
my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.
Something about youth being wasted on the young.
Well given the amount of money OpenAI pays their engineers, this is what it comes with. It tells you that this is not a daycare or for coasters or for the faint of heart, especially at a startup at the epicenter of AI competition.
There is now a massive queue of desperate 'software engineers' ready to kill for a job at OpenAI, who will not tolerate the word "burnout" and might even work 24 hours a day to keep the job away from others.
For those who love what they do, the word "burnout" doesn't exist for them.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I’ll have no gas left in the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
More than enough is too much :)
Obvious priorities there.
This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantages and new information.
you could just as well argue the internet, phones, tv, cars, all adhere to the exact same prisoner's dilemma situation you talk about. you could just as well use AI to rubber duck or ease your mental load rather than treat it like some rat-race to efficiency.
And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society, we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, apply deliberation on whether to use certain technologies, which is a far better approach.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
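For what it's worth, the email-triage piece of a workflow like that can be quite small. Below is a minimal sketch assuming the official OpenAI Python client; the model name, labels, and sample email are placeholders I've made up, not anything the commenter described:

```python
# Minimal sketch of LLM-assisted email triage.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name, labels, and sample email below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

LABELS = ["urgent", "needs-reply", "newsletter", "ignore"]

def triage(subject: str, snippet: str) -> str:
    """Ask the model to assign exactly one label to an email."""
    prompt = (
        f"Classify this email into exactly one of {LABELS}.\n"
        f"Subject: {subject}\n"
        f"Snippet: {snippet}\n"
        "Answer with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip().lower()
    # Anything unexpected falls back to human review rather than being dropped.
    return label if label in LABELS else "needs-reply"

if __name__ == "__main__":
    print(triage("Invoice overdue", "Your payment of $120 is 30 days late."))
```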
That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.
I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.
I don't think the takeaway was meant to really be about capitalism but more generally the complexity of the system. That's just me though.
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
2. I would argue that translation engines have their positives and negatives, but many of the effects are negative, because they lead to translators losing their jobs and to a general loss of the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words, and think of them on their own, because that means they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I think AI classification and similar uses are probably okay, but of course, as with all technologies, we should be cautious about how we use it, since it can also be used for facial recognition, which in turn can be used to build a stronger police state.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEOs talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of Machine Learning). My team just lost. Badly.
Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just... not working out like we were told in the headlines. Doomsday radiologist predictions [1], same thing.
[1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiol...
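To make "LLM-based translation services" concrete, here's a minimal sketch using DeepL's official Python client; the auth key, strings, and target language are placeholders, and nothing here reflects the commenter's actual pipeline, which presumably includes glossaries and human post-editing:

```python
# Minimal sketch of machine-assisted translation with the official `deepl` package.
# DEEPL_AUTH_KEY, the strings, and the target language are placeholders.
import os

import deepl

translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

strings = ["Save changes", "Your session has expired."]
results = translator.translate_text(strings, target_lang="DE")

for source, result in zip(strings, results):
    # In a real localization pipeline the output would still go to a human reviewer.
    print(f"{source!r} -> {result.text!r}")
```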
We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that take away large numbers of jobs. I already know people whose jobs were eroded by AI.
In the real world, lots of people hate AI generated content. The advantages you speak of are only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators, graphic designers, etc, losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.
It may also be the narrative fed to actual employees: saying "You're losing your job because of AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking; AI was inconsequential. If a business is growing, AI can only help. Whether it's growing or shrinking doesn't depend on AI, it depends on the market and leadership decision-making.
You and I both know none of this generative AI is good enough to run unsupervised (and realistically, it still needs deep human edits). But it's still a massive productivity boost, and productivity boosts have always been huge economic boosts to the middle class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.
It's impossible to get benefit from the woods if you've brought a bug net, and you should stay out rather than ruining the woods for everyone.
If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.
An actual offering made to the public that can be paid for.
Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.
Edit: NVM. That 78k pull requests is for all users of Codex, not all engineers of Codex.
What I haven't seen much of is the split between eng and research, and how people within the company are thinking about AGI, the future, workforce, etc. Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?
This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers' "guesses" are actually right that OpenAI isn't really following Sam Altman's:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]
Some points that stood out to me:
- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
The only two people Altman listens to are Peter Thiel and Bill Gates. So yeah, keep posting those messages.
The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US aren't buying it. We're still in shock after everything revealed about OpenAI in the book Empire of AI.
this does not sound fun lol
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
CloseAI.
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One reason for this is, he may be in burnout, but does not want to admit it. Another is, he is looking to the future: to keep options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or to feel like he's working on something that "matters" in some way that his other company didn't.
I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.
Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, and vice versa.
I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.
Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.
There is a reason why there was cult-like behaviour on X among the employees in support of bringing Sam back as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.
It's not impossible in the same way our current understanding indicates FTL travel or time travel is.
In this formulation, it’s pretty much as impossible as time travel, really.
OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.
The goal is corporate punishment, not the rule of law.
The tender offer limitations still are, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)
FWIW, I have positive experiences with many of my former employers. Not all of them, but many of them.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost some political infights (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own).
Another possible explanation is that the writer's just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.
Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.
Having said that, the entire article reads more like a puff piece than an honest reflection.
I liked my jobs and bosses!
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
making human beings obsolete is not the right thing. nobody in openAI is doing the right thing.
in another part of the post he says safety teams work primarily on making sure the models dont say anything racist as well as limiting helpful tips on building weapons of terror… and that AGI safety is basically not a focus. i dont think this company should be allowed to exist. they dont have ANY right to threaten the existence and wellbeing of me and my kids!
It was, however, interesting to learn that it isn't just Meta poaching from OpenAI; the reverse has also happened.
Any gibberish on any company's behalf about "poaching" is nonsense regardless, IMO.
The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.
Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Brothers, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the "too big to fail" kool-aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payments on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy's inspirational moonshot of going to the moon while it was still an idea in his head in his post-coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically, it seems China has a better chance now. Its release of DeepSeek with the full set of parameters seems to be giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the "competition" of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had each competed in 1940 with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?
Will AI's contribution to global warming be just as toxic as global thermonuclear war?
These are the questions that come to mind after Hao’s historic summary.
That's what he did at Segment even in the later stages.
> It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.
This is just wholly irrational for someone whose credentials indicate they are capable of applying critical thinking toward accomplishing their goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely he intentionally planned to give himself a year of vacation at OpenAI, which allows him to hedge a bit while taking a breather before jumping back into being a founder.
Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.
It's more likely that he was there to see how OpenAI was run so he could learn and do something similar on his own afterward.
My criticism is that that's a detail that is being obscured and instead other explanations for leaving are being presented (cynically IMO).
There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.
:-)
This is a very interesting nugget, and if accurate this could become their Achilles heel.
Most top-of-their-field researchers are on top of their field because they really love it and are willing to sink insane amounts of hours into doing things they love.
as an early stage founder, i worry about the following a lot.
- changing directions fast when i lose conviction
- things breaking in production
- speed, or the lack of it
I learned to actually not worry about the first two.
But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.
Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.
One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?
My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.
What definition of AGI is used at OpenAI?
My definition: AGI will be here when you can put it in a robot body in the real word and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai and if it doesn’t know how to do that, you show it, and then it can.
The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).
Edit: And that's to say nothing of the very generous pay...
I also don't trust that people within the system can assess if what they're doing is good or not. I've talked with higher ups in fashion companies who genuinely believe their company is actually doing so much great work for the environment when they basically invented fast-fashion. I've felt it first hand personally how my mind slowly warped itself into believing that ad-tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
... then the next paragraph
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
not if you're trying to replace therapists with chatbots, sorry
OpenAI currently selects for the brightest and most excited young minds (and a lot of money). Bright, young (as in full of energy), excited people will work well anywhere - especially if given a fair amount of autonomy.
Young people talking about how hard they worked is not a sign of a great corp culture, just a sign that they are in the super excited stage of their careers
In the long run, who knows. I tend to view these companies as groups of like-minded people, and groups of people change and the dynamic changes overnight - so if they can sustain that culture, sure, but who knows.
Is it considered young / early career in this field?
I worked there for a few years and Calvin is definitely more of the grounded engineering guy. He would introduce himself as an engineer and just get talking code. He would spend most of his time with the SRE/core team trying to tackle the hardest technical problems at the company.
The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.
It's completely legit to say you'd never take a job where this could be an expectation.
I don't know what the rationale is for not hiring tech writers, other than nobody suggesting it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
People look at it as a cost and nothing else.
I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.
Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and.. well idk i think everyone there knows what Palantir does.
To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:
--- start quote ---
The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.
--- end quote ---
Replace that with OpenAI
[1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...
Seems like an awful place to be.
>...
>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.
I'm kind of surprised people are still drinking this AGI Kool-Aid.
That’s ok.
Just don’t complain about the cost of daycare, private school tuition, or your parents senior home/medical bills.
Why go through all that? A much better scenario would have been OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.
Grok be like. okey. :))
a research manager there coauthored this under-hyped book: https://engineeringideas.substack.com/p/review-of-why-greatn...
There was nothing hypothesized about next-token prediction and emergent properties (they didn't know for sure that scale would allow it to generalize). The "what if it's true?" is part of the LLM story; there is a mystical element here.
Nobody ever hypothesized it before it happened? Hard to believe.
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
Discounting Chinese labs entirely for AGI seems like a misstep, though. I find it hard to believe there won't be at least a couple of contenders.
I wonder whether one year is enough time for programmers to understand a codebase, let alone meaningfully contribute patches. But then we see that job hopping is increasingly common, which results in a drop in product quality. I wonder what value the job hoppers are adding to the company.
I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.
Can't believe there's so little critique of this post here. It's incredibly self-serving.