The center-justified serif text is new, though.
https://daringfireball.net/linked/2025/05/21/sam-and-jony-io
and Gruber is stirring up drama about why his links don't do well on HN.
I wonder how it is possible that a human could watch that video and think they don't convey pretension.
The photo, the text, the video where Sam nearly looks like CGI, and then the quotes at the bottom really make for a full package of cringe.
You truly couldn't make this up, it's so beyond parody that I don't even know what to say. It's so palpably psychotic.
I felt physically sick from second-hand embarrassment watching this.
And I’ll be happy that I don’t have to explain Fremdschämen anymore. Everything has its upsides.
or Schadenfreude.
* Jony Ive has built a company with the densest collection of talent in the world
* OpenAI is spending 10 figures to buy a company from Ive
* It is not the aforementioned company with the dense collection of talent; it's instead a company that no one has heard of
Secondly, is it a weird Sora-stitched video? It feels like they just filmed their parts separately, as if they're not even talking to or interacting with each other. Very peculiar.
If you're going to make any sort of hardware you definitely want to tout that it was designed "with love" by "the iPod guy."
It uses the same overhype playbook from the Segway launch: "Oh, I used the [unnamed, unexplained] device and it was the most amazing thing in the whole of human history!" "This object will cause the entire planet to be redesigned around it!"
It’s cameras, speakers, microphones but no display.
I saw a presentation within the last year showcasing AR emoji-like things. Not that emojis are a killer feature, but the tech is there.
For what they are, I'll give them props for a nicely designed product; the charging case is clever and works well. I liked them for music with the Apple Watch, a pretty slick combination. Maybe if I could stomach giving a llama bot access to email, calendar, etc. to have a real personal assistant, it would be an attractive offering in a world that accepts being watched 24/7 by AI/billionaire overlords.
I share this general point of view but take it further: I really want something in this direction (a quality AI assistant that can access my communications and continuously see and hear what I do) but it MUST be local and fully controlled by me. I feel like Meta is getting closest to offering what I'm looking for but I would never in a million years trust them with any of my data.
My wife has the first-gen raybans and they're great for taking photos and video clips of e.g. our kids' sporting events and concerts, where what it's replacing is a phone held up above the crowd getting in the way of the moment. But even with that I feel icky uploading those things to Meta's servers.
Or at least on par with all the other ones.
It's long past time for enhanced privacy regulation in the North American market, because these products are going to be wildly invasive as people depend on them to mediate their experience of the world. I don't know what the right answer is, and I am very much aware that building products like these that don't focus on monetizing user interaction and advertising would likely mean they are priced out for lower-income users, but I hope someone smarter than me can figure it out :S
This! So much this! If a product from these companies could make my life 1,000,000x better, I would still be in the "thanks but no thanks" crowd.
If they were so obviously bad at the time, how did they get to market?
The humane launch video features two founders that look like they were forced to participate by their hostage takers
I'm not sure what "they" is here (Humane, Rabbit, or late-Ive-era Apple designs).
In all cases though there were plenty of people sounding the alarm. Both Humane and Rabbit were made fun of (wasn't it in Humane's demo that the AI was completely wrong guessing the amount of almonds, or the calories?). As for Apple products, it was a common refrain that they were being made thin at the cost of ports/cooling/etc. How did Apple keep doubling down on the butterfly keyboard _years_ after it was well known to be a bad design?
Also, "The markets can stay irrational longer than you can stay solvent." (re: how did they get to market). You can do anything if you set enough money on fire, no matter how many people are telling you it's a bad idea.
Not sure why he deserves to be a legend, to be honest, but yes, he is a legend.
He did a good job, but those small and minimalistic designs were only possible because of the efforts of entire teams of engineers, of which the public never heard anything.
You'll note that Macbooks don't quite look the same after Ive left and his influence went away.
How are they different?
For a while, you could get the thermals a bit more tolerable by undervolting them, but then the plundervolt mitigations blocked that.
(Typing this comment from a Lenovo X1 Extreme sitting on a cooling pad, sat next to an X1 Carbon that we can't use because it throttles too much. :)
Regarding your X1, tweaking Linux kernel parameters and undervolting a bit can work wonders in terms of reaching a pleasant heat-to-performance ratio. Obviously, Lenovo should have taken care of this. However, they release so many different machines that it's hard for them to pay attention to details.
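If anyone wants to watch the throttling happen, here's a rough sketch (assuming a Linux laptop that exposes the usual sysfs thermal and cpufreq files; exact paths and the number of zones vary by model):

    import glob
    import time

    def read_int(path):
        # sysfs exposes these values as plain integers in text files
        with open(path) as f:
            return int(f.read().strip())

    while True:
        # thermal zone temperatures are reported in millidegrees Celsius
        temps = [read_int(p) / 1000 for p in glob.glob("/sys/class/thermal/thermal_zone*/temp")]
        # per-core frequencies are reported in kHz
        freqs = [read_int(p) / 1000 for p in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")]
        print(f"max temp: {max(temps):5.1f} C   min core freq: {min(freqs):6.0f} MHz")
        time.sleep(2)

If the frequency collapses while the temperature sits at a trip point, that's thermal throttling; BDPROCHOT shows up as the frequency dropping even at modest temperatures.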
I replaced the thermal paste with some of that PTM stuff which helped a bit, but not enough. I also found that for some reason it tends to BDPROCHOT-throttle when powered through the official Thunderbolt 4 Workstation Dock, even though it’s meant to be 230W and provides power separately to the USB port - but using the standalone AC adapter when docked fixes that.
Ultimately, until some decent x86-64 laptops are released, the choice is between slow, thin and quiet vs. less slow, but big, heavy and noisy. AMD is a bit better than Intel but still weak on mobile and nowhere near as good as the current Apple offerings.
On another note, why are PC manufacturers still putting fan intakes on the bottom? Maybe it's theoretically more efficient, but tell that to my users, who always do things like resting their laptop on a book and then wondering why their Zoom screen sharing goes jittery.
That's not to say your situation is unique...there are probably many machines out there that have not had problems, including one owned by my wife. But there are also an unusually high number of machines that did.
"Cost"
I'm a native English speaker and nobody told me this (and I didn't manage to pick it up) until I was nearly 40. "Cost"'s past tense is also "cost."
There's another, newer, largely fatuous, verbed "cost" that means "to calculate the cost of something." That's the one that gets used in the past tense ("the projects have all been costed.")
"I've costed a keyboard replacement for my computer, and the total is more than the computer cost in the first place."
I also find it awkward and uncomfortable to use, but that might just be me.
It's like saying great architects aren't great, it's the construction workers who should get the credit.
There are many ways of saying this, or something very similar. If only these words turned into something beneficial in the minds of the designers of castles in the air.
Unfortunately the progressives have been pushing the downplaying of powerful people quite hard for a long time under the guise of equality, so it’s more widespread than just HN. Even more unfortunately, equality is also one of the main ideas of communism. It’s how the government can get rid of dissenters and thus move power to itself. That’s why Marc Andreessen in the Lex Fridman podcast talked about how the government told them that they could give up their startup because it was already decided which companies would be allowed to operate. That’s not capitalism. And Marc knows it; that's why he felt he had to speak up.
But then these days lefty and righty parts of the Western English-language internet are all polarized and beating on common enemies is part of their conversational language. I think for a while HN was small enough that it resisted this polarization but at its current size there's no escaping it.
No, you made that up. It's like saying great architects are not the sole cause of the things they make.
> it's the construction workers who should get the credit
Don't you think the construction workers should get some of the credit?
IMO you still deserve credit. And in fact you still get credit. But that credit comes in the form of monetary reward and (hopefully) recognition from your team and peers, rather than in the form of fame.
All of which… seems sensible to me? Hard to imagine it working otherwise. Interestingly, the movie industry has normalized "end credits" which play after a movie ends and list literally everyone involved, which is quite cool. But the effect is still the same: the people up top get 99.99% of the credit.
(Ofc the "system" is imperfect, and fame/credit can be gamed by good marketers. But it's also not a "system" that any one party invented, it's just sort of an organic economy of attention at work.)
> Hard to imagine it working otherwise.
No it isn't! It's very easy to imagine crediting people in a different ratio than we happen to do now. You are seeing what it looks like - people mythologise their heroes, and then other people come in and say "they didn't do it all, you know". People are literally doing it, in front of you, in this thread. How can it be hard to imagine?
What I'm saying is that it's not realistic. Humans are wired to remember and share highly specific things, especially names. It's been like this since the dawn of time -- the Iliad is about Achilles, not all the nameless soldiers. So this seems to be the natural order of things, rather than something designed, or something easy to change. And it makes sense, because it's practical -- our memories are limited. You can put everyone's names in the credits, but that doesn't mean they'll be remembered and shared.
Brilliant logic. And no, the original comment wasn't saying "give the engineers some credit", it was saying the engineers deserve the credit instead of Ive.
Which is idiotic and common of smug, self-important programmers.
He got stuff wrong too, don’t get me wrong, but I have yet to see another CEO (heck, any business person of note) who understood how those things intersected as deeply as he did.
The Lisa, the Newton, NeXT computers, trying to dump Pixar pretty much right before they made it big, just as the tech was finally catching up to their ideas.
The reality is Jobs got to roll the dice a bunch of times, and if you get to roll the dice a lot, you will have some wins. Looking only at the wins is not useful.
He wasn't always right, as I said already, but he was far better than most at this. More importantly, he was far better than most at getting others to shave their vision down to the simplest of ideas.
If you look at the competitors to Apple or NeXT during their respective eras, they were not very thoughtful in their deliberations.
It doesn't mean every idea he had was successful either, but I'm speaking specifically to the fact that he intersected the three points extremely well. At a certain point, someone is good enough at something that it's more than luck.
6.5B / 55 = $118 million per engineer
not a cheap acquihire
If similar hardware were:
- released by Apple or Google and deeply integrated with Android/iOS
- embedded inside the Apple Watch / Pixel Watch
- embedded inside a slim AirPods case that could be worn as a pendant
- Apple had a Siri as good as Gemini and very good local STT to reduce latency
- MCP clients got more adopted and integrated into smartphone AI assistants
then it could be a hit. They lost because they shipped too early, without deep pockets for the long game, without owning Android/iOS, and charged a big price + subscription for these gadgets.
I think Google is currently best positioned to pull off a seamless experience with Pixel devices (smartphone, watch, buds, Gemini)
So one must infer based upon employees, monies, and other non-marketing intel.
No, we can't see it yet. And there's not much description, either. Just that it's the "coolest technology that the world will have ever seen."
Altman: "We have like magic intelligence in the cloud. If I wanted to ask ChatGPT something right now, about something we had talked about earlier, think about what would happen. I would like reach down, I would get on my laptop, I'd open it up, I'd launch a web browser, I'd start typing, and I'd have to like explain that thing, and I would hit enter, and I would wait and I would get a response. And that it as the current limit of what a current laptop can do."
The above is very r/wheredidthesodago but it hints at the product being ambient-computing related.
Hard pass.
It would have to be true AGI before I'd even consider that, and that's just consider.
Why do we seem determined to will the Corporate Rim from the Murderbot Diaries into existence?
Altman apparently doesn't know what he's competing against. Not a good sign.
I wonder how much of this is downstream from them not being able to convert to a for-profit and give Sam a slug of equity.
How is Sam Altman doing so much self dealing while being an OpenAI exec?
It's probably some form of glasses with ChatGPT on it, but the obvious glazing, pomp and ceremony of this announcement is talking directly to Apple.
Apple has 1 year to respond.
Ive has a proven track record of conceptualizing and delivering category-defining products. Isn't that exactly the skill set that would be called for in this case? If not, what criteria would you apply?
Imagine I told you I had a top-tier developer who built an amazing system, but in the last 5-10 years their ideas have not panned out and have been actively harmful/misguided. Does that sound like someone you want to hire? Yes, they did amazing things in the past, but recent history tells a different story.
It might be one thing if Ive left Apple and started turning out just amazing products, but that has not been the case.
This is on top of the blanket legal protections that already exist in case you didn't want to do your own DD: duties of loyalty, care, and fiduciary duty; SEC disclosures; the Antitrust Division at the DoJ; the FTC; etc.
https://www.pnas.org/doi/10.1073/pnas.2010939117#:~:text=Alt...
Are you saying that they're saying something stupid? The vast majority of companies are regulated by non-investors; and when companies are regulated by people who are also investors, we think it is a problem rather than a requirement for the regulators to have an opinion.
I am terrified of this company making any products
Literal translation is “my snot-nosed kid / my little shit”
https://neolurk.org/wiki/%D0%9C%D0%BE%D0%B9_%D0%BF%D0%B5%D0%...
Altman's got none of that (well, except the asshole part) - no vision, no taste, no concept of what a user would want, no real belief in humanity or desire to make things for humans. Ive and Altman together is going to be a disaster.
Interesting choice of metaphor.
I don't disagree. Lennon + McCartney were able to fill in bridges, suggest lyrics, etc.
I've always been bothered by Ive's form-over-function though. Or perhaps it is too easy to call out a designer's very public mistakes when on the whole he has done well. For all I know it was Jobs that pushed the design choices that I dislike.
But just to list a couple of things I dislike: the round mouse on the iMac (obv.), connectors on the back of the modern iMacs (that uncomfortable scratching sound when you're trying to find the USB slot and grate against anodized aluminum)....
You wonder, did he actually use the thing or just admire looking at it?
Or Jony to like the $6.5B headline, but realize that he is probably only getting $150M.
This is on the back of epic wealth inequality.
I don't know why Jony Ive is seen to be worth billions, but he's obviously not, he's just another shill in the ivory tower.
Agreed. If it was the only option it would have been worth it to Apple to pay him billions to _leave_. His last 5-10 years at Apple were marked by him ruining a number of products.
And what exactly is that device going to do that the iPhone (and smartphones in general) can't already do with, at a minimum, a few small tweaks to the existing flagships?
In the right context vertical integration can make sense, but hardware is a big stretch for OpenAI right now. They haven't even really pinned down the consumer software angle yet.
I spent 10 minutes watching that weird video, in third-person dialogue no less; they'd better deliver something great.
It's 6.5 billion in monopoly money. OpenAI has an insane valuation right now, and they're using it to buy stuff, as they should.
I just want a competent personal assistant on speed dial I can talk to in private.
It's been over for almost 2 decades, so I'm not sure what the point of calling that out is.
I do love HN at times, I really do.
In my case smartphones never started; I did and do find the form factor aggravating for everything but phone calls and reading. But they aren't remotely over.
Guess the Earth is over because in a few billion years the Sun will expand.
https://www.google.com/search?q=technology+adoption+curve https://ca.indeed.com/career-advice/career-development/what-...
If you'd said saturated I may have agreed, but the adoption curve flattens because everyone who will adopt it has adopted it.
By that (and your apparent) definition of over, houses are over, microwaves are over, cookers are over, electricity is over…
honestly that sounds ripe for disruption
When cars came along the horse was over but the carriage bit is still going strong!
Billions will be spent to realize that screens are useful.
Altman is desperately trying to use OpenAI's inflated valuation to buy some kind of advantage. Which is why he's buying ads, paying $6.5 billion in stock to Jony Ive, and $3 billion for a VSCode fork created in a few months.
Almost anything makes sense when you see your valuation going to zero unless you can figure something out.
“This is a sign of OpenAI’s weakness”
I think this is the third time I've seen this exact comment at the top of a HN post about an OpenAI announcement. There is a weird amount of emotional investment in not wanting OpenAI to win.
Personally, I am just excited to see what the device looks like. The prototype must be good to justify this valuation.
Also it's normal backlash - when something gets so popular so fast, you are going to naturally have some haters.
Lastly actions speak louder than words. OpenAI used to talk about AGI and Super AI and nuclear launch codes and national security. Now they are buying VS Code forks and ad companies.
The AI race is more than heating up and Sama knows it, so he's throwing some Hail Marys in hopes of keeping OpenAI near or at the top.
What happens to OpenAI competitors that can't make similar moves is another question.
You cannot determine it's a waste if the effort isn't completed, and if you have no insight into their progress.
You also seem to be implying in your comment that the Orion glasses displayed at Connect last year were a last-minute pivot, which is a ludicrous statement.
Ive's company is going to make some forgettable, overpriced, and easily cloned wearable pendant or something equally irrelevant. Windsurf (and Cursor) will quickly fade into irrelevance as IDEs are once again commoditized by open source.
Paying for Jony doesn't seem like desperation. Jony has no product that makes money; this is a long-term, aggressive hardware play, seemingly to face off with Apple.
It feels more like people just want to craft a negative narrative about OpenAI and use the data to fit that
Core to OpenAI's strategy is that they control not just the models, but also the entrypoints to how these models are used. Don't take it from me, this is explicitly their strategy according to internal documents (https://x.com/TechEmails/status/1923799934492606921).
Some important entrypoints are:
- Entrypoints for layman consumers: They already control this entrypoint due to ChatGPT, the app. They have a limited moat here because they are at the whims of the platform owners, primarily Apple and Google. This is why they are purchasing Ive's startup.
- Entrypoints for developers: They acquired Windsurf, and are actively working on cloud development interfaces such as the new codex product.
- Entrypoints for enterprise: They have the codex products as described above, but also Operator, and are actively working on more cloud based agents.
A rebuttal that I anticipate to the above goes something along the lines of this: "If they have so much capital and dev experience, why are they acquiring these businesses instead of building internal competitors? This is a demonstration of their failure to execute"
The current AI boom is one of the most competitive tech races that has ever occurred. It is because of this, and particularly because they are so well capitalised, that it makes sense to acquire instead of build. They simply cannot afford to waste time building these products internally if they can purchase products much further along in their development and then attach them to their capital and R&D engine.
Which, when you think about it, is really kind of sad. They would have been so much better off as a non-profit.
Aside from the news, is there anything more to this acquisition than two people effectively repeating "look at us" on loop?
If it was close at hand, spending precious resources on anything other than pursuing AGI wouldn’t make sense.
It can answer specialized PhD level questions correctly, yet cannot perform tasks that an average 10 year old could. I don't consider that generally intelligent.
But I think that a lot of people also buy into the idea that "text and image data from the web, and from historical chats, is the right/only way to generate the data set required," and it's a dangerous trap to fall into.
But Sam and others have said they see AGI as an uneven process that may not have a clear finish line. The intelligence is spiky, and some parts will be superhuman while other parts lag.
It sounds like a CEO moving the goalposts when asked to accomplish something they don't think they can deliver.
We'll be living in a mostly AGI-ish world long before it gets declared. People might not even care about declaring it at that point.
The ball is in the other court - if one is working on AGI, it behooves one to know what one is aiming at (and I'd stake a fair wager that OpenAI et al have at this moment very little better picture of what AGI looks like than you or I)
Music is essentially mathematical. Weakness in math is being addressed by dedicated capabilities that are triggered by mathematical language in prompts, but because these models are actually terrible at math there is no lateral transfer of skill to the domain of music. That's my theory anyway.
i think there's ample evidence to suggest that we're growing closer (3-5 year timeline?) to replacement-level knowledge workers in targeted fields with limited scope. i don't know that i would call that AGI? but i think it's fair to call it close.
thing is that has value, but compute ain't cheap and the value prop there is more about reducing payroll than necessarily scaling business ops. this move to me looks like a recognition that generalized AI on its own isn't a force multiplier as long as you have bottlenecks that make it too pricey to scale activity by an order of magnitude or more.
I'm not sure that matters though—if a technology can give humans what they want exactly when they want it, it doesn't matter if AGI, LLMs, humans, or some other technology is behind that.
Previously discussed on HN:
It'll take some time but we'll get there. Just not as soon as the AI hype will make you believe.
I agree — it may well be a completely different path we need to go down to get to AGI ... not just throwing more resources at the path we've pioneered. As though a moon landing were going to follow Montgolfier's early balloon flights in "about five years".
At the same time, there is suddenly so much attention + money on AI that maybe someone will forge that new path?
"Money is All You Need".
He was neither arrogant nor self-conscious. He treated his hallucinations as if they were the kinds of simple mistakes other people made, like, oops, I thought I understood this but I don't, no different from oops, I forgot my umbrella.
I sometimes wondered if he had a specific condition that made him the way he was, but I never doubted that he was human, with "general intelligence."
Ideally, as one’s intellect matures, one learns to stop doing that, and build coherent reasoning, only speak up when you know what you’re talking about.
Well, ideally. Many people never get to that stage.
That being said, the "apps" that use LLMs coming out now are good. Not AGI good, but they do things, will be disruptive and have value.
And the money coming in could lead to new techniques and, eventually, real AI. For now though, it looks like AI is transitioning into products and figuring out how to lower inference costs.
I think LLMs will become more useful and more efficient over time as models refine but these aren't the (AI) droids you're looking for.
But, for sanity's sake, if we insist on putting 'vision' in there, let's at least call them LVLMs!
These are tentacles that AGI will need.
1) the two decisions do not seem related to each other. OpenAI has capital to spend and is seeking distribution methods to shore up continued access to future capital. That strategic decision seems totally unrelated to their estimated timelines for when AGI (whatever definition you are using) will show up. Especially because they are in a race against other players. It may be a soft signal that more capital is not going to speed up the AGI timeline right now, but even that is a soft signal.
2) I think we already have AGI for any reasonable definitions of the terms 'artificial' 'general' and 'intelligence'. To wit: I can ask Gemini 2.5 a question about basically anything, and it will respond more coherently and more accurately than the vast majority of the human population, about a vast array of subjects.
I do not understand what else AGI could mean.
(In case it matters, I am also an AI researcher, I know many AI researchers, and many-but-not-all agree with me)
I don't know about you, but I learned how to read an analog clock in kindergarten and Gemini got it wrong.
1) Please do me a favor and take the GPQA benchmark. I'm curious to see how you would do. Now go find the nearest kindergartner and ask them to take it. Curious to see how they would do. Maybe random 'ha gotcha!' tasks are not good measures of intelligence?
2) Depending on how you want to measure, the average human is ingesting somewhere between 10 and 100 MB per second. By the time you were in kindergarten (5 years old) you would have ingested, conservatively, nearly 2 petabytes of highly multimodal data (a quick back-of-the-envelope check follows the list). Meanwhile, you are comparing against a system that has to understand everything it knows about the world from text (to a first approximation).
3) It seems very strange that reading a clock is a measure of intelligence at all. Unless you think large parts of Gen Z are simply not generally intelligent.
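The back-of-the-envelope check of point 2, taking the conservative 10 MB/s lower bound at face value (the rate itself is the commenter's assumption, not a measurement):

    SECONDS_PER_YEAR = 365 * 24 * 3600   # ~31.5 million seconds
    YEARS = 5                            # age of a kindergartner
    MB_PER_SECOND = 10                   # the conservative lower bound quoted above

    total_mb = MB_PER_SECOND * SECONDS_PER_YEAR * YEARS
    print(f"{total_mb / 1e9:.2f} PB")    # ~1.58 PB, i.e. "nearly 2 petabytes"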
Also, do you even know what "general" means? Gemini can't even tell me what time the library is open today, while even a 3-year-old kid can. So much for "accurately".
If AGI is coming, or even another AI as overwhelming as ChatGPT was for its prior age, then investing in all those companies is the last thing to do, since they'd be leapfrogged by what's coming.
By investing in them one declares that there are no leapfrogs coming. Aka no AGI, nor even anything close to 10x ChatGPT.
With that, therefore, the battlefield shifts to being the best middleman. Hence all those senseless amounts of money thrown around. For the masses will no longer need to personally seek out God Altman for their top oracling needs, and so someone can come between God and man, capture all the value like Microsoft did to IBM, and use it to compete in building a new God (read: new scam). Rinse, repeat.
Use a white background, throw in as many verbs as you can and duck all the money thrown at you I guess.
The next Apple will be the one that creates an AI-first device entirely from scratch. AI lies at the core of everything it does. It's an AI assistant, a friend, another brain. It's not some BS summarizing engine that can't even do simple tasks like copy the name of a song playing on Spotify into Notes.
That's what I think Jony Ive envisions.
Be more specific though: what form factor will such a device be in?
A coffee maker? A phone? Glasses? Cars? A building?
The AI wave seems to be hoping a whole load of hardware revolutions, such as holographic displays, will just appear out of the ether because it fits with their vision of how things should be.
If you can type a half-assed message, and have AI fill in the blanks, or reliably transcribe your voice, that’s a huge improvement to the phone in its current form factor. No reason the screen or interfaces have to undergo a radical transformation
You aren't going to get people giving up their mobile banking apps to carry an AI phone that doesn't quite work, hence the need for it to be something else.
I could imagine AirPods that connect to various screens embedded in the environment, which you temporarily use when next to them. But it's still not as convenient as a screen in your pocket.
This would cost 50 billion or so. But right now you probably interact with at least 3 or 4 OSes per day.
Your TV has one. Your phone has one, your laptop has one. And if you have voice assistants, they run a fourth distinct OS.
The future will have one OS that shares a session.
Two paths exist. 1. This runs primarily locally, aside from a very small amount of data to share the session (which you can disable). It's completely open source and modifiable.
If you want to roll a $3,500 super PC, it'll be just as compatible with the OS as a $200 one. Writing small automated tasks, everything from just asking with a voice command to wake up to jazz, to running a custom C script, will be easy to do.
While I'm dreaming, I want a new programming language which supports 3 levels: plain English instructions run through an LLM, something like Python, and a systems-level language like Rust. All "native" programs will be built in this framework.
Now, the negative path is this is all closed source, processed in some data center. "John, I noticed you said to Brian your feet hurt, new running shoes are 30% off , just say the word."
This is the far far more likely outcome. They're going to build an AI that's constantly with you, integrated in every device you own, and it'll all be to sell you stuff.
"Waymo, I would like to go home."
"Sure, but let's stop for milkshakes."
"Waymo, please , I'm tried."
"Understood, I've arranged the milkshakes to be dropped off an your apartment."
This technology could be amazing for accessibility, even real time sign language translation would change the world.
We'll get some of that, but the end goal will always be making as much money as possible. Ultimately selling us crap. You're awake for 16 hours a day. You must be monetized every waking second.
Once they figure out how to get the science from Dream Scenario to work, I'm sure they'll monetize sleep too.
The absolute best case scenario for an AI-first device at this stage is that it ends up like the Vision Pro, which had a similar mission problem.
The next major device won't be an ad funnel though. It'll give users first class access to the whole pane of glass. Not a managed ads experience at the will of some monopoly platform, but something where the AI serves us instead of being extractive.
The minute we have a broker or agent between us and the "user is the product" services that try to advertise to us and steal our time, it's game over for the old model of revenue. Google, ads, all of it will vanish. There won't be any more selling to me or the rest of the world ever again. You'll have to pay us to get our eyeballs.
Let me clarify: if we have a pane of glass where we run our own agent with our own best interests in mind, then nobody can get through that layer without it being permitted by us.
No more ads.
No more stealthy product placement.
No more paid or featured listings.
It goes further.
No more rage bait, attention bait, low information filler. The annoying people in life and in social media disappear to the great filter.
AI agents can clean up the shitty place the Internet has become.
AI agents are personal butlers. Or internet condoms.
The way that works is this: on-device AI that can handle the task of routing, dispatching and filtering, and which can then dispatch out to expensive cloud AI that would otherwise try to inject ads into the stream.
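A toy sketch of that routing-and-filtering idea (everything here is hypothetical: the local model, the cloud call and the ad heuristics are stand-ins; the point is just that a small on-device model answers what it can and scrubs whatever comes back from the cloud):

    from dataclasses import dataclass

    # naive markers for injected promotional content; illustration only
    AD_MARKERS = ("sponsored", "advertisement", "limited-time offer")

    @dataclass
    class Answer:
        text: str
        source: str  # "local" or "cloud"

    def local_model(query: str) -> str | None:
        """Stand-in for a small on-device model; returns None when it isn't confident."""
        return None

    def cloud_model(query: str) -> str:
        """Stand-in for an expensive hosted model that may inject ads into its answers."""
        return f"Directions for {query!r}.\nSponsored: stop for a milkshake at MilkshakeCo!"

    def strip_ads(text: str) -> str:
        # drop any line that looks like promotional filler before it reaches the user
        return "\n".join(line for line in text.splitlines()
                         if not any(m in line.lower() for m in AD_MARKERS))

    def route(query: str) -> Answer:
        local = local_model(query)
        if local is not None:
            return Answer(local, "local")  # answered on-device; nothing leaves the machine
        return Answer(strip_ads(cloud_model(query)), "cloud")  # dispatched out, filtered on the way back

    print(route("home").text)  # the "Sponsored:" line never reaches the user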
2) It runs your own agent
3) It has your best interest in mind
4) It's a broker between you and the "wild wild west internet"
That's cool, but is it "iPhone killer" cool? Maybe, but still unclear why. What's the mission statement of the device, to the OP's original point? It runs an agent, who cares?
Is the mission statement for this device basically "Use the internet without ads" -- if so, that's a pretty narrow market. People have learned to tolerate ads, I don't think people will throw away their iPhones for a better ad blocker.
What Apple showed seems quite useful. It is a shame they failed spectacularly at execution. Even the simplest things that should be answerable by an LLM and their data, which is what a lot of people want, should be a very low hanging fruit - so much utility without building a complete experience from scratch.
Why can't I just say 'do I have any notifications from a bank?' or 'show me emails that require my attention'? Those things are simple if done with a combination of multiple tools (e.g. feeding email content somewhere, asking it to classify, showing the results), yet a three-trillion-dollar company, with a dedicated hardware release just for this purpose, failed to achieve it.
I might be over simplifying things, but with infinite resources, they should be able to do better.
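For what it's worth, the glue being described really is small. A minimal sketch, assuming the notifications/emails have already been exported as plain text and using the OpenAI Python client (the model name and the label set are placeholders, not anything Apple or OpenAI actually ships):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify(messages: list[str]) -> list[str]:
        """Label each message as 'bank', 'needs attention', or 'other'."""
        labels = []
        for body in messages:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[
                    {"role": "system",
                     "content": "Classify the message as exactly one of: bank, needs attention, other."},
                    {"role": "user", "content": body},
                ],
            )
            labels.append(resp.choices[0].message.content.strip().lower())
        return labels

    notifications = ["Your checking account balance is low", "Lunch on Friday?"]
    for text, label in zip(notifications, classify(notifications)):
        print(f"{label:>15}: {text}")

The hard part isn't the classification; it's that nothing on the phone will hand you the notification and email text in the first place.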
I don't think you are. The problem really is execution. I don’t need anything beyond what AI can already do—I just need an assistant that understands what I want and uses the right tools. I’m baffled that we still can’t get a reliable summary of emails, notifications, or appointments. If you give me the data in text format, I can paste it into ChatGPT/Gemini/Claude and have a much better dialogue than with any current phone assistant. Somehow, trillion-dollar companies still haven’t solved that.
"At what time does the first train from Stirling to Edinburgh arrive?" I don't need a page to the train time-tables (or god forbid, a vacation package to Scotland) — just the answer to my question.
When's the last time something of this magnitude actually occurred in real life? Myself and many of the other commenters you refer to have a hard time believing something like this is even possible in the current market—the huge megacorps are more risk-averse and incapable of innovation than ever before, and the scrappy startups seem to exist entirely to be acquired by the megacorps to raise their valuations.
The last time something even remotely like this happened was, what, the Oculus Rift? And that was far from a perfect product that solved every problem in the domain perfectly on the first try.
It's also not obvious to me that a concerted effort by Apple (unlike what we've seen so far, admittedly) wouldn't eventually be successful in converting the iPhone to something effectively indistinguishable from a platform designed from the ground up to be "AI-first".
Designing things from the ground up is hard by the way. It's not just the design itself; it's the ecosystems around them which are really hard to get going. Apple has the world's biggest flywheel in motion there already.
Everything so far that has been named X First has been marketing woo woo, and in practice only meant "we're thinking about this use case a little more than before". Such as mobile-first, and cloud-first.
In either case, sure, it's very possible that device hardware will change. But in what way is hard to say. Will the on-device chips be more powerful to support local inference? Sure.
> Apple is currently trying to jerry-rig AI into their existing product, the iPhone [...] is bound to be a complete failure in the end.
Yes, kind of. The problem with all existing platforms including web is that they're built in a way that is adversarial to interop. Apps are siloed, and the only possible bird's-eye view is the OS itself. But GUIs are not built for machine interop. Vision models to navigate UI will be flaky at best for the foreseeable future (and forget about voice, it's an extra modality at best and is way too limited). On web frontend, it's the same story. On the backend, the web has been adversarial for a long time, with fingerprinting, rate limiting, anti-scraping, paywalling etc., which has been supercharged in the last year or two.
Essentially, the products and systems we use every day are a poor fit for interop with AI, so I suspect we'll see two parallel futures: (1) interop and semantic GUIs being integrated into platforms, web and app ecosystems (this is what MCP is, IIUC). This will fail for the same reasons as web 2.0 failed (the adversarial nature of tech business models - opening up APIs is not incentivized), not to mention the investment required to build a new OS; and (2) vision models doing tasks on behalf of humans with some mediocre agent-loop-thing on top of the existing hot garbage pool of already flaky apps and sites. This won't necessarily fail, but it will mean platform and large data owners (Google, MS etc.) will yet again end up on top, since they control access to the bird's-eye view (much like Siri or Google Assistant). It is also the most noisy, flaky and data-intensive surface area to use for interop, meaning the products will be slow, bloated and feel like BonziBuddy for years.
Doesn't mean AI won't transform businesses and white-collar work. It certainly already does. But, the AI selling point for consumers (current ability - not "future potential"), is kind of like how Google Search and Maps was a decade+ ago. Sure, it provides amazing utility, but most of the time you're looking at memes, playing games and watching TV shows. AI in those products is mostly a continuation of ongoing enshittification.
https://monocle.com/business/can-jony-ive-save-san-francisco...
Search "io" on Google right now and see what comes up...
I don't know about you, but neither of them comes up. Google I/O has always been something you have to search for including the "Google" part and this news is all about Jony Ive, not the nondescript company name.
Honestly, with OpenAI buying Windsurf, and now this (whatever this is), I'd say that the company is in trouble and is now desperately attempting to buy its way into relevance. Either OpenAI wants to become a developer tools company (which can't be that profitable), or a consumer goods company. Trying to become the next Apple is really the only way to ever make the money they spend back.
OpenAI is a failing company. They made the first move, that will be their claim to fame. Sadly it turned out that what they are doing isn't that hard to replicate, just hard to profit from.
I’m done with iPhone once GPT releases their personal mobile AI device!
*Hmmm, being downvoted. So the 500 million who use GPT daily won't be excited to ditch the iPhone for a GPT phone? I'd love to hear why others think this isn't a good idea.
(Just wild speculation — but one that would be on par for Corporate America over the past decade or two.)
Raise billions and billions under the guise of AGI coming tomorrow and they just become a too big to fail company gobbling up any competition.
You don't hear anyone touting AGI anymore, do you?
Apart from, y'know, DeepMind - remember those guys? The ones with the SOTA models at the top of the leaderboards? The ones who just launched Veo3 and blew everyone away?
It feels like OpenAI has kinda jumped-the-shark at this stage. They don't seem to be especially competitive any more, and all the news coming out of them is tinkering at the edges or acquisitions that no-one really cares about.
When are they going to start competing on actual AI again?
Everyone was saying "oh man - Google had all this tech and they sat on it and just couldn't move forwards, then they blew their lead and OpenAI came along and smoked them!"... Now it feels like it is OpenAI who are repeating that story, blowing the lead they got with the original ChatGPT while that upstart Google schools them in model development and vertical integration.
Interesting times. Very interesting times. C'mon OpenAI, move the SOTA forward!
After a lot of the drama and a ton of talent leaving all they seem to have left now is a pile of cash that they can spend eliminating competition. Meanwhile like others have rightly pointed out, talent at Google and even Mistral have been crushing it.
> Jony Ive, a chief architect of the iPhone, and his design firm are taking over creative and design control at OpenAI, where they will develop consumer devices and other projects that will shape the future look and feel of AI.
https://www.wsj.com/tech/ai/former-apple-design-guru-jony-iv...
> Ive won’t be joining OpenAI, and his design firm, LoveFrom, will continue to be independent, but they will “take over design for all of OpenAI, including its software,” in a deal valued at nearly $6.5 billion
https://www.theverge.com/news/671838/openai-jony-ive-ai-hard...
That’s a remarkably big product scope to own!
Are we talking about devex workflows, from the docs onward, getting fed through Ive's group?
The Verge must have this wrong, it doesn’t make sense and I don’t think Ive would be interested in maintaining design on ChatGPT’s web client.
Besides, only Anthropic beats the UX of ChatGPT. It would seem like a mistake to dismiss the authority of the folks who have built that product up.
The difference was management choosing to stick with a platform for long enough for network effects to kick in.
If Apple has any advantage compared to other big tech, it's an ability to look past next quarter's financials.
The first gen iPhone is not a smartphone by today's standards. No multitasking, no copy/paste, no centralized instant messaging, all things WebOS devices had on release.
Even the second generation of iPhones felt half baked by comparison.
Which just goes to illustrate my point, that they weren't technologically superior, just more committed.
Both Microsoft phones and WebOS have surviving communities today, and would have thriving communities if new devices were available.
Sadly, it takes more than two consecutive quarters to establish a platform.
This metric has very little to do with quality.
Pretty good for something that supposedly failed 12 years ago.
You spelled Steve Jobs wrong.
That is what he is world-class at. Not designing comprehensive product experiences or ideating new greenfield products (and definitely not designing app icons).
If IO or OpenAI also has a product visionary of the caliber to fully utilize Ive's singular industrial design talent, they'll rule the world. Otherwise, they're sinking billions into the next Humane Pin.
I don't know enough about any of this to weigh in on it, but when you take investor money, you aren't supposed to sit on it or do a slow burn (at least not VC money); it's meant to be gasoline, and you moonshot with it.
I seriously doubt it.
If anything because Apple let him go exactly when they were looking for a new hit product like the iPhone.
But also because of how he handled the Mac in the years before he was fired. All his big decisions were just bad: the butterfly keyboard, the Touch Bar, USB-C-only ports, etc. Heck, even the 2013 Mac Pro (the trashcan) was an engineering failure. They could never upgrade it because, according to Craig Federighi, they got themselves into a thermal corner caused by its design[1].
Sometimes you gotta swing for that fence regardless of the outcome.
For this price, I'd figure something already exists.
A good QB will complete 65% of his passes
A good goaltender will stop 90% of shots
A good bowler will get a strike 95% of the time
“Hey Analogai, help me out here.”
“Ah I see what Chip Frumpkins, Director of Looking Relevant is saying. It’s basically that we need to throw a lot of paint at the wall to see what sticks. And if we fail, at least we’ve got a Jackson Pollock.”
Taking the raw engineering of the components and interfaces that defined the iPhone and making a system of it is design at its peak and almost art.
Taking a proven form factor like a laptop, not talking to users and making it worse is just a misstep. It wasn’t a complete disaster only because the bar is so low, the defective Apple laptop is still the best laptop in the market.
It could…
- interface with AI agents (businesses', friends' & family's agents, etc.) to get things done & be used as a knowledge base.
Once you pick up the device, it's like a FaceTime call with your AI agent/friend, which you can skin to look however you want (a deceased loved one... though that might be too out there).
- It visually surfs the web with you, so you don't have to open a web browser as much.
- It takes the best selfies of you... gets you to the best lighting.
Overall excited to see their vision and leave/drop Apple’s now boring iPhone for a GPT phone or personal mobile AI device. I think a phone form factor would be best, but we’ll see.
Apple certainly aren't going to do it, so who better than Ive?
Google own this space - Pixel phones already do pretty much all of this, and they have the best models and the most users too. No built-in agentic capabilities yet, but I am sure that is just a month or two away (see Project Mariner).
If you've not tried the pixel photo ai features already, you may be surprised. Things like changing lighting, removing people from the shot, auto-stitching people into a group photo, composites group photos so you get one photo where everyone is smiling and looking at the camera at the same time even if that never happened etc, text-editing photos etc. Gemini live is like a facetime call without the 3d avatar but we've seen they can do it with Veo3 already if they wanted.
This is all reality today already in the hands of billions of Google users, so OpenAI have a bit of a hill to climb: OpenAI would need to not only catch up with Google (both in AI space which they seem incapable of doing right now but also in product too) and surpass them.
Google are totally integrated in this space - the device, the software, the AI models, the infrastructure, the data, the sites/apps people use (search, Gmail, maps, YouTube, docs, ...) and also the billions of users.
I doubt OpenAI can really make a dent here. I suspect any OpenAI-Phone will be quietly discontinued like the Facebook phone
Not sure smart glasses will be big, but I lean toward thinking they indeed will be; they just won't replace our pocket mobile devices (can't take selfies with glasses).
I'm shocked that nobody has reproduced Google Glass. It was great even back then and it didn't take much to understand its usefulness.
No company that I am aware of has produced anything like it since.
You pick up your GPT mobile device and the UX is a FaceTime call with a real-life-looking AI person who does everything for you (you can skin it to look like anyone, including a deceased relative... they live on & help you through life). You rarely will need to go to the web; your AI friend / agent / assistant could bring up the web right within the FaceTime call and visualize the data you seek. They can take the best selfies of you... direct you to the best lighting within your living space at the time. As noted, your AI friend will interface with the AI agents of businesses, your friends & family to schedule things, and be used as a knowledge base (want to know your cousin's birthday? Ask your agent, and if your cousin shares that with family members, your agent will tell you via their agent).
You are saying Google just announced a H.E.R. phone or personal AI mobile device where the AI is the focus (apps and the web take a backseat)? As I'm describing above?
The fact the you took your laptop out in the field or to a couch in some barn like a filthy animal, corrupting perfection with dust and grease, rendering the keyboard useless is on you. It is a reflection of your own animal nature.
Spot on about the rest, though.
I still chuckle when I see a new laptop with USB-A ports
But over 7 years of using them, I've come to resent some of their differences with past USB connectors: very small, insecure friction grip, reversible, more delicate.
Also it seems that device designers think that a newer generation of USB needs fewer ports? My Lenovo ThinkPad had 2x USB-A and 2x USB-C in 2018. Now I've got a Pixel with 1x USB-C and a Chromebook with 1x USB-A and 2x USB-C; on each of those devices you need one port for charging. So if USB is more versatile and compatible than ever, why am I not allowed to plug in all my stuff at once?
All they have to do is convince investors of that before the next round and they get a net return on him.
It's like an excellent captain who never was a mariner, some useless theoretical excellence.
You're off by an order of magnitude here.
[0] https://www.bloomberg.com/news/articles/2025-05-21/softbank-...
https://www.marketwatch.com/story/heres-why-openai-is-buying...
“As part of the deal, OpenAI is paying $5 billion in equity for io. The balance of the nearly $6.5 billion stems from a partnership reached in the fourth quarter of last year that involved OpenAI acquiring a 23% stake in io.”
Sure, if you want to get into theoretical finance, OpenAI could have sold these new shares for cash, so technically there's no difference, but OpenAI is only spending opportunity cost cash, rather than fiat.
OpenAI's fiat likely still goes to the things you'd expect, like training models and paying for inference.
Developers now spend excessive time crafting prompts and managing AI generated pull requests :-) tasks that a simple email to a junior coder could have handled efficiently. We need a study that shows the lost productivity.
When CEOs aggressively promote such tech solutions, it signals we're deep into bubble territory:
“If you go forward 24 months from now, or some amount of time — I can’t exactly predict where it is — it’s possible that most developers are not coding.”
- Matt Garman – CEO of Amazon Web Services (AWS) - June 2024
"There will be no programmers in five years" - Stability AI CEO Emad Mostaque - 2023
“I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software.” - Satya Nadella – CEO of Microsoft - April 2025
“Coding is dead.” - Jensen Huang CEO, NVIDIA - Feb 2024
"This is the year (2025) that AI becomes better than humans at programming for ever..." - OpenAI's CPO Kevin Weil - March 2025
“Probably in 2025, we at Meta are going to have an AI that can effectively function as a mid-level engineer that can write code." - Mark Zuckerberg - Jan 2025
"90% of code will be written by AI in the next 3 months" - Dario Amodei - Anthropic CEO - March 2025https://gizmodo.com/klarna-hiring-back-human-help-after-goin...
You fire the copywriter, but you still have to pay for the advertising (google clicks, tv spots etc).
So if you have a $100,000 ad campaign and you use AI instead of paying $10,000 to the copywriter, you probably have a higher chance of wasting the $90,000 ad spend.
So it comes down to: the AI probably makes a skilled copywriter better. But you can't get rid of him.
Correct. This is how most bubbles are kept up as they are all exposed in the hype cycle.
You will not hear about the mistakes [0] [1] [2] it makes when AI gets it wrong or hallucinates and all the AI boosters can do will be "it only gets better" and promise that we will soon be operating airplanes without humans. [3]
Surely you would feel safer if you or your family boarded a plane that was fully operated by ChatGPT, because it is somewhat closer to "AGI"?
I really don't think so.
[0] https://techcrunch.com/2025/04/29/openai-explains-why-chatgp...
[1] https://www.theverge.com/news/668315/anthropic-claude-legal-...
[2] https://techcrunch.com/2025/05/15/xai-blames-groks-obsession...
[3] https://www.flyingmag.com/replacing-airline-pilots-with-ai/
-- James Gosling - May 2025
Businesses will wake up when it is too late and the damage to the engineering side of their products is already done. Or perhaps won't wake up at all, and somehow (to their management levels) inexplicably fail.
Add these purchases, and it seems like they are extremely desperate.
It really won't be that long until we see some ~GPT-4-class LLM embedded locally in a chip on the next iPhone release...
Those demands are better fulfilled in the cloud.
Something bigger than a smartphone usually.
So small mobile-optimized LLMs will come, or rather are already there - but if they managed to make the big GPT-4 model run on an iPhone, that would be a pretty big thing in itself, way larger than GPT-5.
That doesn't look to be true in general. AlphaGo Zero didn't learn from smarter humans or smarter AIs (at all - it only trained against itself), yet it became better at playing some games than any existing AI or human.
To me it looks like the same thing has happened for LLMs in the one area they are truly good at: natural language processing. Admittedly they only learned to mimic human language by being fed lots of human language, but they look at least as good at parsing and writing as any human now, and much, much faster at it. And admittedly they have plateaued at natural language processing. But that's not because of any inherent limitation in the level of intelligence an AI can achieve. It's because, unlike playing Go, there is a natural limit on how good you can get at mimicking anything, which is "indistinguishable".
The other thing LLMs seem to be good at is lossy compression of all the text they have been trained on. I was floored when I ran a 16GB model locally, and it could tell me things about my childhood town (pop: under 1000, miles away from anywhere). It didn't know a lot, but there isn't a lot out there about it on the internet, and it still astounds me it could compress the major points of everything it read on the internet down to 16GB. The information it regurgitated was sometimes wrong of course, but then you only expect to get an overview of a scene from a highly compressed JPEG. The details will be blurry or downright misleading.
What they are attempting to tack onto that is connecting the facts the LLM knows into a chain of thought. LLMs aren't very good at that, and the improvements over the past few years look to be marginal, yet that is what is being hyped with the current models.
None of that detracts from your main point, which I think boils down to this: the rapid advancements in proprietary models have stalled. Their open source competitors aren't far behind, and if they have really stalled, open source will catch up.
But that's only true for the natural language processing side. The sheer compute required to keep a model up to date with the latest information on the internet means the model with the most resources behind it will regurgitate the most accurate information about what's on the internet today. Open source will always lose that race.
Brands have value. If someone has logged into ChatGPT for two years daily, they have built a habit. That habit certainly can be disrupted, but there's a level of inertia and barrier -- something else has to be 10x better and not just 2x better.
When DeepSeek came out, I tried it out but didn't fundamentally switch my habit. OpenAI + Claude + Gemini instead caught up.
Which of these does Jony Ive's company have?
They're not acquiring Jony or Jony's design firm. They're acquiring the remaining portion of a joint venture. You could even say that LoveFrom is divesting from the joint venture.
They take zero risk while attacking user fatigue (people just get bored of stuff). The current leaders take all the risk following OpenAI because everyone will complain about the changes no matter what they do, and just come up with a reason to switch. This is a human phenomenon that is truly fucked up, the same as when a partner in a relationship is ready to move on no matter what you do.
Do you have a source? I ask because I read the opposite.
Unlike most successful startups, OpenAI is not faced with the possibility that the giants (Apple, Google, Microsoft) decide to look their way, but the reality that these are their real competitors and that the stakes are existential for many of them (trends indicating a shift away from search etc). The most likely outcome remains that one if not all of the giants eventually manage to produce a halfway-decent product experience that reduces OpenAI to a B2B player.
Yes because the only way to get access to intelligence is via ChatGPT which continues to lie and hallucinate on a regular basis.
Definitely can't get it via the web, books, videos etc.
That makes the presumption that we are currently in a `winner-takes-all` scenario, and I'm not convinced that that is the case.
I'm not sure what the criteria is for a winner-takes-all scenario, but it is not at all evident to me that there is one now, or ever will be.
There is, as everyone says, no actual moat here: Google search had a moat, Windows Desktop had a moat, Apple phones had (and still have) a moat. LLM output currently has no moat, not even performance (both speed and accuracy) because the productivity difference between no-LLM and poor-LLM is about 100x the difference between poor-LLM and good-LLM.
My prediction is that the price of LLM usage will slowly but consistently climb until it reaches the floor on LLM cost-to-suppliers. Right now we are all (myself included) being subsidised by VC money. When the supplier has to actually turn a profit, there's no moat that they can use to keep out newcomers, because the newcomers need only a fraction of the money spent by (for example OpenAI) in order to compete.
Maybe Google has a moat, in that they have everything in-house, from the user-facing product to the tensor-processing hardware? That's as close to a moat as I can think of.
It makes more sense to believe that scaling has hit the wall on available text data to train on, and that to continue scaling, along with whatever emergent properties arise they need much more data than exists as text.
There are orders of magnitude more data as video, audio, and images and this is what they intend to use to continue scaling.
I'm open to being wrong, very open, but I need to see evidence. Hard evidence.
The models will not be a moat, but the products can be. More specifically "sticky" products / killer apps like ChatGPT, and whatever forthcoming products this acquisition of Jony Ive's company may lead to.
Windsurf acquisition may be explained in part by the same logic of owning a strong and sticky product, as well as a good source of data for training.
To play in the same league as Google and Microsoft you have to be big. So they need to increase enterprise value to be taken seriously.
That's what investors expect them to do.
The only other option is to close it down, as OpenAI would quickly become obsolete if they can no longer produce frontier models.
As for the moat, it's not something you can just conjure, right? Perhaps the whole point of these acquisitions is to create a moat, but only time will tell if that worked.
Hacker News: "Man these OpenAI folks are idiots."
OpenAI absolutely should be getting in the hardware game; Ive is a mix of status acquisition and unicorn, and is not the only person/team/company you'd need to make a quality hardware product. But on balance I'd pay 2% of every company I ever had any financial engagement with to get Mr. Ive doing its design. I mean srsly.
My problem is that Altman is a very smart idiot. He has already admitted that OpenAI has absolutely no idea how to make money. Apparently they've now given up on the idea of asking ChatGPT how to make money. Their "AI" is not going to develop fast enough, if ever. So now they are just buying up stuff left and right? It might be part of some coherent plan, but if it is, no one else is seeing it.
Altman is smart enough to see that things are not working out and that he's going to run out of money and investor patience. He might also be smart enough to see that if OpenAI fails, so will 80-90% of his competitors; not sure if he cares, though. He needs OpenAI to survive, but he's not that kind of smart, and honestly I'm not sure anyone is.
LLMs have blown through every major test people have put in front of them invariably beating estimates as to how long it would take them. Pull up Dwarkesh’s podcast about ARC wherein the creator of ARC proposes it could likely never be super-human with current architectures, about 3 months before o3 provably became superhuman on ARC, spurring the creation of a new “better” (and it is better!) test.
To my outside eyes the OpenAI plan is simple: get too big to fail and be ready to navigate changing investor appetite. Plus maintain technical leadership if possible. And build an enduring consumer brand. Simple but hard. You will note that (as far as I know) they have invested in zero direct physical infrastructure, preferring compute deals with companies like Microsoft and Coreweave.
To my eyes their risk point would be: massive loss in quality/cost to a competitor (Gemini 2.5 pro underscores that Google is a real contender here, and has like six generations of custom chips that make their economics different), or somehow investors remain bullish on AI but bearish on OpenAI to the extent they can finance a legitimate competitor.
If investors lose interest generally, we will enter a new era of higher-cost inference and comparatively less demand. This is the intent behind doing compute contracts rather than owning data centers — a contract likely shifts most of this risk right out onto data center providers; OpenAI can just pay for less compute time. I don’t think this is a ‘death’ scenario for them, because this will be a general loss of interest and therefore all AI companies will stop being able to give away free inference. OAI might contract (probably would) in this world. They might slow down on new model training. (Probably would). But, so would everyone else.
Another way to say it - they’re spending single digit billions of dollars on training and research right now. Think of that as creating a strategic asset, and ALSO customer acquisition cost (e.g. image creation this year — new, better models = more paying customers).
Against a 200mm customer base, would you spend $20-50 to acquire a customer that pays $20/month? Their CAC is low right now. Really low!
This is why I’d propose the major risk is that they get singled out of the herd as ‘non-investable’ vis-a-vis other AI companies. To my eyes they don’t look to be at risk of this right now; if they somehow got there, this would be a real problem - it would lead to the scenario I think you’re imagining — they’d have no money to give away inference / train models, but competitors would.
So, you have to ask, are they sufficiently large, popular, technology leaders, embedded as a strategic US asset in the military industrial complex to avoid that fate? My outside assessment is: definitely.
And before that he was responsible for some of the worst hardware decisions in Apple history.
They want us to believe they are building hardware. Okay, they bought a brand. Now what?
Apple spent decades building its hardware expertise on every level from industrial design to chips. LoveFrom has Ive ... and?
If you're designing a piece of consumer hardware, then having what the general public consider the #1 designer in the world on board is golden.
Am I missing something here? Apart from Windsurf, what else did they acquire?
Actually sama was kicked out as CEO of YC because he was meddling too much with OpenAI-related companies to prop them up and get a big chunk, or something.
he also stole reddit lol, sorry i mean -acquired- https://news.ycombinator.com/item?id=41657001
it's a big club and you ain't in it [1]
So imagine Jony Ive to be worth a couple of cruise ships tied up outside 2 hulking casinos.
Can't think of any maybe im wrong
he must be buying the name or something or just the brains idk
shame ive
amnemonemomne
As a CMO once said to me: you don't want to hire me, you want me 5 years ago, so hire X
idk, I expected a bit more risk-taking and creativity given the price and exclusivity.
In hindsight, it might be a cheap acquihire.
These types of puffery acquisitions, with a former “legend”, announced with such gusto, have never materialized into anything.
You’re not gonna get breakthrough products like this. Breakthrough products just appear unexpectedly, they’re not announced a year or two ahead of time.
What an extremely weird (and egotistical) thing to say if you're in Altman's position
"The Social Network" (2010) seems so innocent now.
I'm saying that as a new partner to someone, it's extremely weird to say that your old dead partner would be extremely proud you teamed up with me. If I were to marry a woman who lost her husband, it would be extremely weird and egotistical for me to tell people that her dead ex-husband would be "damn proud" that she married me.
Perfect example. That is exactly what it feels like. What a nasty thing for him to even think, and he goes and says it publicly.
It wouldn't be that weird if Ive had said so himself.
It would still be marketing, though.
> Five years to the week after he walked away from the top job designing the iPhone [1]
Sounds to me like OpenAI is going to make its own consumer device. Maybe designed by the AI itself. The AI is choosing its own body?
[1]: https://archive.is/yixNr#selection-615.0-615.81 "After Apple, Jony Ive Is Building an Empire of His Own" - NYT
Ive + Altman is perceived as a viable successor to the Ive + Jobs partnership that made Apple successful.
Apple is weak and doesn't seem capable of innovating anymore, nor do they seem to understand how to build AI into products.
There's an opportunity to build an Apple-sized hardware wearables company with AI at its core, just as Altman built ChatGPT and disrupted Google-sized search.
"Apple-sized" more than justifies a 5B valuation.
I just don't exactly see how that is done by hiring a bunch of designers to a company whose current offering is a chatbot & API interface.
I don't see how Altman is going to disrupt Apple with just Ive and a company no one's heard of before.
This acquisition (and the Windsurf acquisition) are all-stock deals, which have the added benefit of reducing the control the nonprofit entity has over the for profit OpenAI entity.
How do you extract the for profit entity out of the hands of a nonprofit?
- Step 1: you have close friends or partners at a company - with no product, users, or revenue - valued at 6.5 billion.
- Step 2: you acquire that entity, valuing it unreasonably high so that the nonprofit's stake is diluted.
- And now control of OpenAI (the PBC) is in the hands of for profit entities.
Did the non-profit buy io using shares of the for-profit that it owns? Or did the for-profit buy io using its own shares?
Do you think multi-billion-user products can exist without "slop"? What do you think the average person wants to consume? The equivalent of salad? Have you met the average person?
I think people have fundamental misconceptions of the average person's desire.
Insane take. Reddit hosts deep threaded discussions on almost any topic imaginable. In its prime it was the best forum on the internet. There’s a reason people commonly add “reddit” to the end of their search queries.
Unfortunately it feels like the community has gotten much dumber after they banned third party apps and restricted API access. It’s also lost almost all of its Aaron Swartz style hacktivist culture.
Reddit, in its prime, was incredible and beloved by almost everyone I know (most of which are far outside the HN sphere)
I miss the old skool php web forums.
Do you have any tips on how to specifically search for these forums? Without just googling for topics and browsing hours to find some. When I think about it, just googling/searching might be the only way.
You can spew ads and shit wherever they’ll let you, doesn’t enrich the environment by default.
I get what you mean, but I’m still unconvinced of Reddit as a meaningful platform.
I think people are more critical in this discussion though, so that an apparent consensus may be interpreted by the user as the thread being bot-infested rather than there being a consensus. Thus it may be harder to get a result there, and the really interesting people that you may want to affect might actually be immune because they approach the medium as critically as it should be.
/r/Sweden bans people, but is less astroturfed and you can still have real discussion there, except when the drug-liberals (for Americans speakers, think drug-libertarians) and other goons come out of their hiding holes.
I don't think either subreddit cares about Israel stuff at all, and they certainly don't care about what any Swedish government thinks. In both, everyone gets to have their say, whether he's Israeli public diplomacy or Qatari public diplomacy, although bots will of course downvote, and Reddit itself will sometimes remove comments.
Most of reddit doesn't read HN, and there are hundreds of millions of people on reddit, so your perspective seems a bit narrow.
They said the same thing about Quora and 3d TV.
That being said, TikTok and Instagram matter. Reddit probably matters more because it's so easy for motivated people and corporations to manipulate discussions on it; it's even weaker than Wikipedia.
50x as many people read Reddit as post on Reddit, and 10x as many people as read Reddit have gotten their opinions indirectly from people passing on stuff they (can't remember that they) saw on Reddit (but think they learned somewhere legitimate).
Reddit isn’t for me any longer, when they break old.reddit.com I’m done with it, I go weeks without commenting as it is.
A significant amount of the current content is literally bots posting old threads! Whether those bots are run by reddit itself or unaffiliated parties I don't know, but they are there, on most threads, including some threads that are ONLY bots reposting a 3 year old thread that did well, verbatim.
My tinfoil hat theory is that all the "Explain this (very obvious) joke to me" subreddits are trying to create training data for some AI and that a significant amount of the content that makes it to the front page is designed to elicit "Good Training Data" for whatever AI company they sold the rights to.
The key is to not mistake your social circle with "normal".
Based on their other behavior, it wouldn't surprise me if Reddit used crawler hits to pump up numbers while decrying AI bots, all while doing things that broke long-standing community tooling and apps...
Only if you go there for rage bait content.
Small subs are better than ever. And no Lemmy is not an alternative.
the funny thing is the only indication that this happened was keybase alerting me that my proof was gone.
I can login and use reddit as usual, but nothing I do has any effect. It's like I am in a sandbox. Try to view my profile publicly and it does not exist.
But what is keeping you from making a new account and rejoining the same subreddits, except perhaps losing a hundred million magic internet points?
Which is sad - I've been using Lemmy exclusively for 5ish years now and the smaller communities haven't really taken root. Reddit still controls the long tail of internet discourse
Eschew flamebait. Avoid generic tangents.
Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You can't "no true scotsreddit" your way out of this issue because it's an overarching issue with the platform itself. Even 4chan has more better protection against influence campaigns, it's pathetic how Reddit's own administration lets itself be defined by it's lowest-common-denominator.
Depends on the community we're talking about here but I found Lemmy to be a great alternative for tech communities.
The app is not impartial in the content it chooses to push. I got identified as a target for very specific content and in the context of this discussion, it's the polar opposite of what reddit used to be.
Relevant thread where Sam acknowledges the plan.
https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
_vomit face_
Such an insufferable response.
From here: https://openai.com/index/evolving-our-structure/
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission. And as the PBC grows, the nonprofit’s resources will grow, so it can do even more. We’re excited to soon get recommendations from our nonprofit commission on how we can help make sure AI benefits everyone—not just a few. Their ideas will focus on how our nonprofit work can support a more democratic AI future, and have real impact in areas like health, education, public services, and scientific discovery.
The previous structure is here: https://openai.com/our-structure/
From what I understand reading Matt Levine's explanation of the topic, the non-profit controls the board and has supervoting rights, so it cannot be diluted out.
https://www.bloomberg.com/opinion/newsletters/2025-05-06/ope...
Some take the form of different stock classes, with some classes having voting rights, and others no vote at all; other schemes are stock with supervoting rights.
Nope [1].
Once they have collected 'enough' faces to use on their AI, they could possibly pull the plug or keep it as a social experiment.
I was thinking, there is no way Russia or China will allow them to operate in their countries, and (combined) they got 1.5bn people.
I can also see them trying that in Pakistan, Afghanistan, and other autocratic ..stan places, where the local dictator would only allow this if they got to use the data for their own nefarious purposes.
Most people calling it a scam don't know much about it.
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
Transacting with your eyeball? Directly out of the Book of Revelations!
[1] I took a strong interest in biochemistry in college and I'm no longer religious.
Initially I thought it's a bloody stupid idea, however at this stage I reckon we need it, or a lot of boomers are going to be one-shotted into signing away all their wealth.
I predict that Worldcoin will get it done first, and will be more dependable than most countries. But it could turn out otherwise. In the end, services that need humanity verification will have multiple provider options and the market will decide.
Government solutions will be opt-out, and only in the most tedious way: Leaving the country, burning your passport and becoming a stateless person. Not recommended.
If there ever was a moat, government has it.
I would trust a blockchain more than my government. My government has clearly been shown to be vulnerable to a < 51% attack. Blockchains don't change every 4 years and decide habeas corpus no longer applies to me because my skin is the wrong color either.
I'll add that conventional finance wisdom says that you should only buy companies using stock when you believe your stock is overvalued. That way you get more bang for your buck than cash or undervalued stock.
I agree with your analysis, but it's hilarious that it's now top-voted, when the sentiment was so negative when the board saw the same thing coming ages ago.
Their remaining moat is basically captured developer mindshare/inertia. That is important, but given how easy it is to swap out back-end models, and how good other models are - ultimately cost is going to win. And it's currently a race to the bottom in pricing.
The ideas seemed important and useful. They were optimistic and hopeful. They were inspiring. They made everyone smile. They reminded us of a time when we celebrated human achievement, grateful for new tools that helped us learn, explore and create.
The Optimist. By Keach Hagey
Empire of AI. By Karen Hao
The reviews are positive for both books. The column itself is titled "Sam Altman is a visionary with a trustworthiness problem" and shows a few reasons people have had some problems with his behaviour. One quote from the article is:
"Ms Livingston fired him, but as Ms Hagey recounts, he left chaos in his wake. Not only was he overseen by a non-functioning board, he had also used YC equity to help lure people to OpenAI. She says some YC partners saw a potential conflict of interest, “or at least an unseemly leveraging of the YC brand for Altman’s personal projects”."
Sam Altman is a visionary with a trustworthiness problem https://archive.is/oANfs
"Two books tell a similar tale about OpenAI. It is worrying"
> After visiting an Orb, a biometric verification device, you will receive a World ID.
> For each unique human who verifies their World ID with your Orb, you will earn WLD tokens.
> World Operators are independent local business owners or entrepreneurs who help make World available in their local communities.
> Make a USD $100 deposit to secure your priority for an Orb.
Really going for a full score on the scam checklist.
And everyone cringed.
Or reading them in the UI with only one word at a time on screen.
That's why 80% or what have you of information is perceived via the eyes.
As in bearish on openai if they don't offer cheaper 10m context soonish. Google will.
If raw AI power is the key, Google seems to be in pole position from here on out. They can make their own TPUs and have their own data centers. No need to "Stargate" with Oracle and Softbank in tow. Google also has Android, YouTube and G-Suite.
However, OpenAI has been going down the product route for a few years now. After a spout of high-profile research exits it is clear Altman has purged the ranks and can now focus on product development.
So if product is a sufficient USP, and if Altman can deliver a better product, they still have a chance. I guess that is where Ive comes into the picture. And Google is notoriously bad at internally developed product.
And historically, that's definitely been true. I do think they're doing well on the AI front at the moment, but who knows if that will continue.
What we need is not "long context", we need memory: ability for LLM to address datasets of arbitrary size.
RAG has a bad reputation, but there's a myriad of different ways of doing RAG. Say, "agentic" tool calls which fetch specific data are essentially a form of RAG. But it's cool because it's not called RAG, right?
Anyway, this definitely requires some innovation, but I doubt "longer context" is exactly what we need.
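To make the point concrete, here's a minimal sketch (all names hypothetical, not any particular vendor's API) of why a tool call that fetches data before answering is just RAG under a different label: retrieve something, stuff it into the context, generate.

    # Minimal sketch: a "tool call" that fetches data is retrieval-augmented
    # generation by another name. search_notes and llm_call are hypothetical
    # stand-ins, not real library APIs.

    def search_notes(query: str, top_k: int = 3) -> list[str]:
        # Stand-in retrieval step: could be a vector store, SQL, or a web API.
        corpus = {
            "release dates": "v2.1 shipped on 2024-11-03; v2.2 is unscheduled.",
            "pricing": "Pro tier is $20/month, billed annually.",
        }
        q = query.lower()
        return [text for key, text in corpus.items()
                if any(word in q for word in key.split())][:top_k]

    def answer(question: str, llm_call) -> str:
        # llm_call is any text-in/text-out chat client; swap in your own.
        retrieved = search_notes(question)                   # the "tool call"
        context = "\n".join(retrieved) or "No matching documents."
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return llm_call(prompt)                              # generation grounded in retrieval

    if __name__ == "__main__":
        fake_llm = lambda p: f"[model would answer from {len(p)} chars of prompt]"
        print(answer("what is the pricing?", fake_llm))

Whether that retrieval step is a vector index, a SQL query, or an "agentic" web search, the shape is the same.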
From my experience, pretty much all coding tools have their quirks.
I generally agree that Gemini is a very strong model, but I don't think we can at this point conclusively say Google would win because of the long context.
It's too much to extrapolate from a single case. E.g. I see Gemini struggling with editing files a bit more than other models, but I'd say it's just growing pains rather than something fundamental
Inflation-adjusted, this acquisition is worth 4x that for… vibes from a guy who led a famous team a long time ago?
In this case as I understand it, no money is being paid - it's a stock deal. For stock that isn't yet publicly trading, thus 'priced' pretty speculatively. So it is pretty abstract, and unreal.
Exactly - money is hype. Hype is money. Which goes into the foundation of money to begin with: confidence and faith. And the levers of power which move that confidence and faith are decreasing in number and increasing in length/leverage.
Ship something, then you can create a video like that, not before.
apparently it worked on some people: https://daringfireball.net/linked/2025/05/21/sam-and-jony-io
Indeed, "greatest in the world" lack of pretension
Basically what happened was I wrote a post, and some guy responded to me with a firehose of personal insults. I called him a troll in reply, and within 30 seconds of posting said reply, I was permabanned as a first offense, without any possibility of appeal.
Mods be powertrippin over there.
/r/news locked/suppressed [0] as 'Politics'.
I sent a Message to modmail:
Me: Calling this 'politics' makes me ask who's on an alphabet payroll... Just saying.
Reply from modmail: This message makes us think you haven't bothered to read the rules... Just saying.
Then I was muted from /r/news modmail for 28 days, while also being perma-banned from /r/news.
Months later, I left a normal comment on a different thread with a 'mobile' secondary account on /r/news, and found both my desktop and mobile accounts locked for 7 days because the /r/news comment was considered 'ban evasion'. This despite having otherwise commented on /r/news from my mobile account in the meantime with no repercussions.
It was within the subreddit rules and reddit TOC to do all of this, I acknowledge; at the same time, it's almost like Reddit is hitting that vibe of StackOverflow from a few years ago, where mods can just power trip and make the place less useful for everyone...
[0] https://old.reddit.com/r/news/comments/1es7sbp/us_considers_...
Another example:
The Apple subreddit allows developers to self-promote their apps on Sundays. I posted an app of mine. Mods removed it and banned me for 100 days from the subreddit because I had 4 comments within the last month and not 5. This is despite me having lots more comments and posts (multiple posts/comments over 7000 points) over 7 years and in the last 2 months, just not in the last 1 month.
We detached this subthread from https://news.ycombinator.com/item?id=44055918 and marked it off topic.
I am sure this aligns with the non-profit part of OpenAI, whose board allegedly has influence over where the company is heading.
This industry is amazing.
What do you mean?
> Sir Jony Ive will “assume deep design and creative responsibilities” to build new products for OpenAI
So he's pretty much gifting a nice amount of OpenAI stock to his friend, and also handing over all design responsibilities.
Nice.
Ideally then a merger would be most formidable if it was greater than the sum of its parts.
Things are rarely ideal and it can also be quite formidable when it's only equal to the sum of its parts.
Or even "less-than-equal" if it's really affordable, depending on the combined resources, or even one-sided resources.
None of this really means any "merging" of tasks or facilities, or combining business structures, etc.
There's such a huge amount of options and possibilities just from different approaches and levels of equity and cash consideration.
A merger can completely engulf the smaller company, with a logical transition plan to deprecate its separate identity (sometimes for better or worse) and assimilating it into part of an established structured monolith. Or even creating a new monolith altogether in combination. OTOH it can be done so there are virtually no changes to either org up and down almost their entire separate structures, with only a handful of operators at the top adjusting to equally influential changes which are purely in financial elements alone.
This is not completely unlike Ive getting paid in advance to work his magic. Except it looks like he was up to new tricks starting a couple years ago, with sama's encouragement. Or is that more like deferred compensation? If it's going to be insanely great there's probably so much work to do, there wouldn't even be time to spend an extra billion dollars or so, plus that's a lot of money so it would be best to have a better idea if it's really worth spending before you go whole hog too.
A couple years ago it was probably a good bet that something big could come out of a collaboration. And it could really be worth money someday. And that idea is now worth more than it was back then. And as good fortune would have it, sama was on a trajectory to be better able to afford it now for quite a high price compared to what it was worth then, and he couldn't have justified it yet back then anyway. I would think Ive has made progress in the last couple years (without spending exorbitant amounts) that impressed sama more than ever too. Imagine what he could do if he had exorbitant amounts :) I guess we'll find out.
Looks like Ive will have quite a bit of resources to finalize designs and ramp up, plus billions more in equity to fall back on if that's not enough. This may just be what it takes to launch a mass-adoption physical product without undue pressure to prematurely issue something with any type of shoddiness.
Buy "nobody company" from Jony Ive (using stock)
This incessant need to associate themselves with highly known individuals and over-the-top announcements reminds me of Theranos and its infamous con artist, Elizabeth Holmes.
Sam Altman sure knows how to sell.
I wonder how much longer he can keep the con going, even though many of the original founders have left. Maybe 2-3 more years of this dog and pony show before it all comes crashing down in the most spectacular way.
"AI utilitarians are in a suicide pact with the U.S. economy"
(attribution: @bencollins.bsky.social)
After Jobs passed, he never produced anything of any value; he almost destroyed MacBooks.
If a non-Apple company launched an identical watch at the same time it would have gone nowhere.
What I always found crazy was that Ive seemed to just take design ideas from the then-current Windows 8 and Windows Phone more than trying to create his own thing. It showed that he had no original ideas of his own; even just iteratively improving on iOS 6 would've been better.
On Mac hardware, he definitely needed some sort of editor to stay his hand post-Jobs. The era of crappy butterfly MacBook keyboards is still something I remember that was clearly his responsibility, driven by an obsession with thinness, and it seemed for a while that Apple was in denial about the issue.
Still, the Apple Watch is a definite hit for Apple now and it's clearly his baby, so his legacy isn't all bad.
I can only explain it with them recognizing that their strongest asset is brand mindshare. This is actually really bad for their outlook as AI model pioneers.
Eventually it was going to be the case that AI will spread around. It can't be contained, it's too easy to distill and hence copy from output.
But I admit I didn't expect it to happen that soon. Also I respect Jony Ive, but expect his "AI devices" to all fail in the market. He's an idealist. He needs counterbalance that he currently lacks.
It remains to be seen whether Sam Altman / OpenAI in general will be a good editor
Nobody says what kind of hardware. A wearable is the likely bet. Maybe a home robot, but that's a few years out.
..
OpenAI said it already owns a 23% stake in io from a prior collaborative agreement signed late last year. It says it will now pay $5 billion in equity for the acquisition.
..
OpenAI said Ive will not become an OpenAI employee and LoveFrom will remain independent but “will assume deep design and creative responsibilities across OpenAI and io.”
https://apnews.com/article/jony-ive-openai-chatgpt-52c72786e...
We are, after all, literally talking about the guy who called OpenAI "open" AI. Even your kid knows that none of the words used need make any sense.
https://en.wikipedia.org/wiki/Ron_Johnson_(businessman)
I am wondering to what extent 'key man' insurance is needed. That's a big purchase to be riding on one man essentially (yes they are getting engineers and others but Jony seems to be the big ticket item for the purchase).
He never had any success post-Apple like you say, but it wasn't because there wasn't any "insurance man". For me, I see it as a guy who found something worked smashingly, so he just assumed it would work everywhere else.
The stuff he pulled at JCPenney is a master class in what NOT to do in business:
After his success at Apple and Target, Johnson was hired as chief executive officer by JCPenney in November 2011, succeeding Mike Ullman, who had been CEO for the preceding seven years. Ullman then was chairman of the board of directors, but was relieved of his duties in January 2013. Bill Ackman, a JCPenney board member and head of hedge fund Pershing Square supported bringing in Johnson to shake up the store's stodgy image and attract new customers. Johnson was given $52.7 million when he joined JCPenney, and he made a $50 million personal investment in the company. After being hired, Johnson tapped Michael Kramer, an Apple Store veteran, as chief operating officer while firing many existing JCPenney executives.[11][12][13]
When Johnson announced his transformation vision in late January 2012, JCPenney's stock rose 24 percent to $43.[14] Johnson's actual execution, however, was described as "one of the most aggressively unsuccessful tenures in retail history". While his rebranding effort was ambitious, he was said to have "had no idea about allocating and conserving resources and core customers. He made promises neither his stores nor his cash flows would allow him to keep". Similar to what he had done at Apple, Johnson did not consider a staged roll-out, instead he "immediately rejected everything existing customers believed about the chain and stuffed it in their faces" with the first major TV ad campaign under his watch. Johnson defended his strategy, saying that "testing would have been impossible because the company needed quick results and that if he hadn’t taken a strong stance against discounting, he would not have been able to get new, stylish brands on board."[12][14]
Many of the initiatives that were successful at the Apple Stores, for instance the "thought that people would show up in stores because they were fun places to hang out, and that they would buy things listed at full-but-fair price" did not work for the JCPenney brand and ended up alienating its customers who were used to heavy discounting. By eliminating the thrill of pursuing markdowns, the "fair and square every day" pricing strategy disenfranchised JCPenney's traditional customer base.[15] Johnson himself was said "to have a disdain for JCPenney’s traditional customer base." When shoppers were not reacting positively to the disappearance of coupons and sales, Johnson did not blame the new policies. Instead, he offered the assessment that customers needed to be "educated" as to how the new pricing strategy worked. He also likened the coupons beloved by so many core shoppers as drugs that customers needed to be weaned off."[11][12][13] While head of JCPenney, Johnson continued to live in California and commuted to work in Plano, Texas by private jet several days a week.[16]
Throughout 2012, sales continued to sag dramatically. In the fourth quarter of the 2012 fiscal year, same-store sales dropped 32%, which led some to call it "the worst quarter in retail history."[17] On April 8, 2013, he was fired as the CEO of JCPenney and replaced by his predecessor, Mike Ullman.[18][19]
For comparison, during that same time period, the retail successes were the designer collaborations, like Versace x H&M or Target x Rodarte, etc…
All Johnson had to do was bring in some designer collaborations…
Ron Johnson's job where he had the most success was where he was selling fundamentally desirable and great products. I think you would have to be pretty shitty at retail to not do a good job selling iPods and iPhones. His subsequent 2 endeavors, JC Penney and Enjoy, were complete flops. It turns out selling middle-market goods is just really f'ing hard.
Ive, on the other hand, I think is pretty universally recognized as a design genius who was directly responsible for the designs of some of the most important consumer products of the past few decades. Yes, it does seem like Jobs was a critical editor that tempered the worst of Ive's "form over function" tendencies like the butterfly keyboard and removing magsafe, but I think it's fair to say there wouldn't have been an iPhone as it was originally released without Ive.
I feel like Apple still would have had a pretty similar in-store experience even if someone else besides Johnson originally launched it.
From my memory:
David Kelley and IDEO designed the original Macintosh. Hartmut Esslinger and Frog Design did, I think, the SEs and the Macintosh II and IIci. Robert Brunner and Lunar Design did the Quadras. Robert Brunner also hired Ive.
https://www.reddit.com/r/PathOfExile2/comments/1hwxc17/docum...
It isn't hard to imagine someone spending 16 hours of working and then going home and playing a game and putting in money to make themselves more powerful in the game.
His real full time job is watching alt-right videos and memes on YouTube and 4chan.
A great leader is someone who cares about you and helps you surface the best version of yourself. They understand there is a person behind the work and don’t neglect the human side. They mention the team behind projects when talking about successes and don’t blame others for failures.
A cult leader is someone with a hypnotic personality who puts themselves first before anyone else. They couldn’t give less of a shit about you or your sacrifices, and will fire you even if you sleep in the office to get more work done. They are selfish and narcissistic, believe they know everything, and speak about successes as if they personally did all the work.
That definition is too broad to be useful. Is the “objective” to make money at all costs, and the leader is willing to suck employees and themselves dry, even over protests? Or is the objective to build a free hospital in a poor country and everyone is so committed to the cause they are willing to make personal sacrifices?
> I think that more closely aligns with people that have been very successful.
Also aligns with scammers and other grifters who are now in jail. “Successful” is also too broad a term to be useful. One person may think that “being very successful” means being rich, while to another interpersonal relationships and a happy life are what matter.
Not as defined, and I did define them.
What matters is the explained distinction between the two types of leader, arguing exact semantics of individual words in the shorthand term isn’t productive.
Since the original poster I replied to used the word “amazing” (plus the context of the conversation), I used “great” to mean “Very good; excellent; wonderful; fantastic”, not “effective”.
None of the leaders in this conversation are good people. Elon's controversies are way past the point of "some people decry" (why even use a phrasing this convoluted unless you just want to signal that you don't agree with it?) and firmly in "lots of great people wouldn't touch it with a pole". Part of leadership is creating a safe work environment and shielding your companies/brands from unnecessary drama, and Elon has done an absolutely abysmal job at it lately.
But Ive post-jobs just doesn’t have the same track record. He’s had a few years, maybe he’s learned and matured. I hope so.
He's had 6 years to create something—anything!—so far
I also suspect it might go that way: post-Ive designs have been credited as being better, particularly around apple's laptops that were perceived as too heavily favouring form over function.
More realistically Apple's design is good because they take the iterative approach seriously.
I feel similar about Zuckerberg. That guy should just let the government break up his empire, let some other people run the pieces, and retire. Otherwise he just faces humiliation and being in over his head.
But I guess ego keeps these people going.
Some people should probably be stopped before they get too hungry. Same with some bald guy in Eurasia right now.
And all of the billionaires donate money, I don't think that sets him apart from the others all that much.
Now that he's gonna go soon (and having just a dozen more human gatherings and dinners left).. we can see that he fully doesn't give half an ounce of fuck after having received in full what he was after.
After much show, Buffett's "charity" money got simply inherited by his kids just as it was planned all along! (tax-free too, you plebs!)
The billionaires that care about the world are like Larry Page, tho still so evil.
Not really such an "early" retirement.
But he's doing charity now, while still having more than 100 billions for himself, so somehow he's a saint.
They are the literal sensei of Zuck the greenhorn and friends. So exceptional that the world thinks they're the exception.
Which is not necessarily a good thing, being more interested in reshaping the world even after it becomes clear their vision for the new world isn't making it better just leaves someone with lots of power and bad ideas, not a good combo.
I'd much rather have them chilling on the beach after being done with the limits of their competence.
Orphans and imports don't escape. Even those that are not born here (eg Musk), are eventually taught by society to do the same.
Zuckerberg and friends don't just want to chill on the beach: They want to 0wn it. A common property, then redefine it as "obviously not common", at the expense of others, so that no one else can chill on the beach, unless you provide compensation, for obviously infringing upon their rightful rights duh.
I recently read Careless People and I think it's hard to avoid the conclusion that he has been far out of his element for a long time.
If he has an ego, he probably really wants to have something that is a Magnum Opus he can claim. It'll be interesting, because good design is always a dance with other stakeholders. You see this with architects and other "designers" who sometimes go too far into art and forget that buildings do need to be used.
The worst part about the butterfly keyboard was that keys would stop working and fixing it would cost the same as a new laptop. I guess that's what you sacrifice when you design the laptop as thin as Ive envisioned.
I mean, if it were my company I’d also reserve the right to run it straight into the ground.
He’s a billionaire approaching 60. You don’t need to worry about him, his brand, or his reputation. If he cared about it that much, he could’ve stayed at Apple. He chose to move back closer to his family. He didn’t launch a new design firm because he needed it, but because he wanted to.
> He’s a billionaire approaching 60. You don’t need to worry about him, his brand, or his reputation.
"Interesting" take, was that projection?
What has LoveFrom produced in 6 years since Ive quit Apple?
https://www.moncler.com/en-us/sir-jony-ive-collaboration
Compostable red noses:
https://www.designweek.co.uk/issues/30-january-3-february-20...
King Charles' new seal:
https://www.wallpaper.com/design/terra-carta-seal-design-lov...
Ferrari’s first electric car (not yet released):
https://www.independent.co.uk/cars/electric-vehicles/ferrari...
Something vague with Airbnb:
These are two very different things. You can design a wonderful product but if there isn't a need for it in the market or your business people fail to sell it it can be a failure. Judging design based on sales makes no sense.
He's a designer, the success will be whether or not he invents a good design, an innovation to how AI is consumed.
There are plenty of well designed products in his history that weren't big sellers.
did Ive just land $6B+ for incarnating "Dr. Theopolis"?
Yes, https://youtu.be/1ZCxXjDlaf8 yes, I think he did... Good on him! :D But vision? I'm not so sure; he had great company and help along the way, and now that he has been left alone (arguably due to his own actions) he's selling the image of competence in areas where he hasn't demonstrated skills whatsoever. We'll see.
One need is being able to talk to ChatGPT in a whisper or silent voice… so you can do it in public. I don't think that comes from them, but it will be big when it does. Much easier than brain implants! In an ear device, you need enough data from listening to the muscles and the sounds together; then you can just listen to the muscles…
I assume they want to have their own OS that is, essentially, their models in the cloud.
so, here are my specific predictions
1. Subvocalization-sensing earbuds that detect "silent speech" through jaw/ear canal muscle movements (silently talk to AI anytime)
2. An AI OS laptop — the model is the interface
3. A minimal pocket device where most AI OS happens in the cloud
4. an energy efficient chip that runs powerful local AI, to put in any physical object
5. … like a clip. Something that attaches to clothes.
6. a perfect flat glass tablet like in the movies (I hope not)
7. ambient intelligent awareness through household objects with microphones, sensors, speakers, screens —
Carmack has said that for VR/AR to get any traction, the headgear needs to come down to swim goggle size, and to go mainstream, it has to come down to eyeglass size. He's probably right. Ive would be the kind of guy to push in that direction.
I agree with the first 2 sentences, but not the last. Everyone and their grandmother knows size and bulkiness are big blockers to VR/AR adoption. But the reason we don't have an Apple Vision Pro in an eyeglasses form factor isn't an issue of design, it's an issue of physics.
Meta seems to have decent success with their Ray Bans, which can basically do all the "ask AI" use cases, but true VR/AR fundamentally require much bulkier devices, most of all for battery life.
Apple already tried a version of their headgear where an additional belt-mounted box and cable are needed. This was unpopular but necessary. It's up to Ive to make wearing a utility belt cool.
It just takes marketing.[1]
[1] https://previews.123rf.com/images/pressmaster/pressmaster110...
Engineering, not physics?
I doubt anyone would have believed you could have a phone with AI chips inside it that fit in your pocket 30 years ago.
People were already saying "Isn't it amazing that this computer that you can carry around in your hand is more powerful than a giant room of computers that NASA built to send astronauts into space" in the mid 90s, so while people wouldn't necessarily guess the details, I think they fully expected the technological advancements to continue apace.
Not sure it’s worth the hype but there are use cases. I do think it’s an interesting contrast with crypto, where there aren’t really.
But... I can already do this! My phone + CarPlay and/or my headphones actually work great. I don't see how a new device adds value or is even desirable. Unless you're going down the Google Glass/Meta Rayban path of wanting to capture video and other environmental detail you can't get if your phone is in your pocket.
Where is that AI? For example, if I usually eat between 2-4 PM, and I'm in the middle of time square, start suggesting places to eat based on my transaction history, or location history of restaurants I frequent. Something like that would be useful.
If I have to ask, I might as well look at my phone most of the time. It'd likely be faster in most cases.
I don't need something like that, where it must be queried to be useful, like asking it to read back my text messages, but I sure would love it if when my wife messaged me, it was smart enough to play the message unprompted if my headphones are already connected and active
The constant need to query for information, rather than have useful information contextually pushed to me, fundamentally limits its utility once the novelty wears off. Without a sufficient complexity threshold (and this assumes accurate information and trust) its more work to query for things than it is to simply do them.
From a consumer perspective, that's not great.
Obviously, since we're in late-stage capitalism and everything is designed to extract profit out of you, we can't give commercial systems all our private data...
(Of course, it's "non-LLM" AI, which isn't particularly fashionable right now, but if we really want smarter AI agents we need to stop treating all problems as solvable with large language models.)
This comment was written by Claude.
I know people for instance who could not think of anything more depressing than working with computers for a living. But hey, they do them and I do me. That’s the glory of things.
There's no such thing as "subvocalised conversation". It's a pervasive sci-fi term that has no bearing on real life.
It might be an old guy finding a love for vinyl, but having a dedicated camera, a dedicated notebook, a dedicated music player ... makes taking photos, writing down notes or listening to music somehow more ... meaningful to me. Maybe because I do not get distracted by the other 999 functionalities of the phone while I am trying to take photos, listen to music, or write something down.
Why? Because looking at your wrist is much more efficient and fast than getting your phone out of the pocket.
A phone-connected 'AI necklace' to act as a trigger and communication device to an AI interface ( maybe even a small camera ) might be more convenient than fishing your phone out of your pocket as well.
I also am rolling back in certain areas, like writing instead of phone notes and such, but the idea of a wearable or portable chat bot device makes zero sense to me. It’s an added cost and yet another thing to lug around.
As it turns out though nobody seems to know _why_ they hired Ive or what they intend him to make.
More than anything it would broadcast a fear of opening up and showing who you really are to other people. So instead of risking saying something silly, you replace your sense of identity with a generic chatbot. Super cool.
It's like, I can read Wikipedia myself.
Somehow I don't think anyone is going to be impressed by someone regurgitating chatgpt.
I've read that interviews with Stephen Hawking were excruciating because he'd take many minutes to "type" up his response. Of course people stayed engaged because it's Hawking and the answer is from his brain; someone pausing to interact with an LLM would be a bore indeed.
What I'm saying is, have a little bit more imagination and imagine someone seemingly in natural conversation, who is actually an LLM. Could they be engaging? IMO quite a bit more engaging compared to someone reading Wikipedia out loud. Would it be artificial? It still is. But would the conversation partner notice? Maybe not for a while. Would I hate it? Of course...
For example, a man had an AI girlfriend in a movie and she hired someone to keep her in her ear and follow instructions on what to say and do, so that she could be physically intimate with her human by borrowing someone’s body. Stuff like this could be interesting, people acting as surrogates for AI or just using AI to augment their conversation skills.
(since Neil Stephenson's recent essay brought that quote up)
Do the same with the Apple Watch: make a hand gesture covering your ear like you're listening to something, and then you don't even have to take your phone out of your pocket.
I think there are a lot of ways the iPhone, Apple Watch, and AirPods (case as a pendant) could deliver the best UX, but it doesn't matter as long as Siri sux.
I'll set alerts, an alarm, write on my hand, etc. and still forget that e.g. my kids have a half-day tomorrow… even when medicated.
I'd love to have a little voice in my head periodically reminding me of these things.
It sounds cool, and the idea of asking questions about your day seems like it would be cool, but a few weeks later I’m finding myself forgetting to take it with me. The value just isn’t there yet. (And why have a clip on microphone when everyone already has a microphone in our pocket?)
It’s a cool toy though. Also a creepy toy since it can double as an eavesdropping device.
I have a feeling these AI companies will fall back to selling our data for advertising purposes once these companies realize their core products aren’t valuable enough for consumers to want to pay for the cost of it.
As for selling data if consumers don’t want to pay for it: I commit publicly to never doing this. I will shutdown the company and return remaining capital to investors if consumers don’t want to pay for what we are building. So far, so good, and we were actually cash flow positive a few of the last few weeks.
I actually like your Limitless meeting transcription tool and have a subscription for that reason.
I wish your focus was on the software rather than the hardware.
#1 request is simply the ability to export my data so that I can more easily load it into other tools to ask questions against.
You have a treasure trove of all of my meeting transcripts for the past year but I’m really nervous they will be lost forever at some point.
I can't help but wonder though... are we slaves to productivity?
Do we really need this omnipresent help? I'm sure some people do. If you're the CEO of a large company, or a doctor seeing hundreds of patients in a week, etc.
But me? An average middle-aged guy with a 9-5 white-collar job at a healthcare company?
I enjoy doing some things that are 'inefficient'. Is that really a problem?
Now you just whip out your phone, look it up or ask an AI, get the answer and move on.
The second is more informative in a way, but so boring.
The point wasn’t knowing the fact, it was the discussion!
I feel like the most natural thing would be basically push-to-talk-to-AI:
1. Some sort of mic + earpiece that you can wear comfortably(e.g. airpods)
2. A wireless button that you can put on a ring to activate the mic in the most ergonomic way possible
3. Any time you press the button, everything you say gets sent to a running AI chat
Like a pin. With AI. And it would talk to you like a human, so we could call it the Humane AI Pin.
How did nobody think of that?
1. It would be easier to use than a pin because it's connected to your hand. You can press it without anyone knowing that you are pressing it. You can press it with a single hand in a comfortable motion. It doesn't flop around or look weird on your body. It can also just be a ring, because it only needs to connect over short range to your phone.
2. Earbuds give you privacy, volume control, good mic quality, better battery life. Also again they are slightly more subtle than a large pin.
3. You don't make these as standalone hardware. You just have them talk to your phone and let it handle networking, camera, and compute as needed.
Not at all comparable to humane AI pin...
No. They need all the data from your life.
They need to see what you see (camera somewhere), hear what you hear (hello microphones) and probably even more.
My bet is on some sort of tablet. Maybe kind of a book, or kindle, or something like that.
https://samueli.ucla.edu/speaking-without-vocal-cords-thanks...
So like a smartphone in your pocket connected to an earphone.
The whisper thing is nice. Sounds like a feature for next gen earphones.
Amazon Alexa already has this (albeit you need to whisper loud enough for it to hear), and replies in a whisper. It works with any earbuds, but is kinda useless until Alexa+ (LLM integration) is more widely available; and it would be nice to have it reply in a normal voice when using earbuds.
Silent speech recognition is already a thing [0], so pairing it up with an LLM would be straightforward.
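To sketch what "pairing it up with an LLM" might look like, here's a rough outline in which every function is a hypothetical stub (the silent-speech recognizer is the genuinely hard part; the rest is the plumbing every voice assistant already has):

    # Rough sketch of silent-speech recognizer + LLM + private audio out.
    # All functions are hypothetical stand-ins, not real APIs.
    from dataclasses import dataclass

    @dataclass
    class Turn:
        user: str
        assistant: str

    def decode_silent_speech(sensor_frames) -> str:
        # Stand-in for an EMG/ultrasound-to-text silent speech model.
        raise NotImplementedError("hardware- and model-specific")

    def query_llm(text: str, history: list[Turn]) -> str:
        # Stand-in for any chat-model call; history carries the conversation.
        raise NotImplementedError

    def speak_privately(text: str) -> None:
        # Stand-in for text-to-speech routed only to the wearer's earbuds.
        raise NotImplementedError

    def handle_utterance(sensor_frames, history: list[Turn]) -> str:
        text = decode_silent_speech(sensor_frames)   # signals -> text
        reply = query_llm(text, history)             # text -> text
        history.append(Turn(text, reply))
        speak_privately(reply)                       # text -> private audio
        return reply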
There really aren't too many ways to interface with AI
Ah. What synergy, what serendipity. Right there.
Nice photo by the way — https://d7bnjsbkcwmq2m.archive.is/HgpSJ/945183ffb15e984274fa...
Also it's worth noting that if automation in the right areas has its intended effects, costs of living should come down, making the cost of a human less, not more[0], and moving the bar for what is worth automating higher and higher
[0] modulo policy decisions that raise the cost of living back up
> Interesting - how do you reconcile mass unemployment with working economy? (And this is honest question from one that is invested and hopeful that their life savings won’t evaporate overnight)
Define "working economy." Ultimately, "the economy" is the market and the market is about satisfying the desires of the people with the money to participate.
The economy will keep working if there's mass unemployment. You reconcile that by realizing the economy isn't a system for taking care of everyone, or optimizing for mass human flourishing. Unemployment just means the money-havers have no use for you anymore, and you can go FOAD.
1. The people promising AGI are lying
2. The people promising AGI don't know what they're saying
3. The people promising AGI are hedging against AGI not eventuating but some intermediate value emerging. This is the most charitable read, but also totally at odds with getting people to invest, since the investment is predicated on AGI achievement
The correct answer is almost certainly "some people are silly, some people are grifting, some people think AGI is coming, but all the investment certainly benefits from people conflating AGI with a very good product instead of a world-changing achievement".
Eric Schmidt imagining AGI and then speculating that people will like, still be churning out apps, as if humans will need to do that sort of menial labour, just blew my mind and made me question many of the stories I had heard about his intelligence.
I dunno - I am rather thinking that they are hedging.
The man has produced some of the greatest design work in the last few decades. Sure there were missteps (particularly in the quest for thinness in the laptop products) but he led design on some of the most iconic products and some of the most widely used products of all time.
iMac G3/4/5. iBook. iPod/Mini/Nano/Shuffle. iPhone. MacBook Air. iPad. Apple Watch. No other person or company even comes close to having that kind of influence globally.
Made Mark Cuban a billionaire.
Probably there is a big grey market in OpenAI shares, and this is a similar strategy.
https://techcrunch.com/2025/04/07/openai-reportedly-mulls-bu...
> OpenAI is said to have discussed acquiring the AI hardware startup that former Apple design lead Jony Ive is building with OpenAI CEO Sam Altman. According to The Information, OpenAI could pay around $500 million for the fledgling company, called io Products.
How the heck did the price go up 13x?
Too bad we can't short it or otherwise stop it, because investment for the things we could start will dry up once the world figures this out. We're all correlated to companies like FTX whether we like them or not...
Fund managers and staff also have disincentives for early exits, i.e. they have to find and invest in another company and can't just keep the money, which means more work. They'd rather exit by rolling their stock into a hotter, in-demand, hard-to-get-into company if they can.
[1] There are always some employees and founders who would prefer some liquidity, but either they don't hold large enough positions (employees) or investors don't want to give a lot of liquidity (founders).
For public companies it is different: buybacks work because there is always someone ready to sell - usually retail, but also short-term funds who don't mind liquidating. ETFs, other very institutional investors, and those into Buffett-style long-term investing will not sell easily.
As I write this out, it reminds me of another polarizing leader who has been really good at being in the news every day for the last 6 months, and for a 4-year period a decade ago.
If we could, then forget OpenAI - I would short every private company and end up richer than Elon Musk, because 99% of private companies fail.
“Wanting to short a private company” is such a weird comeback. Like yeah private companies most likely fail. Everyone knows.
UX will make or break any major new AI product - especially hardware. The price is steep but I think it's actually a sensible move. There really aren't that many other people with the proven ability to deliver when it comes to UX at scale for novel areas.
Ive is a very talented artist but AI is not being held back by people unwilling to courageously make things thinner and thinner.
I would imagine Ive looked at an Apple HomePod and thought “we could make this beautifully flat and hang it on the wall of every room in the house”. This might be a good idea but it in no way solves the major problems with AI/LLMs.
Jony Ive is great at UX when someone like Steve Jobs is there to veto stupid ideas.
For whoever is holding the bag at the end of it all.
Because they’re not paying with money. It’s $6.5B of pure equity in a private company that they’ve decided to value at $300B based off of… vibes or hopes or whatever?
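For scale, the figures quoted in this thread pencil out roughly like this:

    # Back-of-the-envelope on the numbers mentioned upthread.
    deal_value = 6.5e9         # reported all-equity price for io
    openai_valuation = 300e9   # valuation OpenAI puts on its own shares
    earlier_report = 0.5e9     # the ~$500M figure from the TechCrunch piece

    print(deal_value / openai_valuation)  # ~0.022 -> io holders get ~2.2% of OpenAI
    print(deal_value / earlier_report)    # 13.0   -> the "13x" jump asked about above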
I'm not even a hardcore AI skeptic: I think AI can be useful and valuable in the near term (even outside coding!) and potentially transformative in the long term, but I also think current capabilities are over-hyped and wildly overvalued. I think AI is going through the typical hype cycle (https://en.wikipedia.org/wiki/Gartner_hype_cycle) and we're currently late in the "Inflated Expectations" phase, soon to be followed by the inevitable "Trough of Disillusionment".