Waymo is clearly growing fast, but it's not bigger than Uber—yet. Super impressive that it's already surpassed Lyft. From personal experience, it also seems to have driven Uber prices down materially.
Page 6: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
Page 302: https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
Is this really relevant? Google was founded when there weren't a gazillion phones doing searches a million times a day.
ChatGPT launched recently, when every stratum of society, and every country in the world, has double-digit internet penetration.
I saw another one recently that said something like "ChatGPT has 350 million unique visitors per month; if it were a country, it would be the 3rd largest in the world."
Or something along those lines.
I hate this so much I actually ran the numbers and saw that per capita box office revenues have remained generally stable since the 1980s.
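That check is simple arithmetic. A minimal sketch of the per-capita, inflation-adjusted comparison; all figures below are placeholders for illustration, not real box-office data:

```python
# Restate a past year's box-office gross in today's dollars, per person.
# Every input here is an illustrative placeholder.
def real_per_capita(revenue_nominal, population, cpi_then, cpi_now):
    """Nominal revenue adjusted by the CPI ratio, divided by population."""
    real_revenue = revenue_nominal * (cpi_now / cpi_then)
    return real_revenue / population

# e.g. a hypothetical $3.0B gross, CPI 100 then vs. 300 now, population 230M:
print(round(real_per_capita(3.0e9, 230e6, 100, 300), 2))  # 39.13
```

Run that for each year and a flat line falls out, which is what "generally stable per capita" means once you strip the nominal-dollar growth.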
Instead, we have this monstrosity of metrics that make no sense.
Data scientists bring out relevant, thought-provoking metrics. They work for the people who work for the people who are the target audience here.
With all due respect, I’ve seen more corporate drivel and slide-show sugar out of data scientists than VCs. (Largely as a product of attention span.)
Meeker is illustrating a material change in the environment.
However, this is relevant because this is an investor report meant to help people forecast, and this stat calibrates readers' expectations of just how fast a product can scale in this day and age, using a comparison of products in the same category that offered a similar step change in value when launched.
Also, quantity has a quality of its own.
The point is to compare current era of scaling to the previous era and see how much faster it is.
It’s not comparing Google to OpenAI. It’s comparing the environment that produced Google to the environment that produced OpenAI.
It’s kind of obvious that new eras will produce faster scaling. But what if you ran the numbers and it wasn’t true?
There are plenty of times when this happens and the obvious turns out to be something different. That isn’t the case here, but that’s the point of research: to back up common sense with evidence.
Also, it is very different to know that it is faster vs. knowing it is 5.5x faster. The 5.5x might not be completely accurate, but it’s more in depth than just your intuition.
There is wisdom in simple, profound statements that open up new lines of thought. But there is also wisdom in doing research to quantify what you already know and make it more concrete.
One example of research being wisdom is demographics. It’s one thing to know that there are more whites than blacks in the US; it’s another to know that there are 200m whites and 40m blacks. The numbers add precision and also validate or clarify your thinking. For instance, maybe you thought blacks should be the second largest demographic since they have been here longest. Not so; Hispanics are at around 60m. Or maybe you knew that already. But if you want to argue with others about demographic growth and what is actually happening in immigration, knowing the numbers is wisdom, and going off of intuition leads to “they took my job” hot takes.
Maybe this will turn out to be a valuable user segment, but I'm not sure.
Practices learned during school do enter the workforce. Messaging during class turned into messaging during meetings.
So, if 100% try using it to do their work toil for them...
Yes. The question isn’t which team is better. The question is how fast they can grow. Similar companies in Pakistan and America will grow disparately. Same for companies on the Internet in 1998 versus, practically, 2022. That isn't fair. But nobody cares about fair, we’re measuring what’s true. It’s fair, from those data, to conclude that the latter should beat the former's record, whether separated by space or time.
(You correctly conclude, from that slide, that Google grew in a less favourable environment than OpenAI et al. You just need to take it one step further into the potential rate for growth and disruption today versus in the past. Put another way, Google could be disrupted quicker than it could disrupt.)
I am sure a majority of those are from India.
A small industry has formed here imparting "AI" training, very cheaply, through online courses lasting from a few months to 2 years.
https://www.shiksha.com/online-courses/artificial-intelligen...
The Indian School of Business, one of the highest-ranked business schools in India, is offering leadership courses with a touch of AI for as little as 10 lakh rupees (about 11,000 USD). For a majority of IT folk in India, that is a very reasonable amount for a course from a very reputed institute.
Many IT folk here are scared to the core about their jobs and there has been a mass movement towards AI certificates. While the courses teach the basics, none of them are at a university level. Most of the students are only users of tools though.
Would you call them AI developers? Meh. They get by. Most of the work in India is back-office work anyway, and these AI engineers end up doing data related tasks mostly.
Very few are actually building worthwhile AI stuff.
This is also interesting to see. ChatGPT currently has about 20 million paying subscribers and 400 million weekly active users.
Did 100 million people use ChatGPT a few months after launch? Sure, but that was because of massive hype and word-of-mouth viral moments.
Is it comparable to Internet usage, especially on the "years since launch" X axis? Definitely not.
> 59 AI User Adoption (ChatGPT as Proxy) = Materially Faster + Cheaper vs. Other Foundational Technology Products
Another pointless comparison.
Corporates will always talk about the current in-vogue thing.
They talked about Blockchain.
They talked about crypto.
They talked about anything that, left unmentioned, would make investors feel the board was behind the times.
> 72 Enterprise AI Adoption = Rising Priority…Bank of America – Erica Virtual Assistant (6/18)
Ok fine. People are using it. Two important questions: Did the users have other choices, over which they chose this? Did the users feel happier than with other methods?
Without this data, this is just feeding the hype train.
* The entertainment industry will be transformed drastically. Music and movies will be transformed by AI to such an extent that the next generation will find it hard to believe how the industry operated.
* The moonshot will be biological research and research in general. When a breakthrough happens, it will transform our health for the better in astonishing ways.
* In terms of direct adoption, the urban-rural divide is vast.
* Less democratic countries will have an advantage over the democratic countries in terms of fast execution, unless the latter manage to integrate private-public operations effectively.
* I've liked Mary Meeker's reports since the heydays of TechCrunch. This report has a lot of details that I did not know. Nevertheless, I didn't see a single point that stood out.
This argument is a classic. But on what time frame is it supposed to be operative? Is it backed by any empirical data to test the claim?
Less involvement of edge nodes in the decision process also encourages "don't give a shit about suggesting improvements" and "utter whatever lie the system expects, to avoid accusations of dissidence".
There are actual pragmatic benefits to democratic systems; it's not only idealistic motivation that argues in their favor.
It's overwhelmingly the case that affluence and national wealth go hand in hand with greater democracy; the correlation is tight (and of course there are exceptions). All you need to do is look at the top ~50 nations by GDP per capita or median wealth per adult, then look at the bottom 50.
Less democratic nations will be left even further behind, as the richer democratic nations race ahead as they have been doing for most of the post WW2 era. The richer democratic nations will have the resources to make the required enormous investments. The more malevolent less democratic nations will of course make use of good-enough AI to do malicious things, not much about that will change. Their power position won't fundamentally change however.
As someone who was very worried about how this would impact artistic output I've started to change my mind. It seems younger people are extremely sensitive to AI content and happy to call out anything that could conceivably be generated by AI as slop. People want real art created by real people with real skill. They want to be able to connect with their art and connect with them in person. The muzak industry is in trouble but I no longer think music in general will be replaced by AI. We'll see improvements to software instruments, plugins etc but AI improving the tools is a different prospect than fully AI generated music.
In music I think it's similar to the claim that people using samples aren't real musicians. It's a take many will have, but many others will have no problem enjoying stuff created with the help of AI.
Having never ridden in a Waymo, I’m curious if anyone sees a reason those trends won’t continue in SF or replicate elsewhere.
This does not pass the smell test.
https://www.yum.com/wps/portal/yumbrands/Yumbrands/news/pres...
Above is the press release. AI comes up 3 times, including in the title. The release talks about an AI-driven platform to streamline operations for franchisees. Ok fine. But how exactly does AI enhance operations at the franchisee level? Especially if all they are providing as part of the SaaS is "Backed by artificial intelligence, Byte by Yum! offers franchisees leading technology capabilities with advantaged economics made possible by the scale of Yum!"
I mean, sure, if at their corporate office, they are using data analysis to predict demand and allocate resources or raw materials to improve profitability, ok. But that can be done quite effectively with statistical analysis.
Where exactly does AI come into the picture is unclear.
Yet the slides show this as some sort of monumental achievement, especially highlighting "25,000 restaurants are using at least 1 product". Sure, they are going to use it, if that is what the franchise owner provides, probably for additional cost.
Yum! Brands has 61,000 restaurants as per their website. Looks like they rolled out a new solution and about a third have adopted the new platform so far; others may be in line to do the same. Is this related to AI, or to regular software changes / updates / revamps?
This strikes so close to the infamous Dropbox comment [1].
The common interactions restaurants have with technology is in bookings and online menu presentation. The latter historically required hiring a lightweight web dev. That’s now irrelevant.
It's replacing drive-through attendants [1].
[1] https://investors.yum.com/news-events/financial-releases/new...
Many of the slides are packed with details.
Who, exactly, has time to read all that stuff?
Only AI models, I'd guess.
(Yesterday evening I read about 400 pages of text as entertainment; it is not much, though admittedly this specific one failed as entertainment and wasn't content worth reading.)
I am genuinely curious if lay people will pay for AI. I know people who spend literally hours daily on YouTube and complain about the ads and don’t want to pay the quarter a day to get rid of em.
Will these people pay $50 a month for gpt? We will see.
But asking if average people will pay for AI is the wrong question. It's like asking if average people will pay for Salesforce or Oracle. Even if consumers don't pay for AI en masse, their employers will as the value proposition is a lot clearer.
We really don't know, in part because they're cooking all kinds of costs into R&D. My hunch is we'll recapitulate the dark fibre of the 1990s, with OpenAI et al burning capital to develop models others profit off. But that's predicated on the assumption that large models can be effectively distilled into cheap-to-run small ones. If that doesn't prove right, or the frontier keeps advancing for decades, the operating model of a high-R&D industry could resemble Intel (and semiconductors broadly) instead.
But probably everyone will adopt it - so it may not give a competitive advantage to any given company.
Also, access to market leading AI is not going to cost $20/mo when everything is said and done.
[1] https://blogs.worldbank.org/en/developmenttalk/half-global-p...
Just like paying for a smartphone or data or broadband.
I think it will be similar in AI. The AI will be free or cheap; the money will be made in its application. I won’t be surprised if my kid looks back on Anthropic and OpenAI the way we look back on Sun and Netscape.
"Charts paint thousands of words..."
"Leading USA-Based LLM User" -- what does that even mean? With a value of "800MM" where MM is what units?
"AI Usage + Cost + Loss Growth = Unprecedented" -- how can you add three things that don't appear to be commensurable? Also, what is "Loss Growth" and how does it add to "Usage"?
There are plenty more examples in the charts later. The Overview section, while not so obviously AI-written, has an over-the-top enthusiasm and loose structure that makes me squeamish. I don't trust it, but that's just me I guess.
Just millions. (“M” is the Roman numeral for a thousand.)
For the other stuff like automating white collar jobs, good enough might not suffice due to the intricate dependencies and implicit contracts formed naturally out of human groups.
Creative jobs will be the most impacted by "good enough", depending on the number of features. For 2D art it is almost certainly over (unless you add a text feature to it, like manga). You can see that with increasing features, starting with general photography, stock photography and now product photography were made redundant overnight; e.g., the latest Flux image editor negates the need to hire a photo editor, photographer, camera equipment, lighting, or product artist. Veo 3 isn't quite there, but it handles speech in video generation in a way other models did not, and it is getting closer to replacing videographers. I think 3D modeling is the next frontier, following the trend, but it is still quite difficult as it involves mesh generation/texture/rigging/animation/physics that must also come with shaders and interaction with other 3D models.
Software engineering falls somewhat in the creative field but also shares the complexity of white-collar jobs, for the same reason that will prevent it from being completely automatable with "good enough".
The hallucination issue is less of an issue and an old trope. The truly challenging enemy of "good enough" AI is "not enough context" and "poor context compression and recall". The problems I listed in white-collar and software engineering jobs are context problems. The compression of contexts cannot be stable while the former isn't solved; the fast, efficient recall of contexts then cannot take place due to poor compression, and so on.
This is just my observation of how things are progressing. I do feel that we will see something different from LLMs altogether that could solve some of the context issues, but a major misalignment of incentives is what I think would prevent an AGI know-all-see-all type of deal. E.g., you might not have any incentive to share all the essential context with the AI, because you might become irrelevant and want it to stay in the dark; or you might have a union or some social organization legislate a monopoly for human knowledge/skill workers in a field.
But perhaps THE most difficult problem, even after we solve the context problem, is the inability of the God AGI to be awake or conscious, which is absolutely critical in many real-world applications.
I like to focus more on the very near impact of what AI is currently doing in the labs and its impact on humans than worrying about who and when all of the other problems are going to be addressed.
Whether we get a UBI-first socialist world order or a continuation of technological feudalism with the poors still using GPTs while the rich sell the energy and chips (software would almost be worthless on its own by then) is the least of my concern.
I'm an optimist and I'm very excited for the very-near and immediate impact of our currently available AI tools doing the "good enough" in very positive ways.
In other cases, like software development, there is a split between tasks of a narrow scope and those of a wide scope. Creating one-shot pieces of software is kind of a solved issue now. Maintaining some relatively self-contained piece of software might soon turn into a task for single maintainers that review AI PRs. The more the bottleneck is context tracking, as opposed to producing code, the less useful the AI. I am uncertain, however, how the millions of devs in the world are distributed on this continuum.
I am also skeptical about legal protections or unionization, as many of these jobs are quite suited to international competition.
Now with headless agents (like CheepCode[0], the one I built) that connect directly to the same task management apps that we do as human programmers, you can get “good enough” PRs out of a single Linear ticket with no need to touch an IDE. For copy changes and other easy-to-verify tweaks this saves developers a lot of overhead checking out branches, making PRs, etc so they can stay focused on the more interesting/valuable work. At $1/task a “good enough” result is well worth it compared to the cost of human time.
Search. AI works about as well as a clueless first-year analyst. You need to double check its work. But there is still value added in its compiling together sources and providing a reasonably-accurate summary of, at the very least, some of their arguments.
Also basic web development. Most restaurateurs I know no longer deem it necessary to hire a web developer. (Happy side effect: more PDF menus instead of over-engineered nonsense.)
Some limitations:
- Unhelpful modernity-scale trend/hype-lines. Everyone knows prospects are big and real.
- No significant coverage of robotics, factory automation? (TAM for physical products is 15X search+streaming+saas)
- No insight? No new categories, surprising predictions, critical technologies identified?
Surprises:
- AI productivity improvements are marginal, esp. relative to the concern over jobs
- US ratio of top public companies by market cap increased from ~50% in 1995 to ~85% in 2025. Seems big; or is it an artifact of demographics of retirement investments? Or is it less significant due to growing private capital markets?
What I would like addressed: The AI means of production seem very capital-intensive, even as marginal cost of consumption is Saas-scalable (i.e., big producers, small consumers). I have some concern that AI development directions are decided in relatively few companies (which are biased to Saas over manufacturing, where consumers are closer to producers in size). This increases the likelihood of a generational whiff (a mistake I suspect China won't make).
As an aside, I wish Elon Musk would pivot xAI out of Saas AI (and science AI), focusing exclusively on manufacturing robotics -- dogfooding at Tesla, SpaceX and even Boring -- with the simpler autonomy of controlled environments but the hard problem of not custom building everything every time. They're well positioned, he could learn some discipline from working with downstream and upstream partners on par (instead of slavish employees, fan investors, and dull consumers or slow governments as customers). He'd redeem himself as a builder of stuff that builds, so we can make infrastructure for generations to come.
Looks like yet another "numbers going up" report; I'm not going to spend time reading 300+ slides like that.
Maybe I am unfair, but I have no time to read every single report and I am willing to read only exceptional ones with no (or very limited) marketing slop.
One stylistic quirk is its liberal use of `=` and `+` for things that aren't equivalence or summation, which keeps throwing me off.
Does she ever do a presentation of these, talking through / commenting on the slides? If there's a recording of that, I'd 100% clear half a day's worth of meetings to watch it.
Her use is not the math sense; it's the conceptual sense. In AI space, a common example is:
king - man + woman = queen
Or think of it as alchemy: philosopher's stone + mercury = gold
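The word-vector arithmetic above can be sketched with toy 2-D embeddings. The vectors below are made up for illustration (one axis loosely encodes "royalty", the other "gender"); real models like word2vec learn hundreds of dimensions, but the nearest-neighbor idea is the same:

```python
# Toy, hand-made "embeddings" -- not learned from data.
vecs = {
    "king":  (0.9, 0.8),
    "man":   (0.1, 0.8),
    "woman": (0.1, 0.1),
    "queen": (0.9, 0.1),
}

def analogy(a, b, c):
    """Return the word whose vector is nearest to vec(a) - vec(b) + vec(c)."""
    target = tuple(x - y + z for x, y, z in zip(vecs[a], vecs[b], vecs[c]))
    def sq_dist(word):
        return sum((t - v) ** 2 for t, v in zip(target, vecs[word]))
    return min(vecs, key=sq_dist)

print(analogy("king", "man", "woman"))  # queen
```

The `=` in the slide works the same way: not literal equality, but "the combination lands you nearest to this concept".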