There is big upside potential for high-growth companies that take advantage of technology trends.
Today, Google’s revenue is $263.66 billion. This is nearly 300x the revenue Google generated in 2003 ($961.9 million). The company went public on August 19, 2004, at $85 per share, valuing the company at $23 billion. After the IPO, Google reported $1.47 billion in revenue for fiscal year 2003, with a profit of $105.6 million.
They weren't a bunch of gremlins in a cave conspiring to commit "anti-trust violations" in 2005. They were smart as hell and invested in the right areas.
Microsoft would get hit with the same antitrust action Google is being hit with if Bing and Windows Phone were successful - they're getting away with it because they're terrible.
“I don’t like the law and its application” isn’t an argument.
If you allow one company to achieve market dominance, it suffocates the ecosystem and stifles evolutionary growth pressures. It's concentrated malinvestment into a local maximum that salts the playing field so thoroughly that escape velocity is unattainable by anyone else.
There are models of this. And historical anecdotes and evidence.
You still killed the man regardless of intention.
Winning for who? Not for society as a whole, that's certain.
To put it in money terms so even you can understand it, how much time has been wasted globally because Google is peddling ads and spam sites instead of pointing people to useful results?
Is that free? We should subtract it from the GDP calculations if you ask me...
Inefficient markets are bad for humans and are bad markets. They allocate resources inefficiently. The Google graveyard is (arguably) a case in point.
The reason Khan reached the FTC was that her law school thesis made the case that Amazon’s actions reduced consumer welfare. A fact that was covered here, on HN. This isn’t something a community notices unless it matters to them.
To put some context on this, 78% of Google’s revenue is advertising. Overall US ad spending has been increasing at about 1.6% per year since 2001, with no obvious indication of an acceleration (beyond some bumps around 2007/8.) So is there actually a success story beyond market capture here? And if all we’re doing is concentrating existing business into new channels, is this something we should be excited about?
Wealth creation?
Google provided a toolkit to test ads and figure out which are most effective. The other side of that argument is that, in industry, a massive percentage of qualified people still spray and pray. The advertising industry as a whole is far from data-driven.
At one point, there was an argument this was good for the planet. My newspapers are much thinner than they were 30 years ago when I could collect a metre of newsprint a month if I subscribed to the Globe and Mail plus a local. But I don’t think anyone can claim now that data centres are environmental miracles. This has also decimated local journalism to such a point that people are less aware of environmental catastrophes in their own relative backyards.
It’s possible the net effect was positive and advertising is more efficient. It’s more accurate to say advertisers have a toolkit to analyze effectiveness but many don’t or aren’t capable.
Edit - I’m going to give a very specific example of a radio jingle. If anyone is around forty or older and from a major city in Saskatchewan, they will be able to finish this.
“I said no no no no don’t pay anymore, no GST and no money down.”
The way I remember it, rates were already raised substantially during the first Trump admin (not due to inflation), lowered again during COVID, and then raised due to inflation, with high government spending and stimulus a likely causative factor in that inflation.
You can clearly see non-zero interest rates from 2016 to 2020...
Egg inflation is Avian flu related, housing is because of a housing shortage caused by building pipeline impairment since the 2008 GFC (which will likely never recover, due to structural demographics and no appetite to allow immigration at the level required for the construction trades).
(~4M Boomers retire a year, ~11k/day, ~2M people 55+ die every year, about half of which are in the labor force; that means ~13k-14k workers leave the labor force every day in the US)
https://www.axios.com/2024/06/27/labor-shortage-workforce-ec...
http://charleshughsmith.blogspot.com/2022/08/are-older-worke...
https://www.stlouisfed.org/timely-topics/retirements-increas...
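A quick sanity check of the parenthetical arithmetic above, plugging in the cited annual figures:

```python
# Back-of-envelope check of the "~13k-14k workers leave per day" claim,
# using the annual estimates cited above.
boomer_retirements_per_year = 4_000_000  # ~4M Boomers retiring annually
deaths_55_plus_per_year = 2_000_000      # ~2M deaths among people 55+
working_share_of_deaths = 0.5            # about half still in the labor force

retiring_per_day = boomer_retirements_per_year / 365
dying_workers_per_day = deaths_55_plus_per_year * working_share_of_deaths / 365

print(f"retiring per day:      {retiring_per_day:,.0f}")                           # ~11,000
print(f"dying workers per day: {dying_workers_per_day:,.0f}")                      # ~2,700
print(f"total exits per day:   {retiring_per_day + dying_workers_per_day:,.0f}")   # ~13,700
```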
Plus a lot of the inflation was caused by increasing energy prices in 2022, totally unrelated to government spending.
The interesting thing, to me, is how speculative OpenAI's bet is.
IIRC it was 2019 when I tinkered with the versions of GPT-2 that had web interfaces, and they were interesting toys. Then I started using ChatGPT at its launch, around Dec 2022, and that was a profound paradigm shift. It showed real emergent behavior and was capable of very interesting things.
2019 - 2022 was three years. No hype, no trillions of dollars invested, but tremendous progress.
Now, there has been progress in the past ~three years in synthetic benchmarks, but the feeling with ChatGPT 4.5 today is still the same as it was with GPT-3/GPT-4 in 2022. 4.5/o3 doesn't seem hugely more intelligent than 3.0 -- it hallucinates less, and it's capable of running web searches and doing directed research -- but it's no paradigm shift. If things keep progressing the way they're going, we'll get better interfaces and more tools, but it's far from clear that superintelligence (more-than-human insight, skill, and inventiveness) is even possible with LLMs.
I think you're misremembering how 3.0 worked. Granted, the slope from 2.0 to 3.0 was very steep, but a ton of progress has happened in the past few years.
Also, insofar as they're researching a topic, they're not sufficiently critical. They are highly influenced by what they read, and they tend to take the results they're given more or less at face value.
For instance, I asked about a medical product for my girlfriend. I explicitly asked for a critical look at its efficacy, any studies in the area, etc., and it seemed like 90% of the sources it considered were from the product's own website. It basically gave me an uncritical report of what they said about their own product.
I.e., better datasets.
People are expecting it to get exponentially better, but these kinds of innovations follow more of an inverse power law.
We look at CPUs or the transmission of digital data, which seem to have improved exponentially, but these are rather the exceptions, and they're composed of multiple technologies at different stages. Like how we went from internet through phone lines, to dedicated copper lines for data, to optic fiber straight into people's homes.
E.g., look at how the efficiency of solar cells has progressed over the last 50 years.
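To make the shape difference concrete, here's a minimal sketch with made-up constants; nothing is fitted to real data, it just contrasts the two curves:

```python
# Illustrative only: compounding exponential improvement vs. an
# inverse-power-law curve whose gains flatten over time (cf. solar cells).
for t in (1, 10, 25, 50):
    exponential = 1.4 ** t            # keeps compounding indefinitely
    power_law = 25 * (1 - t ** -0.5)  # big early gains, then a plateau
    print(f"year {t:2d}: exponential={exponential:12.1f}  power_law={power_law:5.1f}")
```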
The internet was adding a trillion dollars to the global economy by 2008, the end of that rapid expansion, whereas AI is still sucking hundreds of billions a year into a black hole with no killer use cases that could possibly pay off its investment, much less begin adding trillions to the global economy.
And a decade before the web and internet explosion, PCs were similar, with a massive build out and immediate massive returns.
This excuse-making for AI is getting old. It needs to put up or shut up, because so far it's a joke compared to real advances like the PC and the Internet, all while being hyped by VC-collecting companies as the arrival of a literal God.
This is not entirely true, or at least the trend is not necessarily less hallucination. See section 3.3 in the OpenAI o3 and o4-mini System Card[1], which shows that o3 and o4-mini both hallucinate more than o1. See also [2] for more data on hallucinations.
Just recently I took a screenshot of a Jira burndown chart to write a description of the sprint progress for our stakeholders. The model did it in one shot from the screenshot and got it right.
Generative AI was a sort of paradigm shift, and can be developed into interesting tools that boost human productivity. But those things take time, sometimes decades to reach maturity.
That is not good for the get rich quick machine of Venture Capital and Hustle Culture, where quick exits require a bunch of bag holders.
You gotta have suckers, and for that Gen AI cannot be an "interesting technology" with good potential. It needs to be "the future", that will disrupt everything and everyone.
We've had big progress in AI in the last 2 years, but you have to take into account more than text token generation. We have image generation that is not only super realistic, but lets you simply describe in text what you want to modify, without learning complicated tools like ComfyUI.
We have text-to-speech and audio-to-audio that is not only very realistic and fluent in many languages, but can also express emotions in speech.
We have video generation that gets noticeably more realistic every month while taking less computation.
There is big progress in 3D model generation. Speech-to-text is still improving and is fast enough to run on phones, reducing latency. The next frontier is how AI is applied to robotics. Not to mention areas that aren't sexy to end users, like applications in healthcare.
Heck, I wrote myself my own personal radio moderator in a few hundred lines of shell, later rewritten in Python, as a simple MPD client. It watches for a queued track that has album art, passes the track metadata + picture to the LLM, sends the result through a pretty natural-sounding TTS, and queues the resulting sound file before the next track. Suddenly, I had a radio moderator that would narrate album art for me. It gave me a glimpse into a world that wouldn't have been possible before. And while the LLM is basically writing the script, the real magic comes from multimodality and great-sounding TTS.
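For anyone curious, here's a heavily condensed sketch of the idea, assuming python-mpd2 for the MPD client and the OpenAI SDK for the vision model and TTS; the prompt, voice, and output path are all made up:

```python
# Sketch of the "radio moderator" idea above: grab the current track from
# MPD, hand metadata + embedded album art to a multimodal LLM, then TTS it.
import base64
from mpd import MPDClient      # pip install python-mpd2
from openai import OpenAI      # pip install openai

mpd = MPDClient()
mpd.connect("localhost", 6600)
oai = OpenAI()

song = mpd.currentsong()             # metadata dict: artist, title, album, file, ...
art = mpd.readpicture(song["file"])  # embedded album art, if any (MPD >= 0.22)

content = [{"type": "text", "text":
            f"You are a radio host. Briefly introduce '{song.get('title')}' "
            f"by {song.get('artist')} and describe the cover art."}]
if "binary" in art:
    b64 = base64.b64encode(art["binary"]).decode()
    content.append({"type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})

script = oai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": content}],
).choices[0].message.content

# Speak the script; the real version would queue this file before the track.
oai.audio.speech.create(model="tts-1", voice="nova",
                        input=script).write_to_file("intro.mp3")
```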
Much potential for really cool looking/sounding PoCs. However, what worries me is that there is not much progress on (to me) obvious shortcomings. For instance, OpenAI TTS really can't speak numbers correctly. Digits maybe, but once you hand it something like "2025" the chance is high it will have pronunciation problems. In the first months, this felt bad but temporarily acceptable. A year later, it feels hilariously sad that nothing has been done to address such a simple yet important issue. You know something bad is going on when you start to consider expanding numbers to written-out form before passing the message to the TTS.

My girlfriend keeps joking that since LLMs, we now have computers that totally cannot compute correctly. And she has a point. Sure, hand the LLM a tool to do calculations, and the situation improves somewhat. But the problem seems to be fundamental, as the TTS issues show.
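For reference, the written-out-numbers workaround is only a few lines, e.g. with the num2words package (the exact wording of the output depends on the library):

```python
# Expand digits to words before sending text to a TTS engine, so "2025"
# doesn't trip up the pronunciation (pip install num2words).
import re
from num2words import num2words

def expand_numbers(text: str) -> str:
    return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

print(expand_numbers("The year 2025 has 365 days."))
# e.g. "The year two thousand and twenty-five has three hundred and sixty-five days."
```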
Vision models have so many applications for me... However, some of them turn out to be actually unusable in practice. That becomes clear when you use a vision model to read the values off a blood pressure sensor. Take three photos, and you get three slightly different values. Not obviously made-up stuff, but numbers that could be: 145/90, 147/93, 142/97. The range might be clear, but actually, you can never be sure. Great for scene and art descriptions, since hallucinations almost fall through the cracks. But I would never use it to read any kind of data, neither OCR'd text nor, gasp, numbers! You can never know if you have been lied to.

But still, some of the byproducts of LLMs feel like a real revolution. The moment you realize why Whisper is named like that. When you test it on your laptop and realize it just transcribed the YouTube video you were rather silently running in the background. Some of this stuff feels like a big jump.
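One crude mitigation for the blood-pressure example above is to sample the model several times and refuse any value the samples disagree on; read_bp() here is a hypothetical wrapper around the vision-model call:

```python
# Accept a vision-model reading only if repeated samples agree exactly.
# read_bp(photo) is a hypothetical function returning (systolic, diastolic).
from collections import Counter

def consistent_reading(read_bp, photo, n=3):
    votes = Counter(read_bp(photo) for _ in range(n))
    value, count = votes.most_common(1)[0]
    if count < n:  # any disagreement (e.g. 145/90 vs 147/93) -> reject
        raise ValueError(f"inconsistent readings: {dict(votes)}")
    return value
```

With the three readings from the anecdote, this would refuse to answer rather than quietly pick one.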
Human-level object recognition can easily be trained up for custom use cases. Image segmentation is amazing. I can take a photo of a document and it's accurately OCR'd. 10-15 years ago that would be unfathomable.
I think current LLMs would give AI a much better reputation if they focused on non-generative applications: sentiment analysis, translation, named entity extraction, etc. These were all problems that data folks have been wrestling with, and they could very well be seen as "solved", a big win for AI that businesses would be able to confidently integrate into their workflows. Instead, the industry went the generative route, and we have to deal with hallucinations and slop.
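For what it's worth, those non-generative tasks really are close to one-liners now, e.g. with Hugging Face transformers pipelines (default models download on first use):

```python
from transformers import pipeline  # pip install transformers

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

print(sentiment("The support team resolved my issue quickly."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
print(ner("Sundar Pichai announced Gemini at Google I/O in Mountain View."))
# e.g. [{'entity_group': 'PER', 'word': 'Sundar Pichai', ...}, ...]
```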
However, while OCR done by vision models feels neat, I personally don't feel like it changed anything for me. I have been using KNFB Reader and later Seeing AI, and both have sufficiently solved the "OCR a document you just photographed" use case for me. They even aid the picture-taking process by letting me know when a particular edge of the document is not visible.
Besides, I still don't fully understand the actual potential for hallucinations when doing OCR through vision models. I have a feeling there are a number of corner cases that will lead to hallucinations. The tendency to fill in things that might fit but aren't there is rather concerning, especially for spelling errors and numerical data.
They most certainly cannot
I wonder what “hugely more intelligent” would look like to you?
Also, in the (rather short) history of computing, what “hugely more X” has happened over the course of a couple of years?
For context:
- GPT 2 was released in Feb 2019
- GPT 3 came out roughly 18 months later in 2020. It was a huge jump, but still not "usable" for many things.
- InstructGPT came out roughly 18 months later in early 2022, and was a huge advancement. This is RLHF's big moment.
- About 10 months later, ChatGPT is released at the end of 2022 as a "sibling" to InstructGPT. It's an "open research preview" at this point. This is around the time OpenAI starts referring to certain models as being in the "3.5 family"
- GPT-4 comes out in March 2023, so barely 2 years ago now. Huge jumps in performance, context window size, and it supports images. This is around the time ChatGPT hits 100 million users and is really becoming a reliable, widely adopted tool. This is also the same time that tools like Cursor are hitting the market, though they haven't exploded yet. Models are just now getting "good enough" for these kinds of applications
- GPT-4-Turbo comes out in November 2023, with way larger context windows and lower pricing.
- About 12 months ago, GPT-4o released, showing slightly increased performance on existing benchmarks over 4, but now with state-of-the-art audio capability support for something like 50 languages.
- 5 months ago, o1 releases. This is a big moment for scaling compute at test time, which is a major current research direction in ML. Shows huge improvements (something like 8x over 4) on some math/reasoning benchmarks. Within months, we have o3 and o4, which substantially improve these scores even further.
- In February of this year, we get 4.5, and then months later, the confusingly named 4.1, which shows improvements over 4o.
So to be clear, in 2019 we had an interesting research project that only a few people could tinker with.
18 months later, we had a better model that you could play with via an API, but was still a toy.
It took more than two years to go from that to ChatGPT, and a few more months (nearly 3 years total) to get to the "useful" version of ChatGPT that really set the world on fire. In all, it took roughly four and a half years to go from "novelty text generation" to "useful text generation".
In the 2 years since then, we've gotten multimodal models, a new class of reasoning models, baseline improvement across performance, and more. If anything, there is more fundamental research and wider variety of directions now (the kind of stuff that shifts paradigms) than before.
Yeah, sorry, but your view is not supported by data. There are billions of $ spent on tokens every year (between OpenAI, Anthropic, Google, etc.). This is not a toy anymore. People are using it, one way or another, so it's useful to them - to the tune of billions of dollars. How useful it'll turn out to be in the abstract is still up for debate, but it is useful today.
If the product were to disappear tomorrow, I doubt the real economy would notice the loss of 0.05% in productivity.
[1] https://www.technologyreview.com/2025/02/25/1111207/a-nobel-...
It’s also improved my ability to analyze data, generating graphs and insights. But I still need to run that last mile myself, because I can’t fully trust its output. Same for web search actually, when I have a need to be comprehensive.
That's the step that still seems to be missing
Among knowledge workers in general, ChatGPT is used widely for basically any task that requires writing or researching.
This is obviously not going to be true for every single knowledge worker in every single role, and it seems that you don't find it particularly useful, but the volume of paying users is hard to dismiss out of hand.
I disagree with 3.0, but perhaps that feels true for 4.0 or even 3.5 for some queries.
The reason is that when LLMs are asked questions whose answers can be interpolated or retrieved from their training data, they will likely use widely accepted human knowledge or patterns to compose their responses. (This is a simplification of how LLMs work, just to illustrate the key point here.) This knowledge has been refined and has evolved through decades of human experiments and experiences.
Domain experts of varying intelligence will likely come up with similar replies on these largely routine questions as well.
The difference shows up when you pose a query that demands deep reasoning or requires expertise in multiple fields. Then, frontier reasoning models like o3 can sometimes form creative solutions that are not textbook answers.
I strongly suspect that Reinforcement Learning with feedback from high-quality simulations or real environments will be key for these models' capabilities to surpass those of human experts.
Superhuman milestones, equivalent to those achieved by AlphaGo and AlphaZero between 2016 and 2018, might be reached in several fields over the coming years. This will likely happen first in fields with rapid feedback loops and highly accurate simulators, e.g. math problem solving (as opposed to novel mathematical research), coding (as opposed to product innovation).
I think that's the key threshold all these companies have been running up against, and crossing it would be the paradigm shift we keep hearing about. But they've been trying for years, and haven't done it yet, and seem to be plateauing
And then in OpenAI's case specifically- this tech has become commoditized really quickly. They have several direct competitors, including a few that are open, and their only real moat is their brand and Sam's fundraising ability. Their UX is one of the best right now, but that isn't really a moat
It doesn't really matter how speculative the AGI bet is, their consumer AI business by itself is basically guaranteed to drown them in money. The only reason they're making losses at the moment is because they're choosing not to monetize their free tier users with ads, presumably since they don't need to make a profit and can prioritize growth.
But the moment they flip the advertising switch, their traffic will be both highly monetizable and ludicrously high margin.
1. People won’t ultimately go download ChatGPT.app or use a website — they’re going to be using the functionality through structured services in iOS and Android, and it’s necessarily going to be under the control of the OS vendors for security/privacy reasons. This doesn’t mean Apple and Google own the LLMs — there will be consumer choice for antitrust reasons if nothing else — but the operating system has to be a conduit for access to your data, and also for unified user experience. Which means advertising will be limited.
2. Say it does go the way you think it will — what prevents a real non-profit, open source LLM from taking it away from the commercial players? There really is no moat (other than money, energy, data center space).
But it seems pretty obvious that the math must be based on some incorrect assumptions. The unit inference costs of high quality LLMs are much lower than the unit costs of serving high quality web search queries. The LLM costs have also been decreasing rapidly -- 1000x in two years seems like a fair estimate -- while search engine costs haven't. (If anything they've been going up.)
And web search is obviously a business that is very profitable with the ad model, despite those higher unit costs.
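To put that "1000x in two years" estimate in perspective, the implied pace of the decline:

```python
# Implied pace of a "1000x cheaper in two years" cost decline.
factor, months = 1000, 24
print(f"~{factor ** (1 / months):.2f}x cheaper each month")  # ~1.33x
print(f"~{factor ** (12 / months):.0f}x cheaper each year")  # ~32x
```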
> OAI will need more than ChatGPT, pro and "free" to be anything close to web search for ad revenue.
I don't understand this part at all. Revenue is proportional to traffic and ad rates. Why would the ad rates be lower for chatbots? Or why would their traffic obviously be lower?
This is also moving the goalposts. Search ad revenue is what, a $250B/year market? OpenAI doesn't need anywhere near that much revenue to be fabulously profitable. A tenth of it would already be more than enough.
I think most people will continue to use Google and its Gemini-generated summaries at the top.
They already have half a billion MAUs. I'm pretty sure what I said is true already on that user base. It's not contingent on chatbots replacing search as a whole, all that's needed is their existing traffic having a decent proportion of monetizable queries (this seems basically certain) and for them to come up with ad formats that are effective without driving users away.
- Go down to your local Ray-Ban store.
- Ask to play with a pear of the Meta glasses.
- Activate "Live AI mode"
- Have a real time video conversation with an AI which can see what you see, translate between languages, read text, recognize objects, and interact with the real world.
Contrary to your (potentially misremembered?) history, nothing at all like this was possible in 2019. I remember finetuning an early GPT-2 (before they even released the 2B model!) on a large corpus of Star Wars novels and being impressed that it would mention "Luke" when I ran the produced model! Now I wear it on my head and read restaurant menus with it. Use it to find my Uber (what kind of car is that?) Today I am building my raised garden beds out back and reading the various soil amendments I purchased, talking about how much bloodmeal to put over the hugelkultur layer, having it do math, and generally having a pair of eyeballs. I'm blind. The amount of utility I get out of these things is ... very hard to overstate.
If this is "moribund," sign me up for more decay.
It’s the “boring” stuff that’s interesting: automating drudgery work, a better way to research, etc.
I’ve been predicting for years that glasses — whether AR or VR — are and will remain niche. I don’t think most people want them.
Which makes sense, really. Buying an expensive pair of glasses so that I can point at an object and say “what is that” is a cool parlour trick but I can imagine very few scenarios where I’ve wished I had that functionality.
Realtime translation… absolutely a feature I’d use. For a week a year, max. IMO the killer, everyday application just isn’t there yet. I’m still not sure what it would be.
Even post-dot-com-bust, there was still a generally shared understanding that you probably shouldn't meet people you met on the Internet and you shouldn't get in strangers' cars. Today we use the Internet to summon strangers to us precisely so we can get in their cars, and it's completely banal to do so.
These shifts take time and in the early-adopter stage the applications look half-baked, or solutions looking for a problem. That's because most of them are. The few that aren't will accrete over time until there's a solid base.
So, lots of people were meeting online in 1995. 5 years later couples who met on the internet were all over the place. When I moved to Silicon Valley in 2000, "I met my wife online" was hardly even a conversation starter. (I did meet my wife at The State of Insanity web chat boards in 1995.)
Also, from 1998-2008, the world invested precisely what it has now invested in the last decade of LLM AIs, and the internet was already adding over a trillion dollars annually to the global economy. AI, with that same spending over about the same amount of time, is still an economic black hole, and no one has any answers that would satisfy anyone other than VCs about how it's all going to pay for itself.
The internet, and particularly email, IM, and the web, were going strong in the 90s and by the mid-2000s were absolutely dominant. This re-writing of history suggests that AI deserves more runway to burn hundreds of billions more on things that will have no staying power, unlike the trillion dollars in internet build-out between '98 and '08.
These things don't take time. The whole broadband internet build-out took about a decade and about as much money as AI has gotten in this latest bubble. At least when the dot-com bubble burst, we had infrastructure with staying power and real societal value, unlike AI, which will leave us with next to nothing when its bubble pops.
If this is actually making software easier to make where is the software?
You are referencing a later period where regular folks were piling onto the internet and giving each other advice. And maybe that advice was geared towards novices and/or children, coming from other novices, and steeped in a lot of urban legend fears of the unknown. Even the academic types started to embrace these ideas, lending credence to the idea that their magical garden was now turning into hunting grounds.
But imagine yourself in the early 90s, mapping your way (on paper, of course) to a trailer home in Silicon Valley or a secluded dirt road in the Santa Cruz mountains. You've never been there before, and your goal is to meet people you've only ever seen as text on a screen. You assume they are real people but, technically, it could all be sock puppets.
I did those things! It wasn't a horror movie. Instead it led to light socializing and drinking.
Maybe. What I remember is that the niche parts of the Internet remained niche. Everybody seemed excited about email, kids were excited about instant messaging, etc, and fundamentally that never changed; people have continued to use the Internet for communication. And the dotcom boom was a massive, stupid frenzy fueled by investors who had no idea what they were doing or what they were buying, and just wanted to make sure they didn't miss the next Microsoft, but the bubble was grounded in the accurate prediction that the web would become an essential part of business and an enormous money maker.
Does all of this sound right? Please correct me if I'm wrong. I find myself in the unexpected, uncomfortable position of wildly contradicting and undermining what I thought I understood: that people are terrible at predicting the future of technology.
Broadly speaking, in the 80s and 90s, the outlines of the future of technology were obvious and unmistakable: computers would become faster and more connected; they'd take over or supplement a widening range of tasks and parts of our lives.
But it also can be hard to remember the way you understood the world in the past, or the way things were, because your understanding of the present overwrites it. Remember when VR was going to be huge? Remember when 3d chat was going to be the next big thing? Remember when TV was going to be hugely important in the classroom? Remember when social media was going to make our lives better?
People are making predictions regarding AI (or "AI") -- that it will do Everything Everywhere Very Soon, that it will lead to a 10x increase in worker output -- that strike me as absolutely, obviously wrong, even risible.
This is why it sucks that Glass was shot down when that technology will soar in a few decades.
People need to be more forward looking and open to change imo. It affects society in very negative ways. Imagine if people had planned for social media when it was first becoming a thing. We'd have safety nets, and an actual fucking plan.
Maybe it's me having an extremely low imagination, but that stuff has existed for a while in the shape of Google Lens and the various vision flavors of LLMs, and I must have used them... 3 times in years, and not once did I think "Gosh, I wish I could just ask a question aloud while walking down the street about this building and wait for the answer." Either it's important enough that I want to see the Wikipedia page straight from Google Maps and read the whole lot, or it's not.
> an AI which can read text, recognize objects, and interact with the real world.
I can already do that pretty well with my eyeballs, and I don't need to worry about hallucinations, privacy, bad phone signal, or my bad English accent. I get that it's certainly an amazing tool for people with vision impairments, but that is not the market Meta/OpenAI are aiming for and forcefully trying to shove it into.
So yes, mayyybe if I am in a foreign country I could see a use but I usually want to get _away_ from technology on vacation. So I really don't see the point, but it seems that they believe I am the target audience?
I see. Perhaps your eyeballs missed the part where I said I'm blind?
The entire purpose of my comment was to push back against this idea that AI is stuck in 2022. It's weird and nonsensical and seems disingenuous, especially when I say "here are things I can do now that I couldn't do before" and the general response is "but I don't need to do those things!"
>>> I get that it's certainly an amazing tool for people with vision impairments, but that is not the market Meta/OpenAI are aiming for and forcefully trying to shove it into.
I think anyone saying AI has no use is being willfully ignorant, but like every hype cycle before it since mobile (the last big paradigm shift), IMO it's going to result in a few useful applications and not the paradigm shift promised.
I think a charitable reading of this thread is simply: AI as a large technology leap is still developing a business case that can pay for all the hype, not to mention its operating cost.
What has been the impact of OpenAI and Meta glasses/headsets on the blind community at large?
Based on your statements it seems that the real value of AI is increasing the participation rate of visually impaired people in the global workforce.
If Elon or Sam can convince governments and insurance companies to pay for AI-powered glasses as a healthcare necessity maybe there’s a pathway forward for AI and VC class after all.
…maybe that’s the real game plan for Marc Andreessen, Kanye, Elon and the others.
They’re not really Nazis, just early adopters choosing the “innovative freedom” promised by Emperor Palpatine and the Sith over the slow march of the Senators of the Republic.
Sorry, I’ve been watching too much Andor.
I somehow agree with the OP that I don’t think I’m much closer to hiring ChatGPT for a real job in 2025 than I was in 2022, but also with you that there has been meaningful progress. And in particular, products that are transformative for disabled people are usually big improvements to the status quo for abled people too (OXO Good Grips being the classic example: transformative for people with arthritis, and generally just better for everybody else).
Every time I’ve encountered an AI first-line support agent I still find myself looking for the quickest escalation path to a real human just like before.
See, I always start with conversations about things I already know about, and they bullshit me enough that I'm wary of Gell-Mann Amnesia when asking them about things I don't know about. They output a lot of things that seem plausible but the way they blend fact and fiction with no regard for the truth keeps me extremely distrusting of them.
That is to say: after your conversation, did you ask for citations and go read the primary sources? Because if you did not, the model likely misled you about Postgres in subtle ways.
Like "can a ps2 game use the ps1 hardware" it gave a noncommital, hallucinated answer. Then when asked to list sources it "searched the Internet" where all the links were from searches like "reddit ps2" etc.
There's been a stepwise jump in the capabilities of AI that's changed "products" from mostly fun to actually useful
Ironically, this typo is very likely a result of AI dictation making a mistake. There are a lot of common misspellings in English, like "their" and "there", but I've never seen a human confuse "pair" and "pear".
So yeah, there are cool demos you can do that you couldn't five years ago. But whether any of those cool demos actually translate into something useful in day-to-day life where the benefits outweigh the costs and risks is far from clear.
This certainly provides benefit to those with limited vision, which is great. But that is a very small segment of consumers. Besides those, how many other people do you know who are actually _using these glasses_ in the real world?
Google Glass came out 10 years ago.
Does it? An anecdote from yesterday: my wife was asked to do a lit review as an assignment for nursing school. Her professor sent her a set of example papers on the topic, with a brief "relevance" summary for each. My wife asked me for help, as she was frustrated that she couldn't find any of the referenced papers online (she's not the most adept at technology and figured she was doing something wrong). I took one look at the email from her professor and could tell just by the formatting that it was LLM-generated (which model, I don't know, but obviously a 2025 model). The professor didn't say anything about using an LLM, and my wife didn't suspect that might be the case.
My wife and I did some Google Scholar searches, and _every_ _single_ _one_ of the 5 papers cited did _not_ exist. In 2 of the cases, similar papers did exist, but with different authors or a different title that resembled the fake "citation". The three others did not exist in any form - there were other papers on the same subject, sure, but nothing closely resembling the "citations" either in terms of authorship or title.
For a long time Silicon Valley tried to avoid politics, or at least that was the vibe. But even in the 1980s, when Japan was rising and threatening to outcompete the US semiconductor industry, Captains of Industry who had been strident libertarians went hat in hand to Uncle Sam to tilt the field in favor of their survival.
"avoid politics" was true for some stripes of participants and completely not true for others.. It is true IMHO that consumer gear generated a lot of success and was largely apolitical.
The Nazis very happily and intentionally worked together with corporations and the corporations were happy to exploit the free slave labor and lack of competition.
Are you just using "fascist dictatorships" as a generic label for things you don't like? The things you've listed might be bad, but they're neither dictatorial nor fascist. It's even questionable whether some of them are bad at all. Don't we all try to minimize our tax burden? Is there anyone out there who refuses tax credits because that means "paying less taxes"?
That has been the pattern for the last 10 years.
> The things you've listed might be bad, but they're neither dictatorial nor fascist.
Uhh, I'm pretty sure that CEOs/executives act very similarly to dictators. Large companies certainly don't act like democracies. Companies often employ many forms of totalitarian control used by fascist dictatorships. There's often mass surveillance (mouse trackers, email auditing, etc.), suppression of speech, suppression of opposition, fear of termination, cult of personality.
The tax stuff is irrelevant imo though
Where does employment/voluntary association end and "fascist dictator" begin? If you're being paid for your time, it's only fair that whoever's paying you can monitor your work and decide what you're doing. I agree that some businesses go beyond this and try to regulate what you do outside of work, but it's a stretch to make a broad claim like "businesses are tiny little fascist dictatorships". That makes as much sense as "governments are tiny little fascist dictatorships", just because some of them are authoritarian.
> The report somehow fails to mention the bit where the Silicon Valley VC and executive crowd worked their backsides off to elect Trump and several of them sat in the front row at his inauguration. Then they were actually surprised when the leopard ate their faces too.
They vibe with Trump because they have the same training, and they've done very little actual democratic governance. Very little thinking about the common good. You can argue most companies are actually more like benign dictatorships, but that's irrelevant.
To be fair I'm often a fan of markets, but not when the companies are monopolies larger than most nation states, actively increasing inequality and fighting counters like regulation/unions, not to mention affecting elections like fb/musk. In that case it's not voluntary. Wikipedia has an entire section on market failures https://en.wikipedia.org/wiki/Market_failure
All I am saying is that there certainly are similarities between the way fascist governments and large corporations operate, not that they are the same thing.
Based on your response, it sounds like you agree that companies often act in an authoritarian manner; it's just that you think it is justified in some way.
To be clear, I am not making a value statement here, I am just pointing out similarities between two systems. I don't claim to have better systems for managing corporations. Tbh, I wouldn't want the majority of my coworkers calling the shots, and if I were CEO, I would work to consolidate power.
I disagree. It is authoritarian to assume ownership over someone's body. It doesn't matter how much you've paid. You cannot compel someone to labor.
Strong unions are another alternative to totalitarian control of companies. Not ideal, but there are plenty of examples throughout history.
I'm not claiming these alternatives are better or worse, I'm just pointing out that other systems are possible and already exist.
Fwiw, whenever my team has done democratic planning it has always led to bad outcomes
One member one vote doesn't seem very imaginative.
Compared to a dictator, a focused team effort will have better results, but a set of people who don't care, or who have an overly limited grasp of the topic, won't do well. This probably doesn't matter too much if things are going well.
I fool around with the concept of department-specific voting certificates, with each component of the department written into its own "law" that one can vote yes/no on, or vote to remove. Each cert adds weight to the vote. The people writing the rules are elected by the same mechanic. To activate a rule or board member needs 55% "yes"; to deactivate needs 55% "no"; to remove needs 65%.
One can participate in all departments and each certificate comes with a small pay raise.
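A toy sketch of those thresholds (the names and weights are hypothetical, standing in for the certificate-weighted votes):

```python
# Toy tally for the certificate scheme above: activate at 55% yes,
# deactivate at 55% no, remove at 65% no. votes = [(weight, "yes"/"no")].
def tally(votes):
    total = sum(w for w, _ in votes)
    yes = sum(w for w, c in votes if c == "yes")
    no = total - yes
    if no / total >= 0.65:
        return "remove"
    if no / total >= 0.55:
        return "deactivate"
    if yes / total >= 0.55:
        return "activate"
    return "no change"

print(tally([(2, "yes"), (1, "yes"), (1, "no")]))  # 75% yes -> "activate"
```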
Wall Street is built on looting the pensions of retirees.
The American economy has always been “already absurdly rich person takes 9 of the widgets worker produced and leaves workers 1 to share.”
In the money printer era this allowed big salaries for some compared to the norm but the rich still took their 9 too.
We wrapped mafia like thug behavior of the pre and post war world into empty semantics.
The only option for unlocking a massive amount of liquidity for the public to stabilize their individual situations is taxing the rich.
A rich man, a blue-collar worker, and an immigrant are sitting around a table. In the center of the table is a plate of a dozen cookies. The rich man takes 11 of them, then leans in and whispers to the blue-collar worker "hey, I think that guy wants your cookie."
All our best thinkers, the propaganda goes. Their best idea for the economy is to submit to the status quo.
They live the experience every day that, as knowledge workers with little real-world skill at providing for themselves, it's actually they who are the most easily manipulated and cowed.
VC money is in a constant state of FOMO; this is nothing new. Companies dress up as AI, or web3, or web2, or fintech, or whatever, to more easily attract capital. If 57.9% of dollars went to AI startups this year, it's not because everything is AI; I would bet 25% are just companies that tacked AI onto an unrelated business model, and it's skewing the statistics. I promise you that 10 years from now 57.9% of VC funding will be going to some other buzzword, and it's not going to be AI.
If you can raise money at a low cost, allowing you to grow NOW as opposed to growing later? All things being equal, that's the better choice.
NB: This is a toy model, terms and conditions apply.
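Spelling the toy model out with invented numbers: raising buys you the same growth rate starting earlier, and the head start compounds:

```python
# "Grow now vs. grow later" with made-up numbers: same 80% YoY growth
# rate, but the funded company starts compounding two years earlier.
def revenue_multiple(years_growing, rate=0.8):
    return (1 + rate) ** years_growing

horizon = 6
print(f"raised:       {revenue_multiple(horizon):.1f}x")      # ~34.0x
print(f"bootstrapped: {revenue_multiple(horizon - 2):.1f}x")  # ~10.5x
```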
Don't change HN.
Venture capital is definitely not an efficient market. But I’m not sure what your point is.
When you need a dash of convincing b******t, they are excellent generators.
I have seen way too many products suddenly become ProductX and now ProductX-AI by simply adding a RAG-powered document-conversation popup.
There is no way both of those are even remotely true.
My story is that OpenStack didn't really work out but, if you were serious about the cloud thing, you sort of had to hop aboard even if, in general, the landscape ended up playing out differently with containers.
Broadcom is to thank for that.
And yeah, probably a sizable share of that is shops moving away from VMware but not ready for the containers momentum, which is ironically not a small part of VMware's own initial momentum away from physical servers.
Many went to Nutanix for virtualization and containerization, but many shops will stay virtualization-only for security and governance reasons for a long time.
Many of these people are still running the Trump Administration from the shadows. Elon Musk turns out to just be one example of the billionaire to right wing brainrot lunatic infohazard funnel.
It mentions the Semafor story. Here's Adam Conover covering that - https://youtu.be/3_PKKUFxRyk?feature=shared
To me, it seems VCs figured out a way to market a very specific way to build companies and convinced a lot of people it was the only way, for 20-ish years. Then there was a shift to selling to enterprise, I think because B2C got harder and easy money was the goal. By then a lot of enterprise decision makers were probably in the networks of the people selling. There's a meme about YC-of-late being mostly companies that sell shovels to each other.
But when you optimize for enterprise, I think you end up losing a lot of diversity of opinion in where the value comes from, which leads to top-heavy companies.
My main issue is that after the ZIRP era I don't believe the money is gone or unavailable. It just seems to be hoarded for some reason. There is astronomical wealth out there that could be used for trying new economic models that compete with the last generation of VCs. But it isn't happening.
Maybe the next era of VC decision makers, the ones who themselves were funded on big bets, just don't have the same appetite for risk? Or maybe the era of "developing your brand" has made them not want to share their success? I'm not sure but it's weird to me.
AI actually seems like a great fit for the VC business model, much more so than most SaaS companies are. Successes are likely to make a ton of money and they can't self finance or finance with debt because they need to spend a huge amount of money.
Sure, they would prefer to make money through carry, but the management fee is a nice downside protection.
Most funds have management fees in the 1-2% range and a carry at around 20%. VC is a power curve, where a couple of large funds have an outsized impact.
And if a fund or VC (from associate to partner) cannot deliver, your career in the space is basically over.
Quick example: a company founded in the last three or four months that provides appointment setting and calendar management for a single healthcare vertical. They are already profitable. There are at least 5,000 market verticals like this in the US alone.
Tech to provide this is going to keep commoditizing, and that will leave early entrants as wealthy incumbents; journalism telling people to be certain that the opposite will happen is borderline irresponsible, and certainly misses the situation, full stop.
Am I missing anything? What's the differentiation?
But instead, it's about specializing your product to the needs of specific customers / finding a niche and then provide exactly the service they desire. How is that in any way vertical?
Etymology is strange
Like, I thought the whole point with VC was to find ideas that could 100x your return, which by definition would be horizontal markets.
For an alternative take check out say the FT "AI frenzy leads US venture capital to biggest splurge in three years" https://news.ycombinator.com/item?id=40928248
(for amusement here he is going on about bitcoin being a pump and dump when it was $16 in 2011 https://newstechnica.com/2011/06/18/bitcoin-to-revolutionise...)
https://nvca.org/wp-content/uploads/2025/04/Q1-2025-PitchBoo...
Anyone even close to the space will tell you there are a million tiny AI-adjacent startups getting funded right now. (If anyone is at one and looking for an early engineering hire, please do reach out; I have the background but haven't found anything in my network yet.)
The problem is, scaling was ALWAYS the hard part. Below a certain level, you don't have to worry about sharding and replicating databases, moving over to NoSQL, async race conditions, etc. Why bet the house on one business idea when you can have 10 "Micro-SaaSes" that are all bootstrapped but might each make 10-20k in MRR?
In the day and age where the average business person has like 20-30 subscriptions for random tools, emails, websites, marketing, email lists, automations, SaaS products, freelancers, etc. it very much lends itself to the micro model.
The 'VC' business model is starting to break down. Just from looking around YouTube and Indie Hackers, most of the successful businesses nowadays are bootstrapped, where the founder has some kind of community where they blog, YouTube, have a Patreon, X, etc. They become the brand and they have no use for VCs. As soon as they launch a new app idea, they have 200K people on Twitter and 150K people on YouTube that will at least give the app a look.
https://techcrunch.com/2011/06/20/dropbox-security-bug-made-...
https://www.macrumors.com/how-to/temporarily-fix-macos-high-...
It's a nightmare out there.
You are the exact people that I am trying to avoid with this model. I'm not trying to make big deals with big companies who can be impacted by security. The Micro-SaaS model requires that when I get a client asking me those kinds of questions, I run from them and tell them my tool's probably not for them. Any app that requires transferring sensitive data shouldn't be done on the micro-SaaS model.
Micro-SaaS requires small, simple tools that may be low-hanging fruit. Sometimes they aren't micro-SaaSes, but just random tools that make money for you by creating a glorified OpenAI wrapper and a bunch of integrations. Honestly, a lot of the tools I see that make money for people are made on Make or Replit. No code even required, but definitely not going after the "we need sensitive info or PII" market.
All payments just go through their respective provider so not really a risk there too.
Can I apply for YC again and get my annual rejection? So I can cry upper middle class tears.
I really need a business partner to keep me focused on features people actually want.
But my main business friend is focused on much more important things (raising a new family) now.
Thinking about what's more important right now: making some games I know will make no money, or creating a B2B startup that will also make no money.
They'd peddle FOMO and promise El Dorado to everyone who joined. "AI will do everything for you, you'll be fired."
Now I see them change the tone. It's like: "c'mon, it's a special tool, you need to use it properly, and give it what it needs."
They sell quickly made courses. Same guys who in '12 would advertise "mobile strategies" consulting (remember that thing here on HN?), then AR with Google Glass in '14, then crypto in '17, then web3, and so on.
I'm curious how centralized the operation is. Individually it's a bunch of hustlers running their own little personal branding operation, but if each of them is in a masterclass, and the masterclass leaders are in a masterclass, has a small group of mega-influencers formed, and who are they?
"of course, AI is just another tool, and has its niche" :D
Yes, many AI companies will go to zero. This is how every tech bubble works, and innovation happens by people trying stuff and mostly failing. But in the end the survivors will remake the world for the better. Very sad to see this sort of drivel being popular here.
I’m not exactly sure how prevalent this mindset is, but I’ve talked with lots of founders within the past couple of years and I’ve encountered this mindset a lot, which, to me, is a huge contrast to the mindset I encountered, e.g., about ten years ago when I was a young founder and I was raising a small round (mostly from angels)—back then, it seemed like every single founder was chasing venture capital non-stop and usually was already thinking about their next round before even closing their current round.
If this mindset is (or becomes) prevalent, unless a lot of these startups are quickly acquired for large sums, is venture capital, as it currently exists, ready to deal with this shift?
[1] Implying that they blindly follow anything and everything. The origin of this metaphor has since been debunked, but the metaphor itself lives on. https://www.adfg.alaska.gov/index.cfm?adfg=wildlifenews.view...
Aside: I shed a few tears reading the article. Since the death of n-gate.com nothing like it has existed, until today, when I found pivot-to-ai.com.
>Just as “internet” evolved from buzzword to business backbone, AI is following the same playbook.
>No it isn’t! ... Stop saying dumb things!
Meanwhile the tech is cracking along https://x.com/waitbutwhy/status/1919870578502021257
see Thomas Piketty .. this will get worse before it gets better