
Synthetic Chromatophores for Color and Pattern Morphing Skins

https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.202505104
1•PaulHoule•55s ago•0 comments

Cluely filed a DMCA takedown for tweet about their system prompt

https://twitter.com/jackhcable/status/1942636823525679182
1•taytus•1m ago•0 comments

Words Don't Compile

https://blog.surkar.in/words-dont-compile
1•manthan1674•1m ago•0 comments

Facial recognition cameras could be introduced to tackle fare dodging on Tube

https://www.standard.co.uk/news/transport/facial-recognition-cameras-fare-dodging-tube-london-underground-tfl-b1237049.html
1•pseudolus•1m ago•0 comments

Dynamical origin of Theia, the last giant impactor on Earth

https://arxiv.org/abs/2507.01826
1•bikenaga•2m ago•0 comments

Judge rules that VMware must support crucial Dutch government agency migration

https://www.theregister.com/2025/06/30/dutch_agency_wins_right_to/
1•Logans_Run•3m ago•0 comments

Skia Graphite: Chrome's rasterization back end for the future

https://blog.chromium.org/2025/07/introducing-skia-graphite-chromes.html
2•ingve•4m ago•0 comments

Google's Moonshot Project Gears Up for Human Trial of AI-Designed Drugs

https://in.mashable.com/science/96798/googles-secret-moonshot-project-gears-up-for-human-trail-of-ai-designed-drugs
1•Bluestein•4m ago•0 comments

What Gets Measured, AI Will Automate

https://hbr.org/2025/06/what-gets-measured-ai-will-automate
1•Michelangelo11•4m ago•0 comments

June.so Acquired by Amplitude

https://www.june.so/blog/a-new-chapter
1•camjw•4m ago•0 comments

In Hiroshima, search for remains keeps war alive for lone volunteer

https://www.reuters.com/world/hiroshima-search-remains-keeps-war-alive-lone-volunteer-2025-07-08/
1•speckx•6m ago•0 comments

All living NASA science chiefs unite in opposition to unprecedented budget cuts

https://www.planetary.org/press-releases/nasa-science-chiefs-letter-press-release
2•consumer451•7m ago•0 comments

Thunderbird 140

https://www.thunderbird.net/en-US/thunderbird/140.0/releasenotes/
2•doener•8m ago•0 comments

Show HN: Vibes – Discover music through human stories, not algorithms

https://sharevibes.app/
2•lucascliberato•9m ago•1 comments

Framework 12 Platform Tuning for Better Performance or Power Efficiency

https://www.phoronix.com/review/framework-12-performance
2•doener•9m ago•1 comments

Mastodon's latest update readies the app for Quote Posts

https://techcrunch.com/2025/07/08/mastodons-latest-update-readies-the-app-for-quote-posts-revamps-design/
1•doener•9m ago•0 comments

What if the moon turned into a black hole? [Xkcd's What If?] [video]

https://www.youtube.com/watch?v=UQgw50GQu1A
1•nfriedly•9m ago•0 comments

Brut: A New Web Framework for Ruby

https://naildrivin5.com/blog/2025/07/08/brut-a-new-web-framework-for-ruby.html
6•onnnon•10m ago•0 comments

We're testing a way to auto-update docs from Slack/Zoom/email. Thoughts?

https://getautobase.com/
1•ElfDragon11•10m ago•1 comments

Rooktook.com – daily chess tournament app

1•shubhamrrawal•15m ago•0 comments

Copy/paste text to highlight AI writing patterns like "It's not X. It's Y"

https://unaiify.com/
1•justinowings•15m ago•2 comments

Show HN: A simple business management tool for small business owners

https://github.com/oitcode/samarium
1•azaz12•19m ago•0 comments

Mount Rainier Currently Experiencing an Earthquake Swarm

https://volcanoes.usgs.gov/hans-public/notice/DOI-USGS-CVO-2025-07-08T14%3A41%3A41%2B00%3A00
7•jandrewrogers•20m ago•0 comments

Amazon asked corporate employees to help fulfill deliveries for Prime Day

https://www.engadget.com/big-tech/amazon-asked-corporate-employees-to-help-fulfill-grocery-deliveries-for-prime-day-131022042.html
2•bartekrutkowski•20m ago•0 comments

LLM-Ready Training Dataset for Apple's Foundation Models (iOS 26)

https://rileyhealth.gumroad.com/l/bwoqe
1•rileygersh•21m ago•0 comments

Announcing TypeScript 5.9 Beta

https://devblogs.microsoft.com/typescript/announcing-typescript-5-9-beta/
1•zackify•21m ago•0 comments

Show HN: We built a search engine to find properties by physical condition

https://www.casafy.ai/
1•jbuchananr•22m ago•1 comments

CVE-2025-48384: Breaking Git with a carriage return and cloning RCE

https://dgl.cx/2025/07/git-clone-submodule-cve-2025-48384
60•dgl•25m ago•2 comments

Supabase MCP leaks your entire SQL Database, a lethal trifecta attack

https://simonwillison.net/2025/Jul/6/supabase-mcp-lethal-trifecta/
7•rexpository•26m ago•0 comments

What goes wrong when we write ghazals in English

https://www.theparisreview.org/blog/2025/06/24/what-goes-wrong-when-we-write-ghazals-in-english/
1•ishita159•26m ago•0 comments

Blind to Disruption – The CEOs Who Missed the Future

https://steveblank.com/2025/07/08/blind-to-disruption-the-ceos-who-missed-the-future/
48•ArmageddonIt•4h ago

Comments

johnea•3h ago
The article seemed more apropos to the US automobile industry than SaaS.
johncole•3h ago
> Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad.

I like this quote. But this analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike previous artisans making carriages, the CEOs saying these things have very clear motivations to make you believe the hype.

bluefirebrand•2h ago
I'm not sure I agree much

Cynically, there's no difference from a CEO's perspective between a human employee and a horse

They are both expenses that the CEO would probably prefer to do without whenever possible. A line item on a balance sheet, nothing more

johncole•2h ago
I think CEOs that think this way are a self-fulfilling prophecy of doom. If they think of their employees as cogs that can be replaced, they get cogs that can be replaced.
nemomarx•1h ago
Isn't this good for the CEO? If your employees aren't cogs, then what do you do if they leave? The more replaceable they are, the better bargaining power you have as a capitalist, right?
johncole•1h ago
If you have all cogs, the scope of your business is almost always local. You're running a lawn mowing business or a Subway. And I'm not denigrating those businesses, just making the point that they're not the bulk of the economy. If you're running a serious business, part of your business may be cogs, but there's a very important layer of non-cogs that you spend most of your time recruiting, keeping, and guiding. These folks are irreplaceable.
bluefirebrand•1h ago
Doesn't matter

The median CEO salary is in the millions, they do not have to ever worry about money again if they can just stick around for one CEO gig for a couple of years

Granted, people who become CEOs are not likely to think this way

But the fact is that when people have so much money they could retire immediately with no consequences, they are basically impossible for a business to hold accountable outside of actual illegal activity

And let's be real. Often it's difficult to even hold them accountable for actual illegal activity too

johncole•1h ago
If you’re playing at that level, you’re not thinking about subsistence living and never having to work again. You are driven by ego, by winning, by legacy. All three incentivize you to do well if your board consists of non-asshats. You are playing a multi-shot game.
WillAdams•2h ago
Moreover, there was at least one company which did pivot --- the Chevy Malibu station wagon my family owned in the mid-70s had a badge on the door openings:

>Body by Fisher

which had an image of the carriages which they had previously made.

1oooqooq•3h ago
Nice article, but then it ends with the brain-dead "jump on [current fad]".

If this was published a few months ago, it would be telling everyone to jump into web3.

johncole•2h ago
Has that ended well?
nailer•2h ago
HN (not YC, who readily invest in blockchain companies) is usually about a decade out regarding blockchain knowledge. Paying 2-6% of all your transactions to intermediaries of varying value-add may seem sensible to you. That's fine.
graemep•2h ago
Credit cards are not the only alternative to crypto currencies.

My bank transfers within the country cost me nothing to send or receive, for example.

bryanlarsen•2h ago
Merchants aren't the customer target for credit cards, consumers are. Credit card payments are reversible and provide a reward. There are lots of options available that are better for merchants than credit cards (cash, debit cards, transfers, etc). But they all lose because the consumer prefers credit cards.
nailer•2h ago
Yes, that's the varying value-add mentioned in the comment you're replying to. I pay 3.5% of every card transaction to Square. I don't get 3.5% cash/rewards back.
bpt3•1h ago
Do you get a discount for paying with cash (or blockchain)? In general the answer is no, meaning you aren't paying the 3.5% transaction fee, the merchant is.
SoftTalker•1h ago
Cash isn't really great for merchants. You have to handle it, safeguard it, count it, get it to the bank. Many hands are involved in that process, and theft or loss can occur at any of them, or by robbery/burglary. I don't know if it's a break-even with payment card fees, but I bet it is close.
massysett•2h ago
Yes, would have been a much better article if it told us how to be sure AI is the next automobile and that AI is not the next augmented reality, metaverse, blockchain, Segway, or fill-in-your-favorite-fad.
kylecazar•2h ago
I like the historical part of this article, but the current problem is the reverse.

Everyone is jumping on the AI train and forgetting the fundamentals.

nofriend•2h ago
AI will plausibly disrupt everything
jorl17•2h ago
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?

The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked into sufficiently to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved through the use of these technologies, the surface of which we've only barely scratched!

Billions are being poured into trying to use LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.

From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!

I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!

DeepSeaTortoise•2h ago
People dislike the unreliability and not being able to reason about potential failure scenarios.

Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

And lastly, you've gone to great lengths to completely air gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote on an inference cluster?

jorl17•1h ago
I mostly agree with all your points being issues, I just don't see them as roadblocks to the future I mentioned, nor do I find them issues without solutions or workarounds.

Unreliability and difficulty reasoning about potential failure scenarios is tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply would not be possible; (ii) time to market. We are now offloading the parts of the LLM which would be possible with code to code — after we've reached the market (which we have).

> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right, but isn't exactly. I see all of these, but don't see them as roadblocks — not more than I see human error as a roadblock in many cases where these systems I'm thinking about will be going.

With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).

I'll put this another way: Just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?

scottLobster•2h ago
So would a universal cancer vaccine, but no one is acting like it's just around the corner.

I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.

AI as currently marketed is just that with an LLM chatbot.

sz4kerto•2h ago
I definitely don't think so. You're seeing companies who have a lot of publicity on the internet. There are tons of very successful SMBs who have no real idea of what to do with AI, and they're not jumping on it at all. They're at risk.
MangoToupe•2h ago
> They're at risk.

They're at risk of what? It's easy to hand-wave about disruption, but where's the beef?

dingnuts•2h ago
at risk of getting all my business because the big companies think I want to talk to a bot instead of a person lol
jayd16•2h ago
It's only a risk if there's a moat. What's the moat for jumping in early?
nailer•2h ago
From the article:

_____

The first cars were:

- Loud and unreliable

- Expensive and hard to repair

- Starved for fuel in a world with no gas stations

- Unsuitable for the dirt roads of rural America

_____

Reminds me of Linux in the late 90s. Talking to Solaris, HPUX or NT4 advocates, many were sure Linux was not going to succeed because:

- It didn't support multiple processors

- There was nobody to pay for commercial support

- It didn't support the POSIX standard

WillAdams•2h ago
>- Starved for fuel in a world with no gas stations

Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.

>- Unsuitable for the dirt roads of rural America

but the process of improving roads for the new-fangled bicycle was well underway.

bpt3•1h ago
Linux won on cost once it was "good enough". AI isn't free (by any definition of free) and is a long way away from "good enough" to be a general replacement for the status quo in a lot of domains.

In the areas where it does make sense to use it, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.

danbruc•2h ago
Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement; we still need some fundamentally new ideas. Integration with external tools will help but not overcome fundamental limitations. Once the hype is over, I think large language models will have a place as a simpler and more accessible user interface, just like graphical user interfaces displaced a lot of text-based interfaces, and they will be a powerful tool for language processing that is hard or impossible to do with more traditional tools like statistical analysis and so on.

[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.

Davidzheng•2h ago
Sorry, but to call current LLMs a "dead end" is kind of insane if you compare with the previous records at general AI before LLMs. Earlier language models would be happy to be SOTA on 5 random benchmarks (like sentiment or some types of multiple-choice questions), and SOTA otherwise consisted of some AIs that could play like 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's literally an insane level of progress, and even if current techniques don't get to full human level, it will not have been a dead end in any sense.
danbruc•2h ago
I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics, feed it basic school books and explanations and exercises just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.
Davidzheng•34m ago
The current generation of LLMs have very limited ability to learn new skills at inference time. I disagree this means they cannot reason. I think reasoning is by and large a skill which can be taught at training time.
danbruc•2m ago
Do you have an example of some reasoning ability any of the large language models has learned? Or do you just mean that you think, we could train them in principle?
jayd16•2h ago
Something can be much better than before but still be a dead end. Literally a dead end road can take you closer but never get you there.
Davidzheng•54m ago
But dead end to what? All progress eventually plateaus somewhere? It's clearly insanely useful in practice. And do you think there will be any future AGI whose development is not helped by current LLM technology? Even if the architecture is completely different the ability of LLMs to understand humans data automatically is unparalleled.
danbruc•6m ago
To reaching AI that can reason. And sure, as I wrote, large language models might become a relevant component for processing natural language inputs and outputs, but I do not see a path towards large language models becoming able to reason without some fundamentally new ideas. At the moment we try to paper over this deficit by giving large language models access to all kinds of external tools like search engines, compilers, theorem provers, and so on.
myrmidon•2h ago
> The current generation of AI models will turn out to be essentially a dead end.

It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".

To give some pause before dismissing the current state of the art prematurely:

I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.

I would argue that the biggest limitation on current "AI" is that it is architected to not have agency; if you had GPT-3 level intelligence in an easily anthropomorphizable package (furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without even any real technical progress.

danbruc•1h ago
I think the main thing I want from an AI in order to call it intelligent is the ability to reason. I provide an explanation of how long multiplication works and then the AI is capable of multiplying arbitrarily large numbers. And - correct me if I am wrong - large language models can not do this. And this despite probably being exposed to a lot of mathematics during training, whereas in a strong version of this test I would want nothing related to long multiplication in the training data.
myrmidon•1h ago
I'm not sure if popular models cheat at this, but if I ask for it (o3-mini) I get correct results/intermediate values (for 794206 * 43124, chosen randomly).

I do suspect this is only achievable because the model was specifically trained for this.

But the same is true for humans; children can't really "reason themselves" into basic arithmetic-- that's a skill that requires considerable training.

I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.

> in a strong version of this test I would want nothing related to long multiplication in the training data.

Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.

danbruc•41m ago
I just tested it with Copilot with two random 45-digit numbers and it gets it correct by translating it into Python and running it in the background. When I asked it not to use any external tools, it got the first five, the last two, and a handful more digits in the middle correct, out of 90. It also fails to calculate the 45 intermediate products - one number times one digit from the other - including multiplying by zero and one.

The models can do surprisingly large numbers correctly, but they essentially memorized them. As you make the numbers longer and longer, the result becomes garbage. If they would actually reason about it, this would not happen, multiplying those long numbers is not really harder than multiplying two digit numbers, just more time consuming and annoying.

And I do not want the model to figure multiplication out on its own, I want to provide it with what teachers tell children until they get to long multiplication. The only thing where I want to push the AI is to do it for much longer numbers, not only two, three, four digits or whatever you do in primary school.

And the difference is not only in online vs offline; large language models have almost certainly been trained on heaps of basic mathematics, but did not learn to multiply. They can explain to you how to do it because they have seen countless explanations and examples, but they can not actually do it themselves.
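For reference, the schoolbook procedure the thread is debating is mechanically simple: one partial product per digit of the second factor (so 45 partial products for a 45-digit multiplier), each shifted by its digit's position, then summed. A minimal Python sketch of that procedure (`long_multiply` is a hypothetical helper name, not something from the thread):

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook long multiplication on decimal digit strings.

    Forms one partial product per digit of b (a times that single
    digit, with carries), shifts it by the digit's position, and
    sums the partial products -- exactly the intermediate steps
    the comment above says the model fails to produce.
    """
    digits_a = [int(d) for d in reversed(a)]  # least significant first
    partials = []
    for pos, d in enumerate(int(x) for x in reversed(b)):
        # One partial product: a times a single digit, carrying as we go.
        carry = 0
        limbs = []
        for da in digits_a:
            carry, limb = divmod(da * d + carry, 10)
            limbs.append(limb)
        if carry:
            limbs.append(carry)
        partial = int("".join(map(str, reversed(limbs))))
        partials.append(partial * 10 ** pos)  # shift by digit position
    return str(sum(partials))
```

Each partial product is no harder than multiplying by a one-digit number, which is the commenter's point: the procedure scales to any length by repetition, not by extra cleverness.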

jmathai•36m ago
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.

Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).

danbruc•17m ago
Sure, if I ask about things I know nothing about, then I can get something done with little effort. But when I ask about something where I am an expert, then large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitate to use them for things I know nothing about because I am unprepared to judge the quality of the response. As a developer I am an expert on programming and I think I never got something useful out of a large language model beyond pointers to relevant APIs or standards, a very good tool to search through documentation, at least up to the point that it starts hallucinating stuff.

When I wrote dead end, I meant for achieving an AI that can properly reason and knows what it knows and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying fact that one has to double check what the model says.

seeknotfind•2h ago
Innovators Dilemma, mentioned here, is great. If you enjoyed this article, don't overlook that recommendation.
goalieca•2h ago
History is full of examples of execs hedging on the wrong technology, arriving too early, etc.
baggachipz•2h ago
"We're all in on Blockchain! We're all in on VR! We're all in on self-driving! We're all in on NoSQL! We're all in on 3D printing!" The Gartner Hype Cycle is alive and well.
jampa•2h ago
I like Steve's content, but the ending misses the mark.

With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.

I say this as someone who has worked for 7 years implementing AI research for production, from automated hardware testing to accessibility for nonverbals: I don't think founders need to obsess even more than they do now about implementing AI, especially in the front end.

This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing, they have always been here, now they are just more accessible.

The best AI applications are beneath the surface to empower users, Jeff Bezos says that (in 2016!)[1]. You don't see AI as a chatbot in Amazon, you see it for "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."

[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...

jayd16•2h ago
It may be true, but Bezos' comment is also classic smoke blowing. "Oh well, you can't see us using <newest hype machine> or quantify its success, but it's certainly in everything we do!"
anon7000•2h ago
But it's completely true — Amazon undoubtedly has a pretty advanced logistics setup and certainly uses AI all over the place, even if they're not a big AI researcher.

There are a lot of great use cases for ML outside of chatbots

NitpickLawyer•2h ago
> The best AI applications are beneath the surface to empower users

Not this time, tho. ChatGPT is the iphone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and said masses. Working with LLMs pre gpt3.5 was a mess, hackish and "in the background" but way way worse experience overall. Chatgpt made it happen just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iphone presentation.

The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!

You'll probably laugh at this, but a lot of fine-tuning experimentation and gains in the open source world (hell, maybe even at the big labs, but we'll never know) is from the "horny people" using local llms for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how much anti-hype tokens get spent on this subject.

jvanderbot•2h ago
I can strain the analogy just enough to get something useful from it.

If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.

It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.

I'd like to see an analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there are transferable skills and many went on to work on automobiles?

This Stack Exchange question implies there was some transition rather than chaos.

https://history.stackexchange.com/questions/46866/did-any-ca...

Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.

ryanrasti•2h ago
> With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.

Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.

I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:

1. Massive dose of FOMO from leadership terrified of falling behind

2. A fundamental lack of core competency. Many of these companies (I'm talking more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product

baxtr•1h ago
Just today I used the AI service on the amazon product page to get more information about a specific product, basically RAG on the reviews.

So maybe your analysis is outdated?

justinrubek•1h ago
The Amazon store chatbot is amongst the worst implementations I've seen. The old UI, which displayed the customer questions and allowed searching them, was infinitely better.
geoka9•50m ago
FWIW, the old UI (which I agree is better) is still available. Once the "AI search" is done, there's a dropdown you can click and it will show all the reviews that include the word you searched.
dmbche•2h ago
Great read!

I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.

But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.

How many of the 3999 companies that didn't pivot actually had any capacity to do so?

Is it really a lesson in divining the future, or more survivorship bias?

bryanlarsen•2h ago
Agreed. The automobile was two innovations, not one. If Ford had created a carriage assembly line in an alternate history without automobiles, how many carriage makers would he have put out of business? The United States certainly couldn't have supported 4000 carriage assembly lines. Most of those carriage makers did not have the capacity or volume to finance and support an assembly line.
takklz•2h ago
I've listened to so many CEOs in various industries (not just tech) salivating at the potential ability to cut out the software engineering middleman to make their ideas come to life (from PMs, to Engineers, to Managers, etc.). They truly believe the AI revolution is going to make them god's gift to the world.

I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.

Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.

mxfh•1h ago
The historical part completely misses the first boom of EVs from the 1890s to the 1910s, besides mentioning that they existed.

The history of those is the big untold story here.

It doesn't help if you're betting on the right tech too early.

Clearly superior in theory, but lacking significant breakthroughs in battery research, and hampered by the general spottiness of electrification in that era.

Tons of electric vehicle companies existed to promote that comparable tech.

Instead, the handful of combustion engine companies eventually drove everyone else out of the market, not least because gasoline was marketed as more manly.

https://www.theguardian.com/technology/2021/aug/03/lost-hist...

SoftTalker•1h ago
Yep. Too early is as bad as too late. The EV was invented but the supporting technology wasn't there.

Lots of ideas that failed in the first dotcom boom in the late 1990s are popular and successful today but weren't able to find a market at the time.