I like this quote. But this analogy doesn’t exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike the artisans who made carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
Cynically, there's no difference from a CEO's perspective between a human employee and a horse
They are both expenses that the CEO would probably prefer to do without whenever possible. A line item on a balance sheet, nothing more
The median CEO salary is in the millions; they never have to worry about money again if they can just stick around for one CEO gig for a couple of years
Granted, people who become CEOs are not likely to think this way
But the fact is that when people have so much money they could retire immediately with no consequences, they are basically impossible for a business to hold accountable outside of actual illegal activity
And let's be real. Often it's difficult to even hold them accountable for actual illegal activity too
Incentives for CEOs and Executives are just way different, which is actually a huge part of the problem we face in society
We are run into the ground for profit by people who think the purpose of life is to profit
>Body by Fisher
which had an image of the carriages they had previously made.
If this was published a few months ago, it would be telling everyone to jump into web3.
My bank transfers within the country cost me nothing to send or receive, for example.
Everyone is jumping on the AI train and forgetting the fundamentals.
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something I have not looked into sufficiently to judge. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved through the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured into using LLMs and GenAI to solve problems, and into creating the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education, mental health care, and general support for the lonely (elderly and perhaps young?), to the service industry... the list goes on and on and on. So much of my life is better just with what little we have available now that I can't fathom what it's going to be like in 5 years!
I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
And lastly, you've gone to great lengths to completely air-gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote on an inference cluster?
Unreliability and the difficulty of reasoning about potential failure scenarios are tough. I've been going through the rather painful process of taming LLMs to do the things we want them to do, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs, and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply could not be built otherwise, and (ii) time to market. We are now offloading to plain code the parts the LLM handles that could be done in code, now that we've reached the market (which we have).
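To make the offloading point concrete, here's a minimal sketch of the pattern I mean, assuming a made-up text-extraction feature; the names (extract_total_with_code, llm_client) are purely illustrative, not our actual stack:

    # Hypothetical sketch: a feature shipped first on an LLM call, later
    # migrated to deterministic code once the product was in the market.
    import re
    from typing import Optional

    def extract_total_with_code(invoice_text: str) -> Optional[float]:
        # Deterministic replacement for what the LLM used to handle.
        match = re.search(r"total[:\s]*\$?([\d,]+\.\d{2})", invoice_text, re.IGNORECASE)
        return float(match.group(1).replace(",", "")) if match else None

    def extract_total(invoice_text: str, llm_client) -> float:
        # Prefer the code path; fall back to the LLM for cases code can't handle yet.
        value = extract_total_with_code(invoice_text)
        if value is not None:
            return value
        answer = llm_client.complete(  # llm_client is a stand-in for whatever client you use
            "Return only the invoice total as a number:\n" + invoice_text
        )
        return float(answer.strip())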
> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.
I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right, but isn't exactly. I see all of these, but don't see them as roadblocks — not more than I see human error as a roadblock in many of the cases where these systems I'm thinking about will be going.
With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).
I'll put this another way: just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles, is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?
I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
They're at risk of what? It's easy to hand-wave about disruption, but where's the beef?
_____
The first cars were:
- Loud and unreliable
- Expensive and hard to repair
- Starved for fuel in a world with no gas stations
- Unsuitable for the dirt roads of rural America
_____
Reminds me of Linux in the late 90s. Talking to Solaris, HPUX or NT4 advocates, many were sure Linux was not going to succeed because:
- It didn't support multiple processors
- There was nobody to pay for commercial support
- It didn't support the POSIX standard
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
In the areas where it does make sense to use it, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected to not have agency; if you had GPT-3 level intelligence in an easily anthropomorphizable package (furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without even any real technical progress.
I do suspect this is only achievable because the model was specifically trained for this.
But the same is true for humans; children can't really "reason themselves" into basic arithmetic-- that's a skill that requires considerable training.
I do concede that this (learning/skill acquisition) is something that humans can do "online" (within days/weeks/months) while LLMs need a separate process for it.
> in a strong version of this test I would want nothing related to long multiplication in the training data.
Is this not a bit of a double standard? I think at least 99/100 humans with minimal previous math exposure would utterly fail this test.
The models can do surprisingly large numbers correctly, but they have essentially memorized them. As you make the numbers longer and longer, the result becomes garbage. If they actually reasoned about it, this would not happen; multiplying those long numbers is not really harder than multiplying two-digit numbers, just more time-consuming and annoying.
And I do not want the model to figure multiplication out on its own; I want to provide it with what teachers tell children until they get to long multiplication. The only place I want to push the AI further is to do it for much longer numbers, not just the two, three, or four digits you do in primary school.
And the difference is not only online vs. offline: large language models have almost certainly been trained on heaps of basic mathematics, but did not learn to multiply. They can explain to you how to do it because they have seen countless explanations and examples, but they cannot actually do it themselves.
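To illustrate why longer numbers aren't fundamentally harder, here's a minimal sketch of the schoolbook procedure children are taught; the loop is identical whether the inputs have two digits or two hundred (multiply_long is just my illustration, not anything the models run):

    # Schoolbook long multiplication on decimal strings: the same procedure
    # a child learns, it just runs for more steps as the numbers get longer.
    def multiply_long(a: str, b: str) -> str:
        result = [0] * (len(a) + len(b))              # room for every digit of the product
        for i, da in enumerate(reversed(a)):          # digits of a, least significant first
            carry = 0
            for j, db in enumerate(reversed(b)):      # digits of b
                total = result[i + j] + int(da) * int(db) + carry
                result[i + j] = total % 10            # keep one digit
                carry = total // 10                   # carry the rest
            result[i + len(b)] += carry
        digits = "".join(map(str, reversed(result))).lstrip("0")
        return digits or "0"

    assert multiply_long("12", "34") == "408"
    # Scales to arbitrary length with no new ideas, only more of the same steps:
    assert multiply_long("987654321" * 3, "123456789" * 3) == str(
        int("987654321" * 3) * int("123456789" * 3)
    )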
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
When I wrote "dead end", I meant as a path toward an AI that can properly reason, knows what it knows, and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying caveat that one has to double-check what the model says.
With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
I say this as someone who has worked for 7 years implementing AI research for production, from automated hardware testing to accessibility for nonverbals: I don't think founders need to obsess even more than they do now about implementing AI, especially in the front end.
This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing, they have always been here, now they are just more accessible.
The best AI applications are beneath the surface, empowering users; Jeff Bezos said as much (in 2016!)[1]. You don't see AI as a chatbot on Amazon; you see it in "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."
[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...
There are a lot of great use cases for ML outside of chatbots
Not this time, tho. ChatGPT is the iphone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and said masses. Working with LLMs pre-GPT-3.5 was a mess, hackish and "in the background", but a way, way worse experience overall. ChatGPT made it happen, just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iphone presentation.
The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!
You'll probably laugh at this, but a lot of the fine-tuning experimentation and gains in the open-source world (hell, maybe even at the big labs, but we'll never know) come from the "horny people" using local LLMs for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how many anti-hype tokens get spent on this subject.
If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see an analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there were transferable skills and many went on to work on automobiles?
This Stack Exchange question implies there was some transition rather than chaos.
https://history.stackexchange.com/questions/46866/did-any-ca...
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" shift when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies (I'm talking more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
So maybe your analysis is outdated?
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.
How many of the 3999 companies that didn't actually had any capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.
The history of those is the big untold story here.
It doesn't help if you're betting on the right tech too early.
Clearly superior in theory, but held back by the lack of significant breakthroughs in battery research and the general spottiness of electrification in that era.
Tons of electric vehicle companies existed to promote that comparable tech.
Instead, the handful of combustion engine companies eventually drove everyone else out of the market, not least because gasoline was marketed as more manly.
https://www.theguardian.com/technology/2021/aug/03/lost-hist...
Lots of ideas that failed in the first dotcom boom in the late 1990s are popular and successful today but weren't able to find a market at the time.