We use so much AI in production every day, but nobody notices, because as soon as a technology becomes useful we stop calling it AI. Then it's suddenly "just face recognition" or "just product recommendations" or "just [plane] autopilot" or "just adaptive cruise control," etc.
[0] https://news.ycombinator.com/item?id=44207603

"AI is things we currently think are hard to do with computers."

"AI is things that are currently easier for humans than computers."
> Although most financiers avoided "artificial intelligence" firms in the early 1990s, several successful firms have incorporated core AI technologies into their products. They may call them intelligence applications or knowledge management systems, or they may focus on the solution, such as customer relationship management, like Pegasystems, or email management, as in the case of Kana Communications. The former expert systems companies, described in Table 6.1, are mostly applying their expert system technology to a particular area, such as network management or electronic customer service. All of these firms today show promise in providing solutions to real problems.
In other words, once a product robustly solves a real customer problem, it is no longer thought of as "AI," despite utilizing technologies commonly thought of as "artificial intelligence" in their contemporary eras (e.g. expert systems in the 80s/90s, statistical machine learning in the 2000s, artificial neural nets from the 2010s onwards). Today, nobody thinks of expert systems as AI; it's just a decision tree. A kernel support vector machine is just a supervised binary classifier. And so on.

The article is re-evaluating that prior reality, but it isn't making the point that successful AI stops being considered AI. In the part you quote, it's merely pointing out that AI technology isn't always marketed as such, due to the negative connotation "AI" had acquired.
Some adaptive cruise control systems certainly are considered AI (e.g. ones that use cameras for emergency braking or lane-keep assist).
The line can be fuzzy. For instance, are solvers of optimization problems in operations research "AI"? If you had told people in the 1930s that, within a decade, computers would be used by shipping companies to optimally schedule and route packages, or by militaries to organize wartime logistics at massive scale, many would certainly have considered that some form of intelligence.
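For a sense of what those solvers actually do, here is a toy transportation problem solved with scipy's linear programming routine. The depots, cities, costs, and capacities are all invented for illustration; real OR models are vastly larger but have the same shape.

```python
# Toy "operations research" problem: ship packages from two depots
# to two cities at minimum cost. All numbers are invented.
from scipy.optimize import linprog

# Cost per package, flattened as [A->X, A->Y, B->X, B->Y].
cost = [4, 6, 5, 3]

# Capacity: each depot can ship at most 70 packages in total.
A_ub = [[1, 1, 0, 0],   # everything leaving depot A
        [0, 0, 1, 1]]   # everything leaving depot B
b_ub = [70, 70]

# Demand: city X needs 50 packages, city Y needs 60.
A_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [50, 60]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x)    # optimal shipment plan
print(res.fun)  # total cost of that plan
```

There is no learning in there, just linear algebra, which is exactly why nobody calls it AI anymore.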
So you’re right that some of these things aren’t AI now. But they were called that at the start of development.
And every AI product in existence is the same. Map navigation, search engine ranking, even register allocation and query planning.
Thus they are not AI; they're algorithms.
The frontier is constantly moving.
> All sciences are just highly abstracted physics
Where is the line where they stop being algorithms?
When people think of AI, they think of robots that can think like us: that can solve arbitrary problems, plan for the future, reason logically, and so on, all in an autonomous fashion.
That's always been true. So the goalposts haven't really moved; instead it's a continuous cycle of hype, understanding, disappointment, and acceptance. Every time a computer exhibits a new human-like capability, such as recognizing faces, we wonder whether this is what the start of AGI looks like, and unfortunately that hasn't been the case so far.
AGI is what people think of when they hear AI. AI is a bastardized term that people use to justify, hype, or sell their research, businesses, or products.
The reason "AI" stops being AI once it becomes mainstream is that people figure out that it's not AI once they see the limitations of whatever the latest iteration is.
The author of that whinge thinks that what we all wanted from Artificial Intelligence was a Haar cascade or a chess min-maxer? That was the dream all along? The author thinks that talking about intelligence any more is, what, "unfair" now? What are they even whining about?
Because the computers of yesteryear were slow enough that winning a simple board game was their limit, you can’t talk about what’s next!
And that's to put aside the face recognition that Google put out which classified dark-skinned humans as gorillas, not because it was making a value judgement about race but because it had no understanding of the picture or the text label. Or the product recommendation engines which recommend another hundred refrigerators after you just bought one, and the engineers on the internet who defend that by saying it genuinely is the most effective advert to show, while calling those systems "intelligent" just because they are new. Putting a radar on a car lets it follow the car in front at a distance because there is a computer connecting the radar, engine, and brakes, not because the car has gained an understanding of what distance and crashing are.
A hundred years ago tap water was a luxury. Fifty years ago drinkable tap water was a luxury. Do we constantly have to keep hearing that we can't call anything today a "luxury" because "luxury" was already achieved in the past?
The moving goalposts come from those hyping up each phase of AI as a sign that AGI is right around the corner, who then get pushback on that.
An "artificial intelligence" is no more intelligent than an "artificial flower" is a flower. Making it into a more convincing simulacrum, or expanding the range of roles where it can adequately substitute for the real thing (or even vastly outperform the real thing), is not reifying it. Thankfully, we don't make the same linguistic mistake with "artificial sweeteners"; they do in fact sweeten, but I would have the same complaint if we said "artificial sugars" instead.
The point of the Turing test and all the other contemporary discourse was never to establish a standard to determine whether a computing system could "think" or be "intelligent"; it was to establish that this is the wrong question. Intelligence is tightly coupled to volition and self-awareness (and expressing self-awareness in text does not demonstrate self-awareness; a book titled "I Am A Book" is not self-aware).
No, I cannot rigorously prove that humans (or other life) are intelligent by this standard. It's an axiom that emerges from my own experience of observing my thoughts. I think, therefore I think.
There's also the effect of "machine learning" being used imprecisely, so that it inhabits a squishy middle between "computational statistics" and "AI."
Also, the section on hype is informative, but I really do see a difference this time around (writing this from peak hype, of course). I'm funding $1000 of Claude Code Opus 4 for my top developers over the course of this month, and I really do expect to get more than $1000 worth of additional work output. That probably scales to $1000/dev before we hit diminishing returns.
It would be fun to write a 2029 version of this, assuming we see a crash like the one around '87, but around '27 instead. What would the possible stumbling reasons be this time around?
Two unknowns: the true non-VC-subsidized cost, and the asymptotic effects of increasing code output on maintenance of that code. There are also second-order effects of pipelines of senior engineers drying up and becoming expensive. Chances are that, with widespread long-term adoption, we'll see 90% of costs going to fixing the 10% or 1% of problems that are expensive and difficult to avoid with LLMs and expensive to hire humans for. There's always a new equilibrium.
There are three key parallels that I see applying to today's AI companies:
1. Tech vs. business mismatch. The author points out that AI companies were (and are) run by tech folks, not business folks. The emphasis on the glory of the tech doesn't always translate to effective results for their business customers.
2. Underestimating the implementation moat. The old expert systems and LLMs have one thing in common: they're both a tremendous amount of work to integrate into an existing system. Putting a chat box on your app isn't AI. Real utility involves specialized RAG software and domain knowledge. Your customers have the knowledge, but can they write that software? Without it, your LLM is just a chatbot. (A toy retrieval sketch follows this list.)
3. Failing to allow for compute costs. The hardware costs to run expert systems were prohibitive, but LLMs pose an entirely different problem. Every single interaction with them has a cost, on both input and output. It would be easy for a flat-rate customer to use a lot of LLM time that you'll be paying for. It's not fixed costs amortized over the user base, like we used to have, and many companies' business models won't be able to absorb that variation. (A back-of-the-envelope cost sketch also follows.)
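To make point 2 concrete, here is roughly the shape of the retrieval glue involved, as a toy sketch: a bag-of-words similarity stands in for a real embedding model, call_llm() is a stub rather than any vendor's API, and the documents and names are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude bag-of-words "embedding"; a real system would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The domain knowledge your customer has and you, the vendor, do not.
documents = [
    "Refund requests over $500 require manager approval within 48 hours.",
    "Enterprise contracts renew automatically unless cancelled 30 days prior.",
]

def call_llm(prompt: str) -> str:
    return "<model response here>"  # stub; swap in whatever model you actually use

def answer(question: str) -> str:
    q = embed(question)
    best_doc = max(documents, key=lambda d: cosine(q, embed(d)))
    prompt = (f"Context:\n{best_doc}\n\n"
              f"Question: {question}\nAnswer using only the context above.")
    return call_llm(prompt)

print(answer("Who has to sign off on a $700 refund?"))
```

The hard part isn't this glue; it's collecting the documents, keeping them current, and knowing which ones matter, and that knowledge sits with the customer.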
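And for point 3, a back-of-the-envelope sketch of why per-interaction pricing breaks flat-rate business models. The per-token prices and usage figures below are assumptions, not any vendor's actual rates.

```python
# Assumed API prices (USD per million tokens); real rates vary by model and vendor.
PRICE_PER_1M_INPUT = 3.00
PRICE_PER_1M_OUTPUT = 15.00

def interaction_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request/response round trip."""
    return (input_tokens / 1e6) * PRICE_PER_1M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT

# A heavy user on a hypothetical $20/month flat plan:
# 200 chats a day, each sending a 2,000-token context and getting a 500-token reply.
monthly_cost = 30 * 200 * interaction_cost(2_000, 500)
print(f"${monthly_cost:.2f}/month in API spend")  # roughly $81 against $20 of revenue
```

The marginal cost scales with usage rather than with headcount, which is exactly the variation a flat-rate model can't absorb.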
Hype is fun. When you see the limits of a technology it often becomes boring even if it’s still amazing.
"Matchbox Educable Noughts and Crosses Engine" - https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_...
"The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961. It was designed to play human opponents in games of noughts and crosses (tic-tac-toe) by returning a move for any given state of play and to refine its strategy through reinforcement learning. This was one of the first types of artificial intelligence."
But customers of the AI startups very much wanted more mundane solutions, which the startups would then pivot to delivering.
(For example, you do a startup to build AI systems to do X, and a database system is incidental to that; it turns out the B2B customers wanted that database system, for non-AI reasons.)
So a grad student remarked about AI startups, "First thing you do, throw out the AI."
Which was an awkward thing for students working on AI to say to each other.
But it was a little too early for deep learning or transformers. And the gold rush at the time was for Web/Internet.
That's because it's the same mechanism at play. When people can't explain the underlying algorithm, they can't say when the algorithm will work and when it won't. In computer systems, one of the truisms is that for the same inputs, a known algorithm produces the same outputs. If you don't get the same outputs, you don't understand all of the inputs.
But that helps set your expectations for a technology.
Coincidentally, I can run local LLMs (7900 XTX, 24 GB), but I almost never want to, because the output of raw LLMs is trash.