
Get the email address for GitHub username

https://www.val.town/x/stevekrouse/github-user-email
1•stevekrouse•56s ago•0 comments

Anthropic studied what gives an AI system its 'personality'

https://www.theverge.com/anthropic/717551/anthropic-research-fellows-ai-personality-claude-sycophantic-evil
1•naves•1m ago•0 comments

Better Alternative for Interview Preparation

https://www.lockedinai.com/
2•lockedinai•3m ago•0 comments

Making Human Videos Useful for Robotics

https://deplaceai.com/articles/motion2text/motion2text
1•pedromilcent•6m ago•0 comments

How Japan is quietly showing the world how to grow without economic growth

https://economictimes.indiatimes.com/news/new-updates/lower-wages-aging-population-but-still-prospering-how-japan-is-quietly-showing-the-world-how-to-grow-without-growth/articleshow/123014379.cms?from=mdr
3•methuselah_in•8m ago•0 comments

Lying increases trust in science, study finds

https://phys.org/news/2025-07-science.html
2•sparrish•9m ago•0 comments

Illumina to Pay $9.8M for Security Vulnerabilities in Genomic Sequencing Systems

https://www.justice.gov/opa/pr/illumina-inc-pay-98m-resolve-false-claims-act-allegations-arising-cybersecurity
2•impish9208•13m ago•0 comments

Trump Deploys Nuclear Submarines in Response to Dmitry Medvedev

https://trumpstruth.org/statuses/32336
5•stefankuehnel•16m ago•2 comments

A Nazi-Obsessed Amateur Historian Went from Obscurity to the Top of Substack

https://www.motherjones.com/politics/2025/07/martyr-made-darryl-cooper-nazi-jews-juggernaut-nihilism-tucker-carlson-joe-rogan-substack/
5•thomassmith65•17m ago•1 comments

5,090 days – The Cook era is now as long as the Jobs Renaissance era

https://birchtree.me/blog/5-090-days/
3•speckx•17m ago•1 comments

Show HN: typed – Markdown app for writers, students, professionals, and creators

https://play.google.com/store/apps/details?id=com.mazzestudios.typed&hl=en_US
2•htcni•19m ago•2 comments

The scary and surprisingly deep rabbit hole of Rust's temporaries

https://taping-memory.dev/temporaries-rabbit-hole/
2•lukastyrychtr•20m ago•0 comments

Router Bugs and Security Vulnerabilities (2021)

https://modemly.com/m1/pulse
3•transpute•22m ago•0 comments

Ask HN: Anyone using llms.txt on blogs? Worth it for AI search?

3•logic_node•25m ago•0 comments

How to Prove False Statements: Practical Attacks on Fiat-Shamir

https://eprint.iacr.org/2025/118
2•belter•25m ago•0 comments

Open‑source tool for async team updates

https://asyncstatus.com
3•heysound•26m ago•1 comments

Decant: Frontmatter-aware framework-agnostic wrapper for static content

https://github.com/benpickles/decant
2•thunderbong•26m ago•0 comments

Ask HN: Who Is Looking for a Cofounder?

10•dontoni•26m ago•0 comments

India to penalize universities with too many retractions

https://www.nature.com/articles/d41586-025-02364-6
2•rntn•26m ago•0 comments

How to Buy a Retail Property: What to Know Before Investing

https://www.loopnet.com/cre-explained/investing/how-to-buy-a-retail-property/
3•mooreds•27m ago•0 comments

What's Driving Your Retirement Account

https://www.nytimes.com/2025/08/01/opinion/economy-stocks-shock-patterns.html
2•mooreds•28m ago•1 comments

I couldn't submit a PR, so I got hired and fixed it myself

https://www.skeptrune.com/posts/doing-the-little-things/
5•skeptrune•28m ago•0 comments

A free online cap table simulator for YC applicants (and other builders)

https://www.captaible.com/
2•jvuillemot•29m ago•0 comments

Derivative Markets: 101

https://www.ice.com/support/education/trading-derivatives
2•mooreds•29m ago•0 comments

Hallmarks of cellular senescence: biology, mechanisms, regulations

https://www.nature.com/articles/s12276-025-01480-7
2•PaulHoule•29m ago•0 comments

Show HN: TraceRoot – Open-Source Agentic Debugging for Distributed Services

https://github.com/traceroot-ai/traceroot
5•xinweihe•29m ago•0 comments

How Filter Pushdown Works

https://materialize.com/blog/how-filter-pushdown-works/
3•Bogdanp•30m ago•0 comments

AI-powered pixels: Introducing Google's Satellite Embedding dataset

https://medium.com/google-earth/ai-powered-pixels-introducing-googles-satellite-embedding-dataset-31744c1f4650
2•instagraham•31m ago•0 comments

Corporation for Public Broadcasting Ceasing Operations

https://cpb.org/pressroom/Corporation-Public-Broadcasting-Addresses-Operations-Following-Loss-Federal-Funding
28•coloneltcb•32m ago•1 comments

Is dental amalgam safe for humans? (2011)

https://pmc.ncbi.nlm.nih.gov/articles/PMC3025977/
3•nateb2022•33m ago•0 comments

A Hitchhiker's Guide to the AI Bubble

https://fluxus.io/article/a-hitchhikers-guide-to-the-ai-bubble
81•dreamfactored•20h ago

Comments

fullstick•19h ago
We're building AI workflows at my company. Yes chatbots, but also more interesting/complex workflows that I won't get into. Let's just say we have the data, expertise, and industry structure to leverage AI in valuable and useful ways.

As an engineer, development still comes down to requirements gathering, solid engineering principles, and the tools we already have at our disposal: network calls, rendering the UI, orchestrating containers and jobs, etc.

All that is to say that I thought AI was going to be sexy, like Westworld, and not so boring...

brokencode•18h ago
Boring is where the money is. Always has been.

Westworld robots are still a long way off, but think about how far we’ve come so quickly.

It’s pretty incredible that natural language computing is now seen as boring when it barely even existed 5 years ago.

bayesic•19h ago
> Sam Altman knew exactly which buttons to push. Congressional testimony about the need for regulation (from the company furthest ahead). Warnings about AI risk. OpenAI's playbook: Build in public, warn about dangers, present yourself as the responsible actor who needs resources to "do it safely."

And this is why Matt Levine calls Sam Altman the greatest business negger of all time

lnkl•19h ago
>Article praising LLMs.

>Look inside.

>Written by someone having a stake in LLM business.

Every time.

EA-3167•18h ago
Hey, at least this one is willing to admit that they aren't building Machine Jesus. That's a start.
upghost•18h ago
"I come to bury Caesar, not to praise him..."

A rhetorical technique as old as dirt, but apparently still effective.

EA-3167•18h ago
"Now let it work. Mischief, thou art afoot, take thou what course thou wilt!"

But seriously, it isn't on me to justify my skepticism of the extreme claim, "We are in a race to build machine super-intelligence" because that skepticism is the rational default. Instead it's the burden of people who claim that we are in fact in that race, just like "self driving next year" was a claim for others to prove, just like "Crypto is the future of money" is a statement requiring a high degree of support.

We've seen this all before, and in the end the argument in favor seems to boil down to, "Look at how much money we're moving around with this hype" and "Trust us, the best is yet to come."

Maybe this time it will.

upghost•18h ago
Just to clarify, I meant the rhetorical technique was being employed by the author of the article. He's downplaying the "AGI race" in order to normalize and validate the byproduct of the hype bubble to be as "normal and reliable as electricity and TCP/IP". It's clearly meant to attempt to disarm and appeal to skeptics, but there is more than enough dog whistling and performative contradiction in there to make it clear the true intention of the article -- praising Caesar.

For the record, I would be more inclined to be sympathetic towards the author if any receipts (i.e., repos) were produced at all, but as you so correctly stated, extraordinary claims require extraordinary evidence.

I agree you do not have burden of defending the author's claims, apologies if that was not clear.

claw-el•18h ago
Alternatively, would someone not having a stake in LLM business have an incentive to disparage LLMs?
almostgotcaught•18h ago
Lol that makes zero sense - not having a stake in something is literally the definition of not having an incentive.
ej88•14h ago
Not having a stake in something currently rocketing up in value is certainly a cause for FOMO and / or incentive to disparage it.
ants_everywhere•18h ago
Lots of people have a stake in disparaging AI. That's why there are so many low quality anti-LLM comments on HN lately
bwfan123•18h ago
right, yeah,

When's the last time you saw management tell you which compiler or toolchain to use to build your code? But now we have CEOs and management dictating how coding should be done.

In the article the author admits: "I started coding again last year. But I hadn't written production code since 2012" and then goes on to say: "While established developers debate whether AI will replace them, these kids are shipping.".

Then I ask myself, what are they selling? And lo and behold, it is AI/ML consulting.

lavarnann•8h ago
Every praise of LLM is invariably preceded by some form of "I don't really understand their output but it looks great". That right there is the strongest signal I've caught so far that the whole thing is just a funny money pyramid.

In Sirens of Titan Vonnegut tells a story where governments decided to boost the space industry to drive aggregate demand.

This is exactly what is happening. When you realize that the whole thing is predicated on building and selling more $100,000 GPUs (and the solution to every problem therein is to use even more GPUs), everything really comes into focus.

esafak•19h ago
At least pick a Douglas Adams book cover!
aeon_ai•19h ago
The author misses the deeper game: if you genuinely believe AGI is imminent, then current economic metrics become meaningless. Why optimize for revenue when the entire concept of scarcity-based economics dissolves?

The $560B for those who believe in AGI isn't about ROI using today's money-in/money-out formula; it's about power positioning for a post-capitalist transition.

Every major player knows that whoever controls the infrastructure once the threshold is crossed might control what comes after.

The "bubble" narrative assumes these actors are optimizing for quarterly returns rather than civilizational leverage.

chromanoid•19h ago
I think the author addresses this, but dismisses it as fantasy, which constitutes the bubble.
zmmmmm•18h ago
The problem with this is that it's entirely evidence free.

I could also say, if you truly believe nuclear fusion is imminent we will have infinite free energy and all current economic metrics are meaningless. But there is no nuclear fusion bubble. Why not? Because people don't believe nuclear fusion is imminent. But for some reason they do believe AGI is imminent - despite there being no actual evidence of that. There is probably less understanding of what is needed to close the gap to true AGI than there is to close the gap to make nuclear fusion possible.

The only distinction here is what people are willing to "believe" based on pure conjecture - which is why I class it as a true bubble.

teeray•18h ago
> The only distinction here is what people are willing to "believe" based on pure conjecture

It’s a religion. Repent now, the AGI is coming.

giantg2•18h ago
"The problem with this is that it's entirely evidence free."

That's more or less true for predicting any new financial trend.

If AI is making devs 20-30% more efficient, then you could invest in tech stocks if you think they can ship as much with lower overhead. The financial metrics look better if that's true.

asdev•19h ago
The biggest issue with AI isn't AI itself, but the fact that it seemingly "saved" an overinflated economy. Economy needs a deep reset with high rates for longer and the AI narrative is just kicking the can down the road
PessimalDecimal•16h ago
This seems like the point of the hype and why it has so much traction.

I suspect this hype cycle won't end until a new one forms, whether technology or some catastrophic event (disease, war) changes focus and allows the same delaying tactics.

lavarnann•7h ago
Every disaster, man-made or not, will be used to drive shock therapy.

Look at stock prices trajectory before and after COVID.

When this bubble bursts, the ensuing chaos will be used in a similar manner.

chromanoid•19h ago
Great article! I share the experience mentioned in the article: LLMs facilitate a head-on interaction with any topic. It is similar to instructional YouTube videos (which imo were already transformative) but with the ability to ask detailed questions. And this is what gets better with each iteration. When creative communities finally settle on generative AI there will be not just a plethora of AI slop, but a lot of highly creative, never-before-seen content. It might lead to a new golden age of indie low-budget movie productions.
exasperaited•18h ago
There’s already a new golden age of indie low budget movies. Those guys will not use AI to generate significant parts of their content, because it defeats the point of making an indie movie at all.

I never cease to be shocked at how little tech people think of what creative people do and why they do it.

chromanoid•18h ago
You seem to have an agenda here. I am sure there are many, many visions of special effects and story arcs that could never be realized for lack of the means to pull them off. This will change now. Green screens and sophisticated SFX tech will no longer be necessary to create fantastical images. You may call these kinds of movies low-brow entertainment, but I am very curious to see indie movie interpretations of my favorite LitRPG books.
exasperaited•17h ago
An agenda? I just give a shit about the creative people (indie filmmakers, photographers, artists, actors, models) I know, and I fail to see what AI brings them that their creativity does not already; special effects are such a tiny part of filmmaking, for example.

I don’t mean to say I don’t think there are any uses but I think the main misunderstanding here is that what holds indie filmmakers back isn’t access to technology, generally.

chromanoid•17h ago
Well, at least I assume that currently indie movies are also somewhat defined by budget and technical limitations. With GenAI you will be able to film an action scene with your smartphone in an empty warehouse that will later look like an authentic full street in medieval Baghdad. GenAI will remove constraints. Constraints that may have led to creativity by themselves, but those constraints also led to constraints in audience and artistic outcome. Imagination will be the limit. And I don't think we will need labels like "organic" to make collaborative efforts with actual actors more accepted than AI-only productions, because good actors bring more to the table than just their face and stature.
exasperaited•4h ago
> GenAI will remove constraints.

I think this really hits on the difference in our understanding because constraints are what cause actual creativity and art to happen.

A lack of constraints is why big-budget movies are so tedious. Lower budget movies are better because of their constraints.

Peritract•4h ago
Jaws is an iconic horror film partly because the mechanical shark kept breaking, forcing them to do more with atmosphere and less with animatronics.
exasperaited•3h ago
Yes.

A more obvious example is The Blair Witch Project, which cost less than a million dollars even after all the marketing was done (and cost essentially nothing to make).

The original Halloween was a very low-budget movie considering how long it took to shoot.

Vin Diesel's career was established by his own movie, Strays, which cost less than $50K. Which is zero budget, essentially, for a film that opened at Sundance.

Away from films there are many, many examples of massively popular albums and songs that were made essentially for nothing off the back of simple constraints and creativity.

In the long run, the only way artists will use AI effectively is by deciding on constraints that limit its use.

Because as soon as you don't limit its use, anyone can do what you can do.

So I tend towards thinking that AI won't really move the needle in terms of human creativity. It may reframe it. But nobody is going to be liberated creatively by it.

Tech people, I suspect, tend to assume that AI brings "full creative freedom" to artists the same way a patron does when they say "you can have full creative freedom".

It's not the same kind of freedom.

chromanoid•2h ago
I totally see that. But I think it's time for new constraints that are less tied to money and more to the imagination of the creators.

It will hopefully lead to a democratization of previously expensive settings (e.g. historic, fantastical, large scale events) etc. Many indie movies still have huge budgets and need some kind of sponsor. Now we will hopefully see a wonderful mix of hobbyist, semi-professional and professional fully independent setups that tell stories without worrying about financial risks that are connected to certain forms of artistic expression.

I don't think it is helpful to gatekeep movie making with arbitrary requirements regarding AI usage, nor do I believe that the dependence on patrons or state sponsorship that is prevalent in indie movie making is a good thing, given the current neo-feudal and authoritarian currents.

exasperaited•18m ago
> I don't think it is helpful to gatekeep movie making with arbitrary requirements regarding AI usage

I am not gatekeeping at all; I don't understand this argument that this could ever be perceived as gatekeeping. I'm just saying that in my own experience, indie creators tend to perceive generative AI as bullshit, not as liberation.

Artists who tell you that AI is not helping art are not gatekeeping either.

chromanoid•12m ago
Of course, but AI shouldn't hinder art either. I can understand the sentiment, but if GenAI can help somewhere, it is creative endeavors.
ants_everywhere•18h ago
Of course they will.

The world is full of creative people and some of them will make movies with AI. Those are indie film makers.

handbanana_•19h ago
>While established developers debate whether AI will replace them, these kids are shipping. Developers who learned their craft in the age of pull requests and sprint planning sneer at their security failures, not realizing that 'best practices' are about to flip again. The barbarians aren't at the gate. They're deploying to production.

Shipping where? What production? What kids? I've yet to see this. I see the tools everywhere, but not anything built with them. You'd think it would be getting yelled about from the mountaintops, but I'm still waiting.

worldsayshi•18h ago
What would qualify as proof? If somebody builds a good product and ships it it will just look like a good product. People will call it vibe coded slop when it fails spectacularly.
layer8•18h ago
If there isn't a strong uptick in the general quality and usefulness of software within the next couple of years, then it's not clear what AI coding/design is actually buying us. Other than possibly some cost reduction, but it would be optimistic to assume that the savings go to the users and not to big tech. Regardless, the proof will be in the pudding.
handbanana_•17h ago
I mean, they would provide it--you would think this is something the AI coding businesses would be highlighting. "Here's an app tens of thousands use every day, built with our AI tools!"

Heck they did it with languages for the longest time. Here's twitter, we built it on Rails, everyone use Rails! Facebook, built on PHP, everyone use PHP! Feels weird that if these AI tools are doing all this work that no one is showing it off.

bwfan123•18h ago
I think what has happened is the following:

A whole bunch of folks got into management thinking coding is beneath them, they are now wielding the power - let the code-monkeys do the typing. Then, turns out, coders are continuing to call the shots, and the management folks have coder-envy.

Now, with LLMs, coding is again not only within management's reach, but they think it is trivial, and it can be outsourced to the LLM code-monkeys, and management has regained power from the pesky coder-class.

So, you have management of all stripes "shipping" things, and dictating what coders should do - not realizing that they should stay in their lanes, and let coders decide for themselves what works best in their craft.

0xcafefood•15h ago
This is a really interesting point. Managers are the _only_ people I've heard say things like "it's only a matter of time till all coding interviews are just 'write a prompt to...'" or "soon all coding will just be LLMs writing machine code directly."

It's struck me as odd that managers of software engineers would seek to negate the field of software development almost completely. But maybe you're onto something.

gargalatas•18h ago
I totally agree with the author. Not even the smartphone or the iPhone brought such a sudden change to so many people, and in many cases for free. I know we want to oppose this huge thing just because it doesn't make sense morally, but once you learn to use this tool there is no way back. Just imagine what is coming in the next 5-10 years. Even if the tools remain at the same level as today, people have learned to use them so well that every sector, every industry will speed up tremendously. We will see great new products and ideas emerging. Just can't wait for the revolution.
doctorwho42•18h ago
Or we will see people become too dependent on it to see the forest for the trees on countless problems throughout the systems of society and business.
upghost•18h ago
> Within weeks I built a serverless system processing 5 million social media posts daily, tracking topic clusters and emerging narratives in real-time. Then brand monitoring dashboards. Then a "robojournalist" that could deep-dive any trending story. Then hardware and firmware specs for a coffee machine. Then my first mobile app.

I call bullshit. Let's see some repos.

AznHisoka•18h ago
5 million social media posts is like <1% of all the posts out there. It's just a weekend project
fourthark•17h ago
This is all stuff for private use. It's credible that the author built these things with LLMs, not that they are secure or robust.
zmmmmm•18h ago
I think a key point from this article that I agree strongly with is the simple point that it is crucial that everyone recognise we are currently in an AI bubble.

I often find people contest this with the non-sequitur of "No, it's not a bubble, there is real value there. We are building things with it". The fact there is real value in the technology does not contradict in any way that we are in a bubble. It may even be supporting evidence for it. Compare with the dot com bubble : nobody would tell you there was no value in the internet. But it was still a bubble. A massive hyper inflated bubble. And when it popped, it left large swathes of the industry devastated even while a residual set of companies were left to carry on and build the "real" eventual internet based reworking of the entire economy which took 10 - 15 years.

People would be well advised to have a look at this point in time at who survived the dot com bubble and why.

worldsayshi•18h ago
Once there's a consensus around a bubble the bubble has already burst?
asdev•18h ago
everyone does NOT recognize it, just go on Twitter if you don't think so
entropsilk•18h ago
The fact everyone thinks we are in an AI bubble is practically proof we are not in an AI bubble.

The crowd is always wrong on these things. Just like everyone "knew" we were going into a deep recession sometime in late 2022, early 2023. The crowd has an incredibly short memory too.

What it means is that people are really cautious about AI. That is not a self reinforcing, fear of missing out, explosive process bubble. That is a classic bull market climbing a wall of worry.

layer8•18h ago
But it's not true that everyone thinks we are in an AI bubble.
UncleOxidant•12h ago
Most people don't think they're in a bubble until it starts to pop.
layer8•6h ago
So you seem to be opposing entropsilk's argument, same as I did.
PessimalDecimal•18h ago
There is absolutely FOMO. It's even being deliberately stoked. "AI won't take your job. People using AI will." This is this hype cycle's "have fun being poor."
dustingetz•18h ago
technical ICs actually trialing the AI tools think we’re in a bubble. Executives, boards, directors and managers are still tumbling head over heels down the mountain in a race to shovel more money into the fire, because their engineering orgs are not delivering results and they are desperate to find a solution
stego-tech•18h ago
This. Highly competent technical ICs in my circles continue to (metaphorically) scream at their Juniors submitting AI slop and being unable to describe what it's doing, why it's doing it that way, or how they could optimize it further, since all management cares about is "that it works".

Current models excel because of the corpus of the open internet they (stole from) built off of. New languages aren't likely to see as consistent results as old ones simply because these pattern matchers are trained on past history and not new information (see Rust vs C). I think the fact nobody's minting billions turning LLMs into trading bots should be pretty telling in that regard, since finance is a blend of relying on old data for models and intuiting new patterns from fresh data - in other words, directly targeting the weak points of LLMs specifically (inability to adapt to real-time data streams over the long haul).

AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that. Rather, we're at a crossroads like you say: stakeholders want more money and higher returns (which AI promises), while the people doing the actual work are trying to highlight that internal strife and politics are the holdups, not a lack of brute-force AI. Meanwhile both sides are trying to rattle the proverbial prison bars over the threats to employment real AI will pose (and the threats current LLMs pose to society writ large), but the booster side's actions (e.g., donating to far-right candidates that oppose the very social reforms AI CEOs claim are needed) betray their real motives: more money, fewer workers, more power.

rightbyte•18h ago
> AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that.

Is this the consensus on nomenclature? I thought "AI doomers" meant people who think some dystopia will come of it. In that case I've read a lot of text wrong.

stego-tech•15h ago
At this point, my perspective is that the bubble talk has effectively boiled viewpoints into booster or doomer camps based solely on one’s buy-in of the argument these companies have created actual intelligence that can wholesale replace humans. There doesn’t seem to be much room for nuance at the moment, as the proverbial battle lines have been drawn by the loudest voices on either side.
giantg2•18h ago
We might be in a bull market. The question is for how long. I would guess less than a year considering the market-wide P/E.
ryoshu•18h ago
Not really. Worked through the dotcom bubble. It was obvious to some people on the ground doing the work. It was obvious to some execs who took advantage of it. Feels similar. Especially if you are burning through tokens on Gemini CLI and Claude Code where the spend doesn’t match the outcomes.
DrewADesign•17h ago
I saw someone earnestly say that a business model with potential to generate actual revenue was no longer relevant, and companies need only generate enough excitement to draw investors to be successful because “the rules have changed.” At that moment, I saw that telltale soapy iridescent sheen. I’ve heard that before.

I’m worried that the US knowledge industries jumped the shark in the teens and have been living off hopeful investors assuming the next equivalent of the SaaS revolution is right around the corner, and AI for whatever reason just won’t change things that much, or if it does, the US tech industry will fumble it, assuming their resources and reputations will insulate them from the competition, just like the tech giants of the 90s vs Internet startups. If that’s true, some industries like biotech will still do fine, but the trajectory of the tech sector, generally, will start looking like that of the manufacturing sector in the 90s.

cj•18h ago
I agree, although bubbles don’t always have to pop in huge ways like it did in the dot com crash.

E.g. crypto displayed many, many characteristics of a bubble for a number of years, but the crypto bubble seems like it has just slowly stopped growing and slowly stopped getting larger, rather than popping in a fantastical way. (Not to say it still can’t, of course)

Then again, this bubble is different in that it has engulfed the entire US economy (including public companies, which is the scary part since the damage potential isn’t limited to private investors). If there’s even a 10% chance of it popping, that’s incredibly frightening.

xsmasher•16h ago
Cryptocurrencies have survived and thrived, but anyone who went all-in on NFTs or blockchain gaming (or anything other than currency on the blockchain?) has been zeroed out.
libraryofbabel•16h ago
I think this is a really insightful point. Even if we are in a bubble now, in the sense that current LLM technology (impressive though it is) does not quite live up to the huge valuations of AI companies, there is a plausible future in which we get enough technological progress in the next few years that the bubble never really pops and we are able to morph into a new AI-driven economy without a crash. There are probably good historical examples of this happening with other technologies, although it’s hard to identify them because in retrospect it looks like the optimists invested rationally, even though their bets maybe weren’t all that justified at the time.

I personally think a crash is more likely than not, but I think we should not assume that history will follow a particular pattern like the dot com bust. There are a variety of ways this can go and anyone who tells you they know how it’s all going to shake out is either guessing or trying to sell you something.

It is for sure an interesting time to be in the industry. We’ll be able to tell the next generation a lot of stories.

zmmmmm•14h ago
It's a good analysis.

For me the big concern is really the level of detachment from reality that I'm seeing around time scales. People in the startup world seem to utterly fail to appreciate the complexity of changing business processes - for any type of change, let alone for an immature tech where there are still fundamental unsolved problems. The only way for the value of AI to be realised is for large-scale business adoption to happen, and that is simply not achievable in the 2 years of runway most of these companies seem to be on.

lavarnann•9h ago
> The crypto bubble seems like it has just slowly stopped growing and slowly stopped getting larger

Bitcoin is now worth 2.3 trillion dollars. The price graph looks like a hockey stick. For tokens in a self contained ledger system.

You may be conflating hype and bubble.

digitcatphd•18h ago
Agreed. I think most of the arguments are premised on AI getting infinitely better for some reason, without anyone outlining clear arguments addressing the inherent architectural limitations of today's LLMs. I have been in this space since 2021 and honestly, besides maybe voice and Gemini Deep Research, things aren't much better than GPT-4.
time0ut•18h ago
The article resonates. AI coding assistants are cool and fun to use, but they just help solve a solved problem faster. The really exciting thing is exploring the new problems we can solve with this tool. It has really reignited my passion for building.
hnthrow90348765•18h ago
Businesses realizing a lot of their problems are already solved will be of great help to developers.

I'm extremely tired of bespoke solutions when OTS or already-known would work just fine.

coarise•18h ago
The entire article revolves around the premise that AGI will not be achieved, which is unjustified, making reading the article a waste of time.
lgleason•17h ago
Have we made some significant advancements similar to what happened with the Internet back in the 90's with HTML? Yes. But, this is a bubble we are currently in just like the .com bubble with lots of irrational exuberance.

That said, the job market is not as crazy as it was during .com; in fact, most technologists are finding it more difficult to find work at the moment. Most of this AI hype started when the employment market started to slow down. Usually these bubbles pop after the employment market goes crazy, and employment starts to go nuts when crazy money enters the picture. So if, for example, the Fed really starts to cut rates and/or investment starts to really pick up and we have another boom period, the tail end of that seems historically to be when the bubbles pop.

Put another way, there is a good chance that the bubble will continue to inflate for a few years before it pops.

UncleOxidant•12h ago
> Usually these bubbles pop after the employment market goes crazy. The employment starts to go nuts

Meta has been offering 7-figure salaries for AI talent. This is a very different bubble from the .com bubble. The hiring frenzy this time is limited to a very small group of people with unique skills/experience that few possess. At the same time, thousands of other people are being let go in order to pay those big salaries to a few people (and in order to buy more GPUs). The C-suite has become obsessed with the idea that they're going to need far fewer engineers, and they're hiring/firing like it.

stego-tech•17h ago
Not a bad position to take, and very similar to my personal one (that gets immediately conflated as "LOL AI DOOMER" by the AI Booster Club): yes, this is a bubble, and yes, it will eventually pop, but the tools won't go away. What's been democratized isn't the entirety of human skills, but the narrow field of custom ML-based tooling, and that's going to change quite a lot in the decades ahead as people utilize them in unexpectedly novel ways.

It'll never be AGI or superintelligence, it won't create or cause the singularity, and it'll never be a substitute for learning, practicing, and honing skills into mastery. For the fields LLMs do displace in part or in whole, I still expect it'll largely displace the mediocre or the barely-passable, not the competent or experts. Those experts will, once the bubble pops and the hype train derails, find the novel and transformative uses for LLMs outside of building moats for big enterprises or vamping for investor capital.

I especially enjoy the on-prem/locally-run angle, as I think that is where much of the transformation will occur - in places like homes, small offices, or private datacenters where a GPU or two can accelerate novel tasks for the entity using it, without divulging data to corporate entities or outright competitors. Inference is cheap, and a modest gaming GPU or AI accelerator can easily support 99.9% of individual use cases offline, with the right supporting infrastructure (which is improving daily!).

All in all, an excellent post.

stillpointlab•17h ago
This is a good article, but it has a flaw that I keep seeing in these. Articles like this say "I built this app, and that app, and another app, and another one". Ok, let's see them. Are they any good? Please post the github link, or link to the webpage.

I'm reminded of the motto of the Royal Society: Nullius in verba.

ankit219•17h ago
Somewhat related to the article, but mostly anecdotal. In SF, I have had chats with (~15) engineers who, after some prodding, admitted that they feel the whole AGI thing is passing them by (not that it is close). In a sense they want to be doing something deeper, build/research something that is more than an API call (paraphrasing, not disparaging making API calls), and want to build where the action is (read: train models or be at the forefront). I understand you need a specific skillset to be in that position; it's just slightly off-putting that to do any meaningful work in this field you need a lot of compute. I understand they raised funding and whatnot, yet they want something more than what they are working on. I am not sure of the solution, but the cause sure seems to be the hype that is being created currently.
commenter711•6h ago
Slop-esque article written by someone with a clear bias. At least the linked articles were a nice read...