
LLMs Are Great, but They're Not Everything

4•procha•11mo ago
Three years after ChatGPT’s release, LLMs are in everything—demos, strategies, and visions of AGI. But from my observer’s perspective, the assumptions we’re making about what LLMs can do seem to be drifting from architectural reality.

LLMs are amazing at unstructured information—synthesizing, summarizing, reasoning loosely across large corpora. But they are not built for deterministic workflows or structured multi-step logic. Yet many of today's most hyped AI use cases are sold as exactly that.

Architecture Matters

We often conflate different AI paradigms:

    LLMs (Transformers): Predict token sequences based on context. Great with language, poor with state, goal-tracking, or structured tool execution.

    Symbolic AI / State Machines: Rigid logic, excellent for workflows—bad at fuzziness or ambiguity.

    Reinforcement Learning (RL): Optimizes behavior over time via feedback, good for planning and adaptation, harder to scale and train.

Each of these paradigms has a domain where it excels. The confusion arises when we treat one as universally applicable. Right now, we're pushing LLMs into business-critical automation roles where deterministic control matters—and they often struggle.
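To make the contrast concrete, here is a minimal sketch of what "deterministic control" means in the state-machine sense: every transition is explicit, and illegal moves fail loudly instead of drifting. The workflow, states, and events are hypothetical examples, not any real system.

```python
# Minimal sketch: a deterministic state machine for a refund workflow.
# All states and events here are illustrative, not from a real product.

TRANSITIONS = {
    ("received", "validate"): "validated",
    ("validated", "approve"): "approved",
    ("validated", "reject"): "rejected",
    ("approved", "pay_out"): "closed",
}

def step(state: str, event: str) -> str:
    """Advance the workflow; an invalid move raises instead of guessing."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")

state = "received"
for event in ["validate", "approve", "pay_out"]:
    state = step(state, event)
print(state)  # closed
```

A token predictor can approximate this table most of the time; the state machine guarantees it all of the time, which is the property business-critical automation actually needs.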

Agentic Frameworks: A Workaround, Not a Solution

Agentic frameworks have become popular: LLMs coordinating with other LLMs in roles like planner, executor, supervisor. But in many cases, this just masks a core limitation: tool calling and orchestration are brittle. When a single agent struggles to choose correctly from 5 tools, giving 10 tools to 2 agents doesn't solve the problem; it just moves the bottleneck.

Supervising a growing number of agents becomes exponentially harder, especially without persistent memory or shared state. At some point, these setups feel less like robust systems and more like committee members hallucinating their way through vague job descriptions.
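The brittleness compounds. A back-of-envelope sketch, under the simplifying assumption that each tool choice is an independent event with per-step accuracy p, shows why long agent chains degrade even when each step looks reliable:

```python
# Back-of-envelope sketch: if each tool choice succeeds with probability p
# and steps are independent, an n-step chain succeeds with probability p**n.
# The numbers are illustrative assumptions, not measurements.

def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed."""
    return p ** n

for n in (1, 5, 10, 20):
    print(n, round(chain_success(0.95, n), 3))
# Even at 95% accuracy per step, a 20-step chain succeeds ~36% of the time.
```

Real agents retry and self-correct, so this is pessimistic in some ways and optimistic in others (errors are rarely independent), but the shape of the curve is the point: stacking more probabilistic steps multiplies failure modes rather than canceling them.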

The Demo Trap

A lot of what gets shown in product demos—“AI agents booking travel, updating CRMs, diagnosing errors”—doesn’t hold up in production. Tools get misused, calls fail, edge cases break flows. The issue isn’t that LLMs are bad; it’s that language prediction is not a process engine.
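One common mitigation is to never execute a model-proposed tool call directly, but to validate it against a schema first. A minimal sketch, assuming a hypothetical JSON call format and tool registry (not any specific framework's API):

```python
# Sketch of a guardrail layer: validate a model-proposed tool call before
# executing it. The tool names and call format are hypothetical examples.
import json

TOOLS = {
    "update_crm": {"required": {"contact_id", "field", "value"}},
    "book_travel": {"required": {"origin", "destination", "date"}},
}

def validate_call(raw: str):
    """Parse a proposed call; return (tool, args) or raise ValueError."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"not valid JSON: {e}")
    tool = call.get("tool")
    if tool not in TOOLS:
        raise ValueError(f"unknown tool: {tool!r}")
    missing = TOOLS[tool]["required"] - set(call.get("args", {}))
    if missing:
        raise ValueError(f"{tool}: missing args {sorted(missing)}")
    return tool, call["args"]
```

This kind of layer catches malformed calls, but it also illustrates the post's point: the deterministic reliability lives in the validation code, not in the model.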

If even humans struggle to execute complex logic reliably, expecting LLMs to replace structured automation is not vision; it’s optimism bias.

On the Silence of Those Who Know Better

What’s most puzzling is the silence of those who could say this clearly: the lab founders, the highly respected researchers, the already-rich executives. These are people who know that LLMs aren’t general agents. They have nothing to lose by telling the truth and everything to gain by being remembered as honest stewards.

Instead, they mostly play along. The AGI narrative rolls forward. Caution is reframed as doubt. Realistic planning becomes an obstacle to growth.

I get it: markets, momentum, investor expectations. But still, it’s hard not to feel that something more ethical and lasting is being passed over in favor of short-term shine.

A Final Thought

I might be wrong—but it’s hard to ignore the widening gap between what LLMs are and what C-level execs and investors want them to be. Engineering teams are under pressure to deliver the Hollywood dream, but that dream often doesn’t materialize. Meanwhile, sunk costs pile up, and the clock keeps ticking. This isn’t pessimism; it’s recognizing that hype has gravity and reality has limits. I’d love to be proven wrong and would happily jump on the beautiful AI hype train if it ever truly arrives.

Comments

designorbit•11mo ago
Love this perspective. You nailed the core issue: LLMs ≠ process engines. And agentic frameworks stacking roles often end up masking fragility instead of fixing it.

One thing I’ve been exploring is this middle ground—what if we stop treating LLMs as process executors, and instead make them contextual participants powered by structured, external memory + state layers?

I’m building Recallio as a plug-and-play memory API exactly for this gap: letting agents/apps access persistent, scoped memory without duct-taping vector DBs and custom orchestration every time.

Totally agree the dream won’t materialize through token prediction alone—but maybe it does if we reconnect LLMs with better state + memory infra.
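[For readers unfamiliar with the "scoped memory" idea the commenter describes, here is one hypothetical shape it could take—this is not Recallio's actual API, just an illustrative sketch of state kept outside the prompt:]

```python
# Hypothetical sketch of scoped external memory: persistent key-value
# state partitioned by (user, agent) scope, living outside the prompt.
# Not any real product's API.
from collections import defaultdict

class ScopedMemory:
    """In-memory stand-in for a persistent, scoped memory store."""
    def __init__(self):
        self._store = defaultdict(dict)

    def write(self, scope: tuple, key: str, value):
        self._store[scope][key] = value

    def read(self, scope: tuple, key: str, default=None):
        return self._store[scope].get(key, default)

mem = ScopedMemory()
mem.write(("user42", "planner"), "last_itinerary", "NYC->SFO")
print(mem.read(("user42", "planner"), "last_itinerary"))   # NYC->SFO
print(mem.read(("user42", "executor"), "last_itinerary"))  # None
```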

Have you seen teams blending external memory/state successfully in production? Or are most still trapped inside the prompt+vector loop?

dpao001•11mo ago
What is your opinion on Manus? Is it closing in on AGI, or is it, as you suggest, a sticking plaster waiting to break?
