LLMs Are Great, but They're Not Everything

4•procha•6mo ago
Three years after ChatGPT’s release, LLMs are in everything—demos, strategies, and visions of AGI. But from my observer’s perspective, the assumptions we’re making about what LLMs can do seem to be drifting from architectural reality.

LLMs are amazing at unstructured information—synthesizing, summarizing, reasoning loosely across large corpora. But they are not built for deterministic workflows or structured multi-step logic. And many of today’s most hyped AI use cases are sold exactly like that.

Architecture Matters

We often conflate different AI paradigms:

    LLMs (Transformers): Predict token sequences based on context. Great with language, poor with state, goal-tracking, or structured tool execution.

    Symbolic AI / State Machines: Rigid logic, excellent for workflows—bad at fuzziness or ambiguity.

    Reinforcement Learning (RL): Optimizes behavior over time via feedback, good for planning and adaptation, harder to scale and train.

Each of these has a domain. The confusion arises when we treat one as universally applicable. Right now, we’re pushing LLMs into business-critical automation roles where deterministic control matters—and they often struggle.
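
To make that concrete, here is a minimal sketch in Python of what keeping each paradigm in its lane can look like: a plain state machine owns a refund workflow, and the LLM is confined to the one genuinely fuzzy step (interpreting free text). Every name here (call_llm, issue_refund, open_ticket) is a hypothetical placeholder, not any particular product’s API.

    from enum import Enum, auto

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any chat-completion API; canned reply for the sketch."""
        return "REFUND"

    def classify_intent(message: str) -> str:
        # The fuzzy part: map messy language to one of a few labels.
        label = call_llm(f"Label this message as REFUND or OTHER:\n{message}")
        return label.strip().upper()

    def issue_refund(order_id: str) -> None:
        print(f"refund issued for {order_id}")            # placeholder business logic

    def open_ticket(order_id: str, message: str) -> None:
        print(f"ticket opened for {order_id}: {message}")  # placeholder business logic

    class State(Enum):
        TRIAGE = auto()
        REFUND = auto()
        ESCALATE = auto()
        DONE = auto()

    def run_workflow(message: str, order_id: str) -> None:
        # The deterministic part: explicit states, explicit transitions,
        # nothing left for the model to improvise.
        state = State.TRIAGE
        while state is not State.DONE:
            if state is State.TRIAGE:
                state = State.REFUND if classify_intent(message) == "REFUND" else State.ESCALATE
            elif state is State.REFUND:
                issue_refund(order_id)
                state = State.DONE
            elif state is State.ESCALATE:
                open_ticket(order_id, message)
                state = State.DONE

    run_workflow("I was double charged, please refund me", "A-1001")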

Agentic Frameworks: A Workaround, Not a Solution

Agentic frameworks have become popular: LLMs coordinating with other LLMs in roles like planner, executor, and supervisor. But in many cases, this is just masking a core limitation: tool calling and orchestration are brittle. When a single agent struggles to choose correctly from 5 tools, giving 10 tools to 2 agents doesn’t solve the problem; it just moves the bottleneck.

Supervising a growing number of agents becomes exponentially harder, especially without persistent memory or shared state. At some point, these setups feel less like robust systems and more like committee members hallucinating their way through vague job descriptions.
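
To illustrate why the tool-calling layer stays brittle, here is a rough sketch of a single routing step and what it has to defend against; the tool names and the call_llm stub are illustrative, not any real framework’s API.

    # A single LLM routing step and the ways it can fail.
    TOOLS = {
        "search_orders": lambda arg: f"orders matching {arg!r}",
        "issue_refund":  lambda arg: f"refund issued for {arg!r}",
        "update_crm":    lambda arg: f"CRM updated for {arg!r}",
    }

    def call_llm(prompt: str) -> str:
        """Stand-in for any chat-completion call; canned free-text reply for the sketch."""
        return "issue_refund: order A-1001"

    def route(user_request: str) -> str:
        reply = call_llm(
            f"Pick exactly one tool from {sorted(TOOLS)} and an argument, "
            f"formatted as 'tool: argument'.\nRequest: {user_request}"
        )
        # Failure mode 1: the reply doesn't match the expected format at all.
        if ":" not in reply:
            return "error: unparseable tool call"
        name, arg = (part.strip() for part in reply.split(":", 1))
        # Failure mode 2: the model names a tool that doesn't exist.
        if name not in TOOLS:
            return f"error: unknown tool {name!r}"
        # Failure mode 3: the tool exists but is the wrong one for the
        # request -- nothing at this layer can detect that.
        return TOOLS[name](arg)

    print(route("I was double charged on my last order"))

Splitting the same decision across a planner agent and an executor agent removes none of these failure modes; it adds a second free-text hand-off that needs exactly the same guarding, which is the moved bottleneck in practice.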

The Demo Trap

A lot of what gets shown in product demos—“AI agents booking travel, updating CRMs, diagnosing errors”—doesn’t hold up in production. Tools get misused, calls fail, edge cases break flows. The issue isn’t that LLMs are bad; it’s that language prediction is not a process engine.

If even humans struggle to execute complex logic reliably, expecting LLMs to replace structured automation is not vision; it’s optimism bias.
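
Part of the demo-to-production gap is just arithmetic. If each LLM-driven step in a flow succeeds, say, 95% of the time (an illustrative assumption, not a benchmark), chaining independent steps erodes reliability fast:

    # Back-of-the-envelope: end-to-end success of a chain of independent
    # LLM-driven steps. 0.95 per step is an assumed figure for illustration.
    per_step_success = 0.95

    for steps in (1, 5, 10, 20):
        print(f"{steps:>2} steps -> {per_step_success ** steps:.0%} end-to-end")

That comes out to roughly 95%, 77%, 60%, and 36%: good enough for a scripted demo, not for a business-critical workflow unless deterministic checkpoints, validation, and retries surround each step.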

On the Silence of Those Who Know Better

What’s most puzzling is the silence of those who could say this clearly: the lab founders, the highly respected researchers, the already-rich executives. These are people who know that LLMs aren’t general agents. They have nothing to lose by telling the truth and everything to gain by being remembered as honest stewards.

Instead, they mostly play along. The AGI narrative rolls forward. Caution is reframed as doubt. Realistic planning becomes an obstacle to growth.

I get it: markets, momentum, investor expectations. But still, it’s hard not to feel that something more ethical and lasting is being passed over in favor of short-term shine.

A Final Thought

I might be wrong—but it’s hard to ignore the widening gap between what LLMs are and what C-level execs and investors want them to be. Engineering teams are under pressure to deliver the Hollywood dream, but that dream often doesn’t materialize. Meanwhile, sunk costs pile up, and the clock keeps ticking. This isn’t pessimism; it’s recognizing that hype has gravity and reality has limits. I’d love to be proven wrong and will happily jump on the beautiful AI hype train if it ever truly arrives.

Comments

designorbit•6mo ago
Love this perspective. You nailed the core issue: LLMs ≠ process engines. And agentic frameworks that stack roles often end up masking fragility instead of fixing it.

One thing I’ve been exploring is this middle ground—what if we stop treating LLMs as process executors, and instead make them contextual participants powered by structured, external memory + state layers?

I’m building Recallio as a plug-and-play memory API exactly for this gap: letting agents/apps access persistent, scoped memory without duct-taping vector DBs and custom orchestration every time.

Totally agree the dream won’t materialize through token prediction alone—but maybe it does if we reconnect LLMs with better state + memory infra.
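
For anyone curious what that state layer can look like, here is a rough sketch of the general shape of a scoped, persistent memory store (an illustrative interface only, not Recallio’s actual API): the LLM never owns this state, it just gets the relevant slice injected into its context.

    # Illustrative shape of a scoped, persistent memory layer --
    # not Recallio's actual API, just the general idea.
    import json, sqlite3, time

    class ScopedMemory:
        """Key-value memory partitioned by scope (user, team, agent, ...)."""

        def __init__(self, path: str = "memory.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS memory ("
                "scope TEXT, item_key TEXT, item_value TEXT, ts REAL, "
                "PRIMARY KEY (scope, item_key))"
            )

        def write(self, scope: str, key: str, value: dict) -> None:
            self.db.execute(
                "INSERT OR REPLACE INTO memory VALUES (?, ?, ?, ?)",
                (scope, key, json.dumps(value), time.time()),
            )
            self.db.commit()

        def read(self, scope: str, key: str) -> dict | None:
            row = self.db.execute(
                "SELECT item_value FROM memory WHERE scope = ? AND item_key = ?",
                (scope, key),
            ).fetchone()
            return json.loads(row[0]) if row else None

    mem = ScopedMemory()
    mem.write("user:42", "open_ticket", {"id": "T-1093", "status": "waiting"})
    print(mem.read("user:42", "open_ticket"))  # injected into the prompt, not owned by it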

Have you seen teams blending external memory/state successfully in production? Or are most still trapped inside the prompt+vector loop?

dpao001•6mo ago
What is your opinion on Manus? Is it closing in on AGI, or is it, as you suggest, a sticking plaster waiting to break?

Appstinence

https://appstinence.org/
1•indus•1m ago•0 comments

James Watson Saw the True Form of DNA. Then It Blinded Him.

https://www.nytimes.com/2025/11/16/opinion/james-watson-dna.html
1•voxadam•2m ago•1 comments

Deep Dive into FFmpeg 8.0

https://www.rendi.dev/post/ffmpeg-8-0-part-1-using-whisper-for-native-video-transcription-in-ffmpeg
3•dutzi•4m ago•1 comments

Chrome extension permission usage stats

https://chrome-stats.com/permission
1•hao1300•4m ago•0 comments

The Only GM EV1 Ever Publicly Sold, and Where It's Going Next

https://www.theautopian.com/how-the-only-gm-ev1-ever-sold-didnt-get-crushed-and-where-its-going-now/
1•zdw•6m ago•0 comments

How to write prompts for voice AI agents

https://layercode.com/blog/how-to-write-prompts-for-voice-ai-agents
1•mooreds•8m ago•0 comments

Legal Restrictions on Vulnerability Disclosure

https://www.schneier.com/blog/archives/2025/11/legal-restrictions-on-vulnerability-disclosure.html
1•zdw•8m ago•0 comments

Rails, Roads and AI Reporting

https://inconvo.com/blog/rails-roads-and-ai-reporting/
2•ogham•8m ago•0 comments

Microsoft's Agent 365 Wants to Help You Manage Your AI Bot Army

https://www.wired.com/story/microsoft-ai-agent-365/
1•mooreds•8m ago•0 comments

Larry Summers resigns from OpenAI board following release of Epstein emails

https://www.nbcnews.com/tech/tech-news/larry-summers-resigns-openai-board-jeffrey-epstein-emails-...
5•ortusdux•9m ago•0 comments

Can Open-Source AI Introspect?

https://joshfonseca.com/blogs/introspection
1•homarp•10m ago•1 comments

This Ain't Yer Grandaddy's C (Tricks for Writing Gorgeous C)

http://spader.zone/tricks/
1•dboon•12m ago•0 comments

Patent for Hexadecimal Abacus

https://patents.google.com/patent/US4812124A/en
1•jbki•13m ago•0 comments

We're (now) moving from OpenBSD to FreeBSD for firewalls

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/OpenBSDToFreeBSDMove
2•zdw•16m ago•0 comments

Guide to responsible AI implementation in healthcare

https://dimesociety.org/ai-implementation-in-healthcare-playbook/
1•debo_•16m ago•0 comments

Discovery of a Special Type of Immune Cell That Slows Aging in Mice

https://www.nature.com/articles/s43587-025-00953-8
1•stevenjgarner•16m ago•0 comments

Show HN: SemanticsAV – Free, offline AI malware scanner for Linux

https://github.com/metaforensics-ai/semantics-av-cli
1•mf-skjung•17m ago•0 comments

Show HN: Baserow 2.0 – Self-hosted no-code data platform with automations and AI

https://baserow.io/blog/baserow-2-0-release-notes
2•bram2w•17m ago•0 comments

An Agent Framework with Hardware Feedback for CUDA Kernel Optimization

https://arxiv.org/abs/2511.01884
1•PaulHoule•18m ago•0 comments

New lab-made bone marrow model is a bioengineering first

https://www.popsci.com/health/human-bone-marrow-model/
1•Brajeshwar•20m ago•0 comments

AI is about to face an enormous test. The market is nervous

https://www.cnn.com/2025/11/19/markets/nvidia-us-stock-market
1•mooreds•20m ago•0 comments

Archaeologists may have uncovered a Bronze Age metropolis in Kazakhstan's steppe

https://www.cnn.com/2025/11/18/science/semiyarka-bronze-age-eurasia-steppe
2•Brajeshwar•20m ago•0 comments

CDC data confirms US is 2 months away from losing measles elimination status

https://arstechnica.com/health/2025/11/cdc-data-confirms-us-is-2-months-away-from-losing-measles-...
4•LordAtlas•21m ago•0 comments

The Commodification of Minimalism (2023)

https://quinnmaclay.com/texts/minimalism
1•Brajeshwar•21m ago•0 comments

Aligning brains into a shared space improves their alignment with LLMs

https://www.nature.com/articles/s43588-025-00900-y
2•stevenjgarner•22m ago•0 comments

Open Source Distributed AI Stack: ArgoCD, MicroK8s, VLLM, and NetBird

https://old.reddit.com/r/netbird/comments/1oyivxr/we_ran_an_experiment/
1•devildriver89•23m ago•0 comments

One of America's most dangerous volcanoes will soon power homes

https://www.washingtonpost.com/climate-solutions/2025/11/19/volcano-geothermal-energy/
1•pseudolus•23m ago•1 comments

Epic announces partnership to bring Unity games into Fortnite

https://www.gamesindustry.biz/epic-announces-partnership-to-bring-unity-games-into-fortnite
2•lairv•23m ago•1 comments

Arc Raiders and the Ethical Use of Generative AI in Games

https://www.aiandgames.com/p/arc-raiders-and-the-ethical-use-of
1•cpeterso•23m ago•0 comments

The AI Bubble with Tim El-Sheikh

https://www.machine-ethics.net/podcast/the-ai-bubble-with-tim-el-sheikh/
2•bbyford•24m ago•0 comments