
Ask HN: Why aren't local LLMs used as widely as we expected?

4•briansun•18h ago
On paper, local LLMs seem like a perfect fit for privacy‑sensitive work: no data leaves the machine, no marginal cost, and direct access to local data. Think law firms, financial agents, or companies where IT bans browser extensions and disallows cloud AI tools on work machines. Given that, I’d expect local models to be everywhere by now, yet they still feel niche.

I’m trying to understand what’s in the way. My hypotheses (and I’d love corrections):

1) People optimize for output quality over privacy.

2) Hardware is still far behind.

3) The tool people truly want (e.g., “a trustworthy, local‑only browser extension”) has yet to emerge.

4) No one has informed your lawyer about this yet.

5) Or: adoption is already happening, just not visibly.

It’s possible many teams are quietly using Ollama in daily work, and we just don’t hear about it.

Comments

codeptualize•18h ago
I think there are two cases:

1. Self hosting

2. Running locally on device

I have tried both, and find myself not using either.

For both, the quality is below the top‑performing hosted models in my experience. Part of it is the models themselves; part might be the application layer (ChatGPT/Claude). It would still work for a lot of use cases, but it certainly limits the possibilities.

The other issue is speed. You can run a lot of things even on fairly basic hardware, but the token speed is not great. Obviously you can get better hardware to mitigate that but then the cost goes up significantly.

For self hosting, you need a certain amount of throughput to make it worth having GPUs running. If your usage is spiky, you are either paying a lot for idle GPUs or you have horrible cold‑start times.
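The utilization point above can be made concrete with back‑of‑the‑envelope arithmetic. The sketch below uses purely hypothetical numbers ($2/hour GPU, 50 tok/s sustained); substitute your own rates.

```python
# Back-of-the-envelope: effective cost per million tokens for self-hosting.
# All numbers are hypothetical placeholders, not benchmarks.

def self_hosted_cost_per_mtok(gpu_hourly_usd: float,
                              tokens_per_second: float,
                              utilization: float) -> float:
    """Effective $/1M tokens for a rented or owned GPU.

    utilization is the fraction of wall-clock time the GPU is actually
    serving tokens; spiky usage means low utilization, so the cost of
    idle hours is amortized over fewer tokens.
    """
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical: $2/hour GPU sustaining 50 tok/s when busy.
for util in (1.0, 0.25, 0.05):
    cost = self_hosted_cost_per_mtok(2.0, 50.0, util)
    print(f"utilization {util:>4.0%}: ${cost:,.2f} per 1M tokens")
```

At full utilization the hypothetical GPU lands around $11 per million tokens; at 5% utilization the same hardware costs over $200 per million, which is the "paying for idle GPUs" problem in one number.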

Privacy‑wise: the business/enterprise terms of service of all the big model providers give enough privacy guarantees for all, or at least most, use cases. You can also get your own OpenAI infra on Azure, for example; I assume with enough scale you can get even more customized contracts and data controls.

Conclusion: quality, speed, and price all favor hosted models, and you can use the hosted versions even in privacy‑sensitive settings.

briansun•7h ago
Thanks — I agree with your three big pain points: quality vs hosted SOTA, token speed, and economics/utilization.

Have you run into cases where on‑device still makes sense?

1. Data that is contractually/regulatorily prohibited from being sent to any third‑party processor (no exceptions).

2. Very large datasets where throughput can be low (overnight runs are acceptable) but cloud model costs are high.

3. Inputs behind a password wall that hosted assistants (ChatGPT/Claude) can’t reach and can’t act on agentically.
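For case 2, a quick wall‑clock estimate shows when an overnight local batch is actually feasible. The numbers below are illustrative assumptions (1,000 documents of ~1,500 tokens at 40 tok/s on‑device), not measurements.

```python
# Rough wall-clock estimate for an overnight local batch job.
# Inputs are illustrative assumptions; plug in your own corpus and token rate.

def batch_hours(num_docs: int,
                tokens_per_doc: int,
                tokens_per_second: float) -> float:
    """Hours needed to push num_docs through a local model at a given rate."""
    total_tokens = num_docs * tokens_per_doc
    return total_tokens / tokens_per_second / 3600

# Hypothetical: 1,000 documents, ~1,500 tokens each, 40 tok/s on-device.
hours = batch_hours(1_000, 1_500, 40.0)
print(f"~{hours:.1f} hours")
```

Under these assumptions the job fits in roughly one overnight run (~10 hours); scale the document count or token rate by 10x in either direction and the conclusion flips.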

gobdovan•18h ago
If you are a company that wants the advantages of a maintained local‑like LLM and your data already lives in AWS, you'll naturally use Bedrock for cost savings. Given that most companies are on cloud, it makes sense they won't do a local setup just for the data to go back to AWS anyway.

For consumers, it actually requires quite powerful hardware, and you won't get the same tokens per second nor the same accuracy as an online LLM. And online LLMs already have infrastructure for search‑engine integration and agent‑like behavior that simply makes them better for a wider range of tasks.

This covers most people and companies. So either the local experience is far worse than online (for most practitioners), or you already have a local‑like LLM in the cloud, where everything else of yours already lives. Not much space left for local on my own server/machine.

briansun•7h ago
Wouldn't it be cool to have a local AI agent? It could access search engines and browse any website through a headless browser.

just_human•17h ago
Having worked in a (very) privacy-sensitive environment, the quality of the hosted foundation models is still vastly superior to any open‑weight model for practical tasks. The foundation‑model companies (OpenAI, Anthropic, etc.) are willing to sign deals with enterprises that offer reasonable protections and keep sensitive data secure, so I don't think privacy or security is a reason why enterprises would shift to open‑weight models.

That said, I think there is a lot of adoption of open weights for cost-sensitive features built into applications. But I'd argue this is due to cost, not privacy.

briansun•7h ago
Thanks for the view from a very privacy‑sensitive environment — agreed that hosted SOTA still leads on broad capability.

Could you share a quick split: which tasks truly require hosted SOTA rather than open‑weight models? I think gpt-oss is quite good for a lot of things.

SMBs can’t get enterprise contracts with OpenAI/Anthropic, so local/open‑weight may be their only viable path — or wait for a hybrid plan.

jaggs•17h ago
Two reasons?

1. Management

2. Scalability

Running your own local AI takes time, expertise and commitment. Right now the ROI is probably not strong enough to warrant the effort.

Couple this with the fact that it's not clear how much local compute power you need, and it's easy to see why companies are hesitating.

Interestingly enough, there are definitely a number of sectors using local AI with gusto. The financial sector comes to mind.

briansun•7h ago
Well put. Management overhead + unclear capacity planning kill many pilots.

pmontra•12h ago
They are still too large to run on a normal laptop. Furthermore, there must be room left for doing our job. It's a long way until what we use online is within reach of a $2000 laptop, better yet a $1000 one. My laptop won't run any of them at even an unreasonable speed; it's just too slow.

briansun•8h ago
Totally fair. On a normal laptop you also need headroom to do your actual job, and KV cache + context length can eat that quickly.
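The headroom point is easy to quantify. The sketch below uses the standard weight and KV‑cache memory formulas with a hypothetical 7B‑class model shape (32 layers, 8 grouped‑query KV heads of dimension 128); adjust the numbers to whatever model you actually run.

```python
# Rough RAM/VRAM estimate: quantized weights plus KV cache.
# The model shape is a hypothetical 7B-class config, not any specific model.

def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Memory for model weights at a given quantization level."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache size: K and V tensors (the leading 2x) per layer,
    per KV head, per head dimension, per context position."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

w = weights_gb(7.0, 4)                 # 7B params at 4-bit quantization
kv = kv_cache_gb(32, 8, 128, 32_768)   # 32k context, fp16 cache
print(f"weights ~{w:.1f} GB, kv cache ~{kv:.1f} GB, total ~{w + kv:.1f} GB")
```

Under these assumptions a 4‑bit 7B model with a full 32k‑token context already wants ~8 GB, before the OS, browser, and IDE get any of the laptop's memory, which is exactly the headroom problem.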
