Ask HN: Why aren't local LLMs used as widely as we expected?

5•briansun•5mo ago
On paper, local LLMs seem like a perfect fit for privacy‑sensitive work: no data leaves the machine, no marginal cost, and direct access to local data. Think law firms, financial agents, or companies where IT bans browser extensions and disallows cloud AI tools on work machines. Given that, I’d expect local models to be everywhere by now—yet they still feel niche.

I’m trying to understand what’s in the way. My hypotheses (and I’d love corrections):

1) People optimize for output quality over privacy.

2) Hardware is far behind.

3) The tool people truly want (e.g., “a trustworthy, local‑only browser extension”) has yet to emerge.

4) Lawyers and compliance teams haven’t caught on to this option yet.

5) Or: adoption is already happening, just not visibly.

It’s possible many teams are quietly using Ollama in daily work, and we just don’t hear about it.

Comments

codeptualize•5mo ago
I think there are two cases:

1. Self hosting

2. Running locally on device

I have tried both, and find myself not using either.

For both, the quality falls short of the top-performing models in my experience. Part of it is the models themselves, part might be the application layer (ChatGPT/Claude). It would still work for a lot of use cases, but it certainly limits the possibilities.

The other issue is speed. You can run a lot of things even on fairly basic hardware, but the token speed is not great. Obviously you can get better hardware to mitigate that but then the cost goes up significantly.

For self hosting, you need a certain amount of throughput to make it worth having GPUs running. If you have spiky usage, you are either paying a bunch for idle GPUs or you have horrible cold-start times.
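The utilization trade-off described above can be made concrete with a back-of-the-envelope sketch. All numbers below (GPU rental price, throughput, API price) are illustrative assumptions, not figures from the comment:

```python
# Rough break-even sketch: self-hosted GPU vs. a hosted API.
# Every constant here is an assumed, illustrative number.

GPU_COST_PER_HOUR = 2.00      # assumed cloud GPU rental, $/hour (billed even when idle)
GPU_TOKENS_PER_SECOND = 1000  # assumed sustained throughput at full load
API_COST_PER_MTOK = 1.00      # assumed hosted API price, $ per 1M tokens

def self_host_cost_per_mtok(utilization: float) -> float:
    """Cost per 1M generated tokens when the GPU is busy for `utilization`
    fraction of the time. Idle hours are still paid for, so low
    utilization inflates the effective per-token price."""
    tokens_per_hour = GPU_TOKENS_PER_SECOND * 3600 * utilization
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

for u in (0.05, 0.25, 1.00):
    print(f"utilization {u:>4.0%}: ${self_host_cost_per_mtok(u):.2f} per 1M tokens")
```

Under these assumed numbers, self-hosting only beats the API price above roughly 55% sustained utilization, which is exactly the spiky-usage problem the comment describes.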

Privacy wise: the business/enterprise terms of service of all the big model providers give enough privacy guarantees for all, or at least most, use cases. You can also get your own OpenAI infrastructure on Azure, for example; I assume with enough scale you can get even more customized contracts and data controls.

Conclusion: quality, speed, and price all favor hosted, and you can use the hosted versions even in privacy-sensitive settings.

briansun•5mo ago
Thanks — I agree with your three big pain points: quality vs hosted SOTA, token speed, and economics/utilization.

Have you run into cases where on‑device still makes sense?

1. Data that is contractually/regulatorily prohibited from being sent to any third‑party processor (no exceptions).

2. Very large datasets where throughput can be low (overnights acceptable) but the cost is high for cloud models.

3. Inputs behind a login wall that hosted assistants (ChatGPT/Claude) can’t reach, and therefore can’t act on agentically.

gobdovan•5mo ago
If you are a company that wants the advantages of a maintained, local-like LLM and your data already lives in AWS, you'll naturally use Bedrock for cost savings. Given that most companies are on the cloud, it makes sense that they won't do a local setup just for the data to end up back on AWS anyway.

For consumers, it actually requires quite powerful hardware, and you won't get the same tokens per minute or the same quality as an online LLM. Online LLMs also already have infrastructure for search-engine access and agent-like behavior that simply makes them better for a wider range of tasks.

This covers most people and companies. So either the local experience is far worse than online (for most practitioners), or you already have a local-like LLM in the cloud, where everything else of yours already lives. That leaves little room for "local on my own server/machine."

briansun•5mo ago
Wouldn't it be cool to have a local AI agent? It could access search engines and browse any website through a headless browser.
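One minimal sketch of such an agent: fetch a page locally, then ask a locally served model about it. This assumes an Ollama server running at its default port (localhost:11434) and some already-pulled model; the model name below is a placeholder, not a recommendation:

```python
# Sketch of a local "fetch + ask" agent. Assumes an Ollama server at
# localhost:11434 with a locally pulled model; both are assumptions
# about the setup, not requirements of any particular product.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3.1:8b"  # placeholder: any locally pulled model

def build_request(page_text: str, question: str) -> dict:
    """Build the Ollama /api/chat payload: page content plus a question."""
    return {
        "model": MODEL,
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided page content."},
            {"role": "user",
             "content": f"Page:\n{page_text}\n\nQuestion: {question}"},
        ],
    }

def ask_local_model(page_text: str, question: str) -> str:
    """POST to the local model; nothing leaves the machine."""
    body = json.dumps(build_request(page_text, question)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Fetch a page with the local machine's own network access,
    # then let the local model reason over it.
    page = urllib.request.urlopen("https://example.com").read().decode()
    print(ask_local_model(page, "What is this page about?"))
```

A real agent would swap the plain fetch for a headless browser (to handle logins and JavaScript), but the privacy property is the same: the page content and the prompt never leave the machine.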
just_human•5mo ago
Having worked in a (very) privacy-sensitive environment, the quality of the hosted foundation models is still vastly superior to any open-weight model for practical tasks. The foundation-model companies (OpenAI, Anthropic, etc.) are willing to sign deals with enterprises that offer reasonable protections and keep sensitive data secure, so I don't think privacy or security is a reason why enterprises would shift to open-weight models.

That said, I think there is a lot of adoption of open-weight models for cost-sensitive features built into applications. But I'd argue this is due to cost, not privacy.

briansun•5mo ago
Thanks for the view from a very privacy‑sensitive environment — agreed that hosted SOTA still leads on broad capability.

Could you share a quick split: which tasks truly require hosted SOTA rather than open‑weight? I think gpt-oss is quite good for a lot of things.

SMBs can’t get enterprise contracts with OpenAI/Anthropic, so local/open‑weight may be their only viable path — or wait for a hybrid plan.

jaggs•5mo ago
Two reasons?

1. Management

2. Scalability

Running your own local AI takes time, expertise and commitment. Right now the ROI is probably not strong enough to warrant the effort.

Couple this with the fact that it's not clear how much local compute power you need, and it's easy to see why companies are hesitating.

Interestingly enough, there are definitely a number of sectors using local AI with gusto. The financial sector comes to mind.

briansun•5mo ago
Well put. Management overhead + unclear capacity planning kills many pilots.
pmontra•5mo ago
They are still too large to run on a normal laptop. Furthermore, there must be room left for doing our job. It's a long way until what we use online is within reach of a $2,000 laptop, better yet a $1,000 one. My laptop won't run any of them at even a reasonable speed.
briansun•5mo ago
Totally fair. On a normal laptop you also need headroom to do your actual job, and KV cache + context length can eat that quickly.
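To show how quickly that headroom disappears, here is a back-of-the-envelope memory estimate. The architecture numbers approximate a 7B-parameter Llama-2-class model (32 layers, 32 KV heads, head dimension 128, no grouped-query attention); treat them as assumptions for illustration:

```python
# Back-of-the-envelope memory for a 7B-class model on a laptop.
# Architecture numbers approximate a Llama-2-7B-style model (assumed).

GIB = 1024**3
PARAMS = 7e9
LAYERS = 32
KV_HEADS = 32           # full multi-head attention, no GQA
HEAD_DIM = 128
BYTES_PER_WEIGHT = 0.5  # 4-bit quantized weights
BYTES_PER_KV = 2        # fp16 KV cache entries

def weights_gib() -> float:
    """Memory for the quantized weights alone."""
    return PARAMS * BYTES_PER_WEIGHT / GIB

def kv_cache_gib(context_len: int) -> float:
    """KV cache grows linearly with context: 2 tensors (K and V)
    per layer, per token, each kv_heads * head_dim wide."""
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_KV * context_len / GIB

print(f"weights (4-bit):    {weights_gib():.1f} GiB")
print(f"KV cache @ 8k ctx:  {kv_cache_gib(8192):.1f} GiB")
print(f"KV cache @ 32k ctx: {kv_cache_gib(32768):.1f} GiB")
```

Under these assumptions, the 8k-context KV cache already exceeds the quantized weights themselves, and a 32k context alone would swamp a 16 GiB laptop before the OS, browser, and your actual work get any memory at all.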