frontpage.

The Second Half of the Chessboard

https://joshs.bearblog.dev/the-second-half-of-the-chessboard-draft/
1•psychedare•39s ago•0 comments

Miller – CLI tool for querying, shaping, and reformatting data in many formats

https://miller.readthedocs.io/en/6.16.0/
1•smartmic•1m ago•0 comments

What Happened in El Paso? – By James Fallows

https://fallows.substack.com/p/what-happened-in-el-paso
1•MaysonL•1m ago•0 comments

I made a real BMO local AI agent with a Raspberry Pi and Ollama

https://www.youtube.com/watch?v=l5ggH-YhuAw
1•emigre•5m ago•0 comments

Show HN: Let AI agents try things without consequences

https://github.com/multikernel/branching
1•wang_cong•5m ago•0 comments

California Drastically Reduces Creditor Exemptions for Qualified Accounts (2024)

https://www.forbes.com/sites/jayadkisson/2024/10/07/california-drastically-reduces-creditor-exemp...
1•dataflow•10m ago•0 comments

Show HN: A tool to keep dotfiles and system configs in sync with a Git repo

https://github.com/senotrusov/etcdotica
1•senotrusov•11m ago•0 comments

Show HN: Design Memory – Extract design systems from live websites via CLI

https://github.com/memvid/design-memory
1•saleban1031•13m ago•0 comments

Kimi Claw

https://twitter.com/Kimi_Moonshot/status/2023029674549596301
1•tosh•14m ago•0 comments

Show HN: GPU Perpetual Futures Prototype

https://github.com/zacharyfrederick/compex
1•ozzymandiaz96•19m ago•0 comments

Show HN: DoScript – Automation language with English-like syntax

https://github.com/TheServer-lab/DoScript
1•server-lab•20m ago•0 comments

Ideas for an Agent-Oriented Programming Language

https://davi.sh/blog/2026/02/markov-ideas/
1•davish•20m ago•0 comments

Show HN: I've Seen the Future of the Software "Engineer" Gig – It's Orwellian AF

3•burnerToBetOut•25m ago•0 comments

Attack. Attack. Attack

https://twitter.com/ADoricko/status/1868113416805814530
1•mold_aid•26m ago•1 comments

Dell XPS Core Ultra 7 355 Panther Lake: Still great, but not nearly as special

https://www.notebookcheck.net/Dell-XPS-14-Core-Ultra-7-355-review-Still-great-but-not-nearly-as-s...
1•cromka•27m ago•1 comments

DDD: Back to Basics

https://docs.eventsourcingdb.io/blog/2026/02/16/ddd-back-to-basics/
1•goloroden•28m ago•0 comments

Show HN: OpenContext – Bring Your Own Coding Agent, Local-First, No Vendor Lock

https://github.com/adityak74/opencontext
1•akarnam37•30m ago•0 comments

We Uncovered the Scheme Keeping Grocery Prices High [video]

https://www.youtube.com/watch?v=odhVF_xLIQA
3•dataflow•31m ago•0 comments

Show HN: iherb-CLI – An agent-optimized CLI for AI-driven supplement research

https://github.com/SeverinAlexB/iherb-cli
1•sebubu•32m ago•0 comments

Is End-to-End Encryption Optional for Large Groups?

https://soatok.blog/2026/02/14/is-end-to-end-encryption-optional-for-large-groups/
1•iamnothere•34m ago•0 comments

Defer Available in GCC and Clang

https://gustedt.wordpress.com/2026/02/15/defer-available-in-gcc-and-clang/
2•ingve•35m ago•0 comments

USB overclock Linux kernel module

https://github.com/p0358/usb_oc-dkms
1•rhim•36m ago•0 comments

AI-enabled stethoscope twice as efficient at detecting heart disease

https://www.escardio.org/news/press/press-releases/ai-stethoscope/
1•geox•39m ago•2 comments

Show HN: Nomousemode – keyboard window switcher for macOS

https://nomousemode.vercel.app/
1•ahalurooji•41m ago•0 comments

Utah homes are 3.5x the size of the typical British one

https://brilliantmaps.com/home-size-us-europe/
3•delichon•42m ago•2 comments

Show HN: Please hack my C webserver (it's a collaborative whiteboard)

https://ced.quest/draw/
1•cedric_h•43m ago•0 comments

Show HN: Refine.tools – 10 free AI career tools, no signup, no data stored

https://www.refine.tools/
1•HarakiriGod•46m ago•0 comments

Situate Your Essay

https://www.overcomingbias.com/p/situate-your-essay
2•paulpauper•46m ago•0 comments

The Philosopher of Games

https://www.honest-broker.com/p/the-philosopher-of-games
1•paulpauper•47m ago•0 comments

MicroFab – Chip Automation Game

https://microfabgame.co.uk
3•flirp•47m ago•1 comments

Where Does "ollama run glm-5:cloud" Run? And Other Security Blunders

https://docs.ollama.com/cloud
3•coolguysailer•1h ago

Comments

coolguysailer•1h ago
My security guy just proudly explained that he was running nanoclaw locally with glm-5... I asked how much memory his Mac M1 has and he answered "16GB". I asked how many params the model had and he said "don't know". I asked what ollama command he was using: "ollama run glm-5:cloud".

This led me down the rabbit-hole of just how insidious this local-first branding for cloud models actually is.

Ollama built its reputation on a simple promise: run large language models locally, keep your data on your machine. It's a great tool. I use it. Millions of developers use it. The whole brand is "local-first AI." Then, quietly, Ollama shipped cloud models.

If you visit ollama.com/library/glm-5:cloud, you'll find GLM-5: a 744 billion parameter model built by Z.ai (formerly Zhipu AI), a Chinese AI lab. The :cloud tag means when you run it, your prompts leave your machine and get processed on remote GPUs somewhere. The command looks the same. The API is the same. Your terminal doesn't scream "WARNING: YOUR DATA IS LEAVING YOUR COMPUTER." It just works.

I started asking basic questions about this. I couldn't find answers to any of them.

The UX problem is the real danger

Here's what makes this especially concerning: the developer experience is designed to make local and cloud feel identical.

# This runs locally on your machine
ollama run llama3:8b

# This sends your prompt to unknown infrastructure
ollama run glm-5:cloud

Same CLI. Same API endpoint (localhost:11434). Same libraries. Same everything, except that one keeps your data on your machine and the other sends it somewhere you can't verify.
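To make that concrete, a minimal sketch against the generate endpoint (the prompt text is illustrative; it assumes, per the above, that cloud-tagged models are served through the same local port):

# Hedged sketch: both requests hit the same local endpoint; only the
# model name decides whether inference stays on this machine.
curl http://localhost:11434/api/generate -d '{"model": "llama3:8b", "prompt": "summarize this contract"}'
curl http://localhost:11434/api/generate -d '{"model": "glm-5:cloud", "prompt": "summarize this contract"}'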

When you run ollama ls, cloud models show up alongside local models. The only visual difference is a "-" where the file size would be, and the :cloud tag in the name. There's no warning, no confirmation prompt, no "you are about to send data externally."
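The tag does at least make cloud models greppable. A rough audit sketch, assuming cloud models keep appearing in the normal listing and in the local API's model list as described above:

# Hedged sketch: flag any installed model tagged :cloud.
ollama ls | grep ':cloud' && echo "WARNING: cloud-tagged models installed"

# The same listing is exposed by the local API that ollama ls uses:
curl -s http://localhost:11434/api/tags | grep -o '"[^"]*:cloud"'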

If you're using Ollama for local inference, nothing has changed. The local tool is still solid. But if you or anyone on your team is using :cloud models, you should be asking:

Do you know where your prompts are going? Not "Ollama's cloud", but the actual datacenter, provider, and jurisdiction.

Are you sending PII through cloud models? Resumes, medical records, financial data, customer information: any of this flowing through an unaudited cloud endpoint is a compliance risk.

Do you have controls to prevent accidental cloud usage? Ollama offers a local-only mode (OLLAMA_NOCLOUD=1), but it's opt-in; the default allows cloud. (One way to pin it is sketched after these questions.)

What's your fallback if Ollama's cloud gets compromised? With no SOC 2, no disclosed architecture, and a 21-person team, the blast radius of a breach could be significant.
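On the controls question: if the OLLAMA_NOCLOUD flag mentioned above behaves as described, one sketch is to pin it where the server actually runs rather than in each shell. This assumes a Linux box running Ollama as the usual systemd service; the last line only covers a server launched from an interactive shell.

# Hedged sketch: set the local-only flag at the service level,
# assuming OLLAMA_NOCLOUD works as described above.
sudo systemctl edit ollama.service    # add under [Service]: Environment="OLLAMA_NOCLOUD=1"
sudo systemctl restart ollama.service

# For a server launched from an interactive shell instead:
echo 'export OLLAMA_NOCLOUD=1' >> ~/.bashrc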

The local-first branding becomes a Trojan horse for cloud inference. Most developers won't read the changelog. Most won't notice the :cloud tag. Most won't ask where the compute is.

If you're running cloud inference for millions of developers, disclose where it runs. Name the datacenter provider, the jurisdiction, and whether inference stays on hardware you control or gets proxied to model providers. Get a third-party audit and publish it. And make cloud opt-in. Require explicit confirmation before a prompt leaves the user's machine, not a tag they might not notice.
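In the meantime, a team can approximate the confirmation step itself. A minimal bash sketch (the wrapper and its prompt wording are mine, not an Ollama feature):

# Hedged sketch: shadow the ollama binary with a function that demands
# confirmation whenever a :cloud tag appears in the arguments.
ollama() {
  if printf '%s ' "$@" | grep -q ':cloud'; then
    printf 'This model is cloud-hosted; prompts will leave this machine. Continue? [y/N] '
    read -r reply
    [ "$reply" = "y" ] || { echo "aborted"; return 1; }
  fi
  command ollama "$@"
}

Drop it into the shell profile of anyone with the CLI installed; it does nothing for callers hitting the API directly, which is exactly why the confirmation belongs upstream.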

Until then, treat :cloud the way you'd treat any unaudited third-party API: assume your data is being logged, and don't send anything you wouldn't post publicly.