
Ask HN: What's Your Useful Local LLM Stack?

53•Olshansky•8h ago
What I’m asking HN:

What does your actually useful local LLM stack look like?

I’m looking for something that provides you with real value — not just a sexy demo.

---

After a recent internet outage, I realized I need a local LLM setup as a backup — not just for experimentation and fun.

My daily (remote) LLM stack:

  - Claude Max ($100/mo): My go-to for pair programming. Heavy user of both the Claude web and desktop clients.

  - Windsurf Pro ($15/mo): Love the multi-line autocomplete and how it uses clipboard/context awareness.

  - ChatGPT Plus ($20/mo): My rubber duck, editor, and ideation partner. I use it for everything except code.

Here’s what I’ve cobbled together for my local stack so far:

Tools

  - Ollama: for running models locally

  - Aider: Claude-code-style CLI interface

  - VSCode w/ continue.dev extension: local chat & autocomplete

Models

  - Chat: llama3.1:latest

  - Autocomplete: Qwen2.5 Coder 1.5B

  - Coding/Editing: deepseek-coder-v2:16b
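For anyone reproducing this setup, the three models above map onto Ollama pulls roughly as follows. The exact tag names are assumptions based on the Ollama library at the time of writing; check `ollama list` and the library for current names.

```shell
# Pull the chat, autocomplete, and coding models listed above.
ollama pull llama3.1:latest          # chat
ollama pull qwen2.5-coder:1.5b       # autocomplete (small, low-latency)
ollama pull deepseek-coder-v2:16b    # coding/editing
```

The tiny autocomplete model is a deliberate trade: inline completion is latency-sensitive, so a 1.5B model that responds instantly often beats a better model that lags.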

Things I’m not worried about:

  - CPU/Memory (running on an M1 MacBook)

  - Cost (within reason)

  - Data privacy / being trained on (not trying to start a philosophical debate here)

I am worried about:

  - Actual usefulness (i.e. “vibes”)

  - Ease of use (tools that fit with my muscle memory)

  - Correctness (not benchmarks)

  - Latency & speed

Right now: I’ve got it working. I could make a slick demo. But it’s not actually useful yet.

---

Who I am

  - CTO of a small startup (5 amazing engineers)

  - 20 years of coding (since I was 13)

  - Ex-big tech

Comments

sshine•7h ago
I just use Claude Code ($20/mo.)

Sometimes with Vim, sometimes with VSCode.

Often just with a terminal for testing the stuff being made.

orthecreedence•7h ago
What plugins/integrations are you using in vim?
quxbar•7h ago
IMO you're better off investing in tooling that works with or without LLMs:

  - extremely clean, succinct code

  - autogenerated interfaces from an OpenAPI spec

  - exhaustive e2e testing

Once that is set up, you can treat your agents like (sleep-deprived) junior devs.

mhogers•7h ago
Autogenerated interfaces from an OpenAPI spec are so key: agents are extremely good at creating React code based on these interfaces (+ TypeScript + tests + lints, for extra feedback loops, etc.)
codybontecou•7h ago
It seems like you have a decent local stack in place. Unfortunately these systems feel leagues behind Claude Code and the current SOTA agentic coding tools. But they're great for simple reference lookups, like syntax questions.

Where I've found the most success with local models is with image generation, text-to-speech, and text-to-text translations.

bix6•7h ago
Following, as I haven’t found a solution. To me the local models feel outdated, and the lack of internet lookup causes issues.
alexhans•6h ago
Unless I'm misremembering, aider with Playwright [1] works with local models, so you can scrape the web.

Depending on your hardware you could do something like:

aider --model "ollama_chat/deepseek-r1:14b" --editor-model "ollama_chat/qwen2.5-coder:14b"

[1] - https://aider.chat/docs/install/optional.html#enable-playwri...

ashwinsundar•7h ago
I just go outside when my internet is down for 15 minutes a year. Or tether to my cell phone plan if the need is urgent.

I don't see the point of a local AI stack, outside of privacy or some ethical concerns (which a local stack doesn't solve anyway imo). I also *only* have 24GB of RAM on my laptop, which it sounds like isn't enough to run any of the best models. Am I missing something by not upgrading and running a high-performance LLM on my machine?

filchermcurr•7h ago
I would say cost is a factor. Maybe not for OP, but many people aren't able to spend $135 a month on AI services.
ashwinsundar•7h ago
Does the cost of a new computer not get factored in? I think I would need to spend $2000+ to run a decent model locally, and even then I can only run open source models

Not to mention, running a giant model locally for hours a day is sure to shorten the lifespan of the machine…

dpoloncsak•6h ago
$2000 for a new machine is only a little over a year in AI costs for OP
haiku2077•6h ago
The computer is a general purpose tool, though. You can play games, edit video and images, and self-host a movie/TV collection with real time transcoding with the same hardware. Many people have powerful PCs for playing games and running professional creative software already.

There's no reason running a model would shorten a machine's lifespan. PSUs, CPUs, motherboards, GPUs and RAM will all be long obsolete before they wear out even under full load. At worst you might have to swap thermal paste/pads a couple of years sooner. (A tube of paste is like, ten bucks.)

outworlder•5h ago
> Not to mention, running a giant model locally for hours a day is sure to shorten the lifespan of the machine…

That is not a thing. Unless there's something wrong (badly managed thermals, an undersized PSU at the limit of its capacity, dusty unfiltered air clogging fans, aggressive overclocking), that's what your computer is built for.

Sure, over a couple of decades there's more electromigration than would otherwise have happened at idle temps. But that's pretty much it.

> I think I would need to spend $2000+ to run a decent model locally

Not really. Repurpose second hand parts and you can do it for 1/4 of that cost. It can also be a server and do other things when you aren't running models.

FuriouslyAdrift•7h ago
I use Reasoner v1 (based on Qwen 2.5-Coder 7B) running locally for programming help/weird ideas/etc. $0
shock•5h ago
There were many hits when I searched for "Reasoner v1 (based on Qwen 2.5-Coder 7B)". Do you have a link to the one you are using?
FuriouslyAdrift•4h ago
Nomic GPT4All https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-co...

https://github.com/nomic-ai/gpt4all/releases

throwawayffffas•7h ago
What I have setup:

- Ollama: for running llm models

- OpenWebUI: For the chat experience https://docs.openwebui.com/

- ComfyUI: For Stable diffusion

What I use:

Mostly ComfyUI and occasionally the llms through OpenWebUI.

I have been meaning to try Aider. But mostly I use claude at great expense I might add.

Correctness is hit and miss.

Cost is much lower, and latency is better than, or at least on par with, cloud models, at least for the serial use case.

Caveat, in my case local means running on a server with gpus in my lan.
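For reference, the OpenWebUI quick-start documents running it via Docker and pointing it at an Ollama server on the host; a sketch (flags taken from that quick-start; verify against the current docs):

```shell
# Run OpenWebUI in Docker, talking to an Ollama server on the host machine.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000
```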

alkh•7h ago
I personally found Qwen2.5 Coder 7B to be on par with deepseek-coder-v2:16b (but it consumes less RAM at inference and is faster), so that's what I am using locally. I actually created a custom model called "oneliner" that uses Qwen2.5 Coder 7B as a base and this system prompt:

SYSTEM """ You are a professional coder. Your goal is to reply to the user's questions in a concise and clear way. Your reply must include only code or commands, so that the user could easily copy and paste them.

Follow these guidelines for python: 1) NEVER recommend using "pip install" directly, always recommend "python3 -m pip install" 2) The following are pypi modules: ruff, pylint, black, autopep8, etc. 3) If the error is module not found, recommend installing the module using "python3 -m pip install" command. 4) If activate is not available create an environment using "python3 -m venv .venv". """
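The custom model described above corresponds to an Ollama Modelfile; a minimal sketch, assuming the base tag is `qwen2.5-coder:7b` (the tag name is an assumption, and the system prompt is abbreviated here — use the full prompt from the comment):

```shell
# Build a custom "oneliner" model from a Modelfile (sketch; prompt abbreviated).
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
SYSTEM """You are a professional coder. Your goal is to reply to the user's
questions in a concise and clear way. Your reply must include only code or
commands, so that the user can easily copy and paste them."""
EOF
ollama create oneliner -f Modelfile

# Example usage:
ollama run oneliner "undo the last git commit but keep the changes"
```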

I specifically use it for asking quick questions in the terminal that I can copy & paste straight away (e.g. about git). For heavy lifting I use ChatGPT Plus (my own) + GitHub Copilot (provided by my company) + Gemini (also provided by my company).

Can someone explain how one can set up autocomplete via ollama? That's something I would be interested to try.
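(On the autocomplete question: continue.dev's tab-autocomplete can point at an Ollama model via its config file; a sketch of the relevant `config.json` fragment, with field names as documented by the extension at the time of writing — verify against the current docs:)

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5 Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```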

CamperBob2•6h ago
NEVER recommend using "pip install" directly, always recommend "python3 -m pip install"

Just out of curiosity, what's the difference?

Seems like all the cool kids are using uv.

th0ma5•6h ago
You mean to say that there is a lot of hype for uv because it is nice and quick, but it also gives junior people an easy rhetorical win in any discussion about Python packaging, so it is obviously going to be very popular even if it doesn't work for everyone.

The difference is essentially an attempt to decouple the environment from the runtime.

alkh•6h ago
I only recently switched to uv and previously used pyenv, so this was more relevant to me before. There are cases where pip might not point to the right Python version, while `python3 -m pip` ensures you use the same one as your environment. For me it is mostly a habit :)
jdthedisciple•6h ago
uv? guess I'm old school.

pip install it is for me

instagib•7h ago
It looks like continue.dev has a RAG implementation, but for other file types (PDF, Word, other languages) you need something else?

I’ve been going thru some of the neovim plugins for local llm support.

clvx•6h ago
In a related subject, what’s the best hardware to run local LLM’s for this use case? Assuming a budget of no more of $2.5K.

And, is there an open source implementation of an agentic workflow (search tools and others) to use it with local LLM’s?

seanmcdirmid•6h ago
I got a M3 max (the higher end one) with 64GB/ram macbook pro a while back for $3k, might be cheaper now now that the M3 ultra is out.
haiku2077•6h ago
I'm using Zed which supports Ollama on my M4 Macs.

https://zed.dev/blog/fastest-ai-code-editor

prettyblocks•6h ago
You can build a pretty good PC with a used 3090 for that budget. It will outperform anything else in terms of speed. Otherwise, you can get something like an m4 pro mac with 48gb ram.
apparent•31m ago
I've wondered about this also. I have an MBA and like that it's lightweight and relatively cheap. I could buy a MBP and max out the RAM, but I think getting a Mac mini with lots of RAM could actually make more sense. Has anyone set up something like this to make it available to their laptop/iPhone/etc.?

Seems like there would be cost advantages and always-online advantages. And the risk of a desktop computer getting damaged/stolen is much lower than for laptops.

timr•6h ago
I use Copilot, with the occasional free query to the other services. During coding, I mostly use Claude Sonnet 3.7 or 4 in agent mode, but Gemini 2.5 Pro is a close second. ChatGPT 4o is useless except for Q&A. I see no value in paying more -- the utility rapidly diminishes, because at this point the UI surrounding the models is far less important than the models themselves, which in turn are generally less important than the size of their context windows. Even Claude is only marginally better than Gemini (at coding), and they all suck to the point that I wouldn't trust any of them without reviewing every line. Far better to just pick a tool, get comfortable with it, and not screw around too much.

I don't understand people who pay hundreds of dollars a month for multiple tools. It feels like audiophiles paying $1000 for a platinum cable connector.

th0ma5•6h ago
For sure when people don't understand the fundamentals (or in the case of LLMs they are unknowable) then all you have is superstition.
650REDHAIR•4h ago
Why was this flagged?
ttkciar•1h ago
Senior software engineer with 46 years of experience (since I was 7). LLM inference hasn't been too useful for me for writing code, but it has proven very useful for explaining my coworkers' code to me.

Recently I had Gemma3-27B-it explain every Python script and library in a repo with the command:

$ find -name '*.py' -print -exec /home/ttk/bin/g3 "Explain this code in detail:\n\n`cat {}`" \; | tee explain.txt

There were a few files it couldn't figure out without other files, so I ran a second pass with those, giving it the source files it needed to understand source files that used them. Overall, pretty easy, and highly clarifying.

My shell script for wrapping llama.cpp's llama-cli and Gemma3: http://ciar.org/h/g3

That script references this grammar file which forces llama.cpp to infer only ASCII: http://ciar.org/h/ascii.gbnf
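(For context on the grammar trick: a GBNF grammar that restricts llama.cpp output to printable ASCII can be as small as a single rule. A sketch of what such a file might contain — the linked `ascii.gbnf` is the authoritative version:)

```
root ::= [ -~\t\n\r]*
```

The character class covers space through tilde (the printable ASCII range) plus whitespace, so any non-ASCII token is simply never sampled.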

Cost: electricity

I've been meaning to check out Aider and GLM-4, but even if it's all it's cracked up to be, I expect to use it sparingly. Skills which aren't exercised are lost, and I'd like to keep my programming skills sharp.
