I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.
You gotta help me out. What do you see holding it back?
Are you sharing any of your revenue from that $79 license fee with the https://ollama.com/ project that your app builds on top of?
Nice to have a local option, especially for some prompts.
I have a 48GB MacBook Pro, and Gemma3 (one of the abliterated ones) fits my non-code use case perfectly (generating crime stories in which the reader tries to guess the killer).
For code, I still call Google to use Gemini.
What I like about ollama is that it provides a self-hosted AI provider that can be used by a variety of things. LM Studio has that too, but you have to have the whole big chonky Electron UI running. Its UI is powerful but a lot less nice than e.g. BoltAI for casual use.
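For instance, anything on the box can hit ollama's local HTTP API (a minimal sketch; port 11434 is ollama's default, and the model tag is just an example):

```python
import requests

# Any local tool can talk to the self-hosted provider like this.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma3:12b", "prompt": "Say hi.", "stream": False},
)
print(resp.json()["response"])
```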
If you're just working as a single user via the OpenAI protocol, you might want to consider koboldcpp. It bundles a GUI launcher, then runs the server in text-only mode. You can also tell it to just run a saved configuration, bypassing the GUI entirely; I've successfully run it as a system service on Windows using nssm.
https://github.com/LostRuins/koboldcpp/releases
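The OpenAI-protocol route looks roughly like this (a sketch, assuming koboldcpp's default port 5001; the model name is mostly cosmetic since it serves whatever model it was launched with):

```python
from openai import OpenAI

# koboldcpp exposes an OpenAI-compatible endpoint under /v1.
client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="koboldcpp",
    messages=[{"role": "user", "content": "Hello over the OpenAI protocol"}],
)
print(reply.choices[0].message.content)
```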
Though there are a lot of roleplay-centric gimmicks in its feature set, its context-shifting feature is singular. It caches the intermediate state used by your last query, extending it to build the next one. As a result you save on generation time with large contexts, and also any conversation that has been pushed out of the context window still indirectly influences the current exchange.
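A toy illustration of the idea (not koboldcpp's actual code): keep the already-processed state around and, when the window overflows, evict the oldest tokens instead of reprocessing the whole prompt:

```python
WINDOW = 8   # max context length in tokens, tiny for demo purposes

cache = []   # tokens whose intermediate state we pretend is already computed

def extend_context(new_tokens):
    """Only the new tokens cost processing; the cached prefix is reused."""
    global cache
    cache.extend(new_tokens)
    overflow = len(cache) - WINDOW
    if overflow > 0:
        # Shift: drop the oldest tokens but keep the cached state
        # for everything that remains in the window.
        cache = cache[overflow:]
    return cache

print(extend_context([1, 2, 3, 4, 5, 6]))  # fits entirely
print(extend_context([7, 8, 9, 10]))       # oldest two tokens evicted
```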
Upon installing, the first model offered is google/gemma-3-12b, which in fairness is pretty decent compared to the others.
It's not obvious how to show the right sidebar they're talking about: it's the flask icon, which turns into a collapse icon when you click it.
I set the MCP up with Playwright, asked it to read the top headline from HN, and it got stuck in an infinite loop of navigating to Hacker News but doing nothing with the output.
I wanted to try it out with a few other models, but figuring out how to download new models isn't obvious either; it turned out to be the search icon. Anyway, the other models didn't fare much better: some outright ignored the tools despite nominally supporting 'tool use'.
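For what it's worth, the loop you'd expect the app to run under the hood looks roughly like this (a sketch against LM Studio's local server on its default port 1234; the tool schema and dispatcher are hypothetical stand-ins for the Playwright MCP bridge). The step the looping models seemed to miss is feeding the tool result back before continuing:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Hypothetical stand-in for a tool schema the Playwright MCP server exposes.
tools = [{
    "type": "function",
    "function": {
        "name": "browser_navigate",
        "description": "Navigate to a URL and return the page text",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

def run_tool(call):
    # Stand-in for dispatching the call to the MCP server.
    return "Example headline text from news.ycombinator.com"

messages = [{"role": "user", "content": "Read the top headline from HN."}]
while True:
    msg = client.chat.completions.create(
        model="google/gemma-3-12b", messages=messages, tools=tools
    ).choices[0].message
    if not msg.tool_calls:  # the model answered instead of calling a tool
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        # This is the step that breaks the loop: feed the result back,
        # otherwise the model just navigates again and again.
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": run_tool(call)})
```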
I'd love to learn more about your MCP implementation. Wanna chat?
chisleu•6h ago
Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with safari.
LM Studio is newish, and it's not a perfect interface yet, but it's fantastic at what it does, which is bringing local LLMs to the masses without them having to know much.
There is another project that people should be aware of: https://github.com/exo-explore/exo
Exo is this radically cool tool that automatically clusters all hosts on your network running Exo and uses their combined GPUs for increased throughput.
As in HPC environments, you're going to need ultra-fast interconnects, but it's all just IP-based.
dchest•6h ago
Probably should just use llama.cpp server/ollama and not waste a gig of memory on Electron, but I like GUIs.
hnuser123456•54m ago
https://www.pcgamer.com/apple-vp-says-8gb-ram-on-a-macbook-p...
incognito124•6h ago
Oof you were NOT joking
sneak•5h ago
I haven’t been using it much. All it has on it is LM Studio, Ollama, and Stats.app.
> Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with safari.
lol, yup. same.
chisleu•5h ago
I'm considering ordering one of these today: https://www.newegg.com/p/N82E16816139451?Item=N82E1681613945...
It looks like it will hold 5 GPUs, with a single slot open for InfiniBand.
Then local models might be lower quality, but it won't be slow! :)
evo_9•2h ago
Just wondering if Claude 3.7 has seemed different lately for anyone else? It was my go-to for several months, and I'm no fan of OpenAI, but o3 has been rock solid.
chisleu•5h ago
I'm interested in using models for code generation, but I'm not expecting much in that regard.
I'm planning to attempt fine-tuning open-source models on certain tool sets, especially MCP tools.
prophesi•4h ago
LM Studio isn't FOSS though.
I did enjoy hooking up OpenWebUI to Firefox's experimental AI Chatbot (set browser.ml.chat.hideLocalhost to false and browser.ml.chat.provider to localhost:${openwebui-port}).
zackify•4h ago
Get the RTX Pro 6000 for $8.5k with double the bandwidth. It will be way better.
tymscar•48m ago
The whole point of spending that much money on them is to run massive models, like the full R1, which the Pro 6000 can't.
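Rough arithmetic on why (numbers are approximate):

```python
params_b = 671         # DeepSeek R1's total parameter count, in billions
bytes_per_param = 0.5  # ~4-bit quantization
weights_gb = params_b * bytes_per_param
print(weights_gb)      # ~335 GB of weights alone vs 96 GB of VRAM,
                       # before any KV cache or activation overhead
```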
t1amat•6m ago
If the primary use case is input heavy, which is true of agentic tools, there’s a world where partial GPU offload with many channels of DDR5 system RAM leads to an overall better experience. A good GPU will process input many times faster, and with good RAM you might end up with decent output speed still. Seems like that would come in close to $12k?
And there would be no competition for models that do fit entirely inside that VRAM, for example Qwen3 32B.
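Back-of-envelope for the partial-offload case (every number here is an assumption, not a benchmark):

```python
bandwidth_gbs = 300      # e.g. roughly 8 channels of DDR5-4800
active_params_b = 37     # R1 is MoE: ~37B parameters active per token
bytes_per_param = 0.5    # ~4-bit quantization
gb_per_token = active_params_b * bytes_per_param
print(bandwidth_gbs / gb_per_token)  # ~16 tokens/s decode, before the GPU
                                     # speeds up input-side processing
```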