Show HN: Tomatic – openrouter AI chat interface

https://github.com/fdietze/tomatic
1•manx•2m ago•0 comments

The iPad Is About to Become a Much Better Video Editor

https://petapixel.com/2025/06/23/the-ipad-is-about-to-become-a-much-better-video-editor/
2•fcpguru•2m ago•0 comments

How One Million Chessboards Works

https://eieio.substack.com/p/how-one-million-chessboards-works
2•leonardinius•4m ago•0 comments

Show HN: An AI agent that debugs your LLM app and submits pull requests

https://github.com/Kaizen-agent/kaizen-agent
1•yuto_1192•7m ago•0 comments

Vite 7.0

https://vite.dev/blog/announcing-vite7
2•joshdavham•8m ago•0 comments

Show HN: Joyspace AI Clips – Automatically Create Short Clips from Long Videos

1•joyspace•8m ago•0 comments

AI May Be Underhyped

https://www.barrons.com/articles/chatgpt-gemini-grok-claude-ranked-how-to-use-fb05b409
2•jdmoreira•10m ago•0 comments

Google Cloud Donates A2A to Linux Foundation

https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/
1•mooreds•11m ago•0 comments

Why Single-Tenant Applications Are Better Than Multi-Tenant SaaS

https://fusionauth.io/blog/single-tenant-scaling
1•mooreds•14m ago•0 comments

We Should Be Taking a Minimum of Two Showers a Day in the Summer

https://www.gq.com/story/two-showers-a-day-clean-summer
2•mooreds•14m ago•0 comments

Why HS2 cost so much

https://martinrobbins.substack.com/p/hs2-and-the-slow-decay-of-britain
1•rwmj•18m ago•0 comments

Show HN: An App that turns your phone into a tour guide

https://tripoclock.com/
1•hopeadoli•18m ago•0 comments

Games Run Faster on SteamOS Than Windows 11, Ars Testing Finds

https://games.slashdot.org/story/25/06/25/2034243/games-run-faster-on-steamos-than-windows-11-ars-testing-finds
1•austinallegro•19m ago•0 comments

Show HN: LuxWeather – ad-free pixel art weather site made with asp.net and Htmx

https://luxweather.com/
2•thisislux•19m ago•0 comments

Federated Credential Management (FedCM) API

https://developer.mozilla.org/en-US/docs/Web/API/FedCM_API
1•bpierre•20m ago•0 comments

Use artifacts to visualize and create AI apps without ever writing code

https://support.anthropic.com/en/articles/11649427-use-artifacts-to-visualize-and-create-ai-apps-without-ever-writing-a-line-of-code
1•ianrahman•27m ago•1 comment

Fear of deployments is the largest tech debt

https://www.aviator.co/podcast/charity-majors-fearless-deployments
1•Liriel•28m ago•0 comments

Restmail – sendmail-compatible CLI for Gmail and outlook

https://github.com/tonymet/restmail
2•tonymet•31m ago•0 comments

Shifting Forces: The Evolving Debate Around Dark Energy

https://undark.org/2025/06/25/evolving-debate-dark-energy/
1•EA-3167•32m ago•0 comments

Aaron Sorkin's The Social Network sequel officially in development

https://www.theguardian.com/film/2025/jun/25/aaron-sorkins-the-social-network-sequel
1•jmsflknr•32m ago•0 comments

CUDA Ray Tracing 2x Faster Than RTX: My CUDA Ray Tracing Journey

https://karimsayedre.github.io/RTIOW.html
3•ibobev•35m ago•0 comments

Spatial learning circuitry fluctuates in step with estrous cycle in mice

https://www.thetransmitter.org/neuroendocrinology/spatial-learning-circuitry-fluctuates-in-step-with-estrous-cycle-in-mice/
2•domofutu•38m ago•0 comments

Alijah Arenas on Cybertruck crash: 'Fighting time' to escape

https://www.espn.com/mens-college-basketball/story/_/id/45583696/alijah-arenas-crash-fighting-escape-burning-truck
2•tusslewake•43m ago•1 comment

The Fastest Motorcycle Hearse in the World

https://motorcyclefunerals.com/suzuki-hayabusa
1•cainxinth•44m ago•0 comments

Joey Swoll, gymcreeps and trolling: inside the TikTok workout wars (2023)

https://www.gq-magazine.co.uk/culture/article/joey-swoll-interview-gymcreeps-trolling-tiktok-gym-wars
2•mellosouls•46m ago•0 comments

"Nobody Expected This": Earth's Rotation Will Speed Up in July and August

https://www.iflscience.com/nobody-expected-this-earths-rotation-will-speed-up-in-july-and-august-bucking-the-downward-trend-79757
2•Bluestein•46m ago•0 comments

Show HN: Bringing back Snacklish (via a bunch of prior samples and iteration)

https://github.com/exogen/snacklish
2•exogen•47m ago•0 comments

Nvidia Ruffles Tech Giants with Move into Cloud Computing

https://www.wsj.com/tech/ai/nvidia-dgx-cloud-computing-28c49748
3•bookofjoe•51m ago•1 comment

Matter vs. Force: Why There Are Two Types of Particles

https://www.quantamagazine.org/matter-vs-force-why-there-are-exactly-two-types-of-particles-20250623/
1•kjhughes•53m ago•1 comment

Meta Beats Copyright Suit from Authors over AI Training on Books

https://news.bloomberglaw.com/litigation/meta-beats-copyright-suit-from-authors-over-ai-training-on-books
3•jmsflknr•56m ago•0 comments

LM Studio is now an MCP Host

https://lmstudio.ai/blog/lmstudio-v0.3.17
113•yags•4h ago

Comments

chisleu•4h ago
Just ordered a $12k Mac Studio w/ 512GB of integrated RAM.

Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with Safari.

LM Studio is newish and not a perfect interface yet, but it's fantastic at what it does, which is bringing local LLMs to the masses w/o them having to know much.

There is another project that people should be aware of: https://github.com/exo-explore/exo

Exo is this radically cool tool that automatically clusters all hosts on your network running Exo and uses their combined GPUs for increased throughput.

As in HPC environments, you'll want ultra-fast interconnects, but it's all just IP-based.

dchest•4h ago
I'm using it on a MacBook Air M1 / 8 GB RAM with Qwen3-4B to generate summaries and tags for my vibe-coded Bloomberg Terminal-style RSS reader :-) It works fine (the laptop gets hot and slow, but fine).

Probably should just use llama.cpp server/ollama and not waste a gig of memory on Electron, but I like GUIs.

minimaxir•3h ago
8 GB of RAM with local LLMs in general is iffy: an 8-bit quantized Qwen3-4B is 4.2 GB on disk and likely more in memory. 16 GB is usually the minimum needed to run decent models without resorting to heavy quantization.
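As a rough back-of-envelope (a sketch of the arithmetic, not LM Studio's actual memory accounting): weight memory is roughly parameter count × bits per weight ÷ 8, before KV cache and runtime overhead.

```python
def model_weight_gb(params_billion: float, bits: int) -> float:
    """Rough weight-only memory estimate in GB.

    Ignores KV cache, activations, and runtime overhead, which is
    why real on-disk/in-memory sizes (e.g. 4.2 GB) come out higher.
    """
    return params_billion * 1e9 * bits / 8 / 1e9

# Qwen3-4B at 8-bit quantization: ~4 GB of weights alone,
# which is why it's a tight fit on an 8 GB machine.
print(round(model_weight_gb(4.0, 8), 1))  # 4.0
print(round(model_weight_gb(4.0, 4), 1))  # 2.0
```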
karmakaze•4h ago
Nice. Ironically well suited for non-Apple Intelligence.
incognito124•3h ago
> I'm going to download it with Safari

Oof you were NOT joking

noman-land•2h ago
Safari to download LM Studio. LM Studio to download models. Models to download Firefox.
teaearlgraycold•1h ago
The modern ninite
sneak•3h ago
I already got one of these. I’m spoiled by Claude 4 Opus; local LLMs are slower and lower quality.

I haven’t been using it much. All it has on it is LM Studio, Ollama, and Stats.app.

> Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with Safari.

lol, yup. same.

chisleu•3h ago
Yup, I'm spoiled by Claude 3.7 Sonnet right now. I had to stop using Opus for plan mode in my agent because it is just so expensive. I'm using Gemini 2.5 Pro for that now.

I'm considering ordering one of these today: https://www.newegg.com/p/N82E16816139451?Item=N82E1681613945...

It looks like it will hold 5 GPUs with a single slot open for InfiniBand.

The local models might be lower quality, but they won't be slow! :)

kristopolous•2h ago
The GPUs are the hard thing to find unless you want to pay like a 50% markup.
evo_9•45m ago
I was using Claude 3.7 exclusively for coding, but it sure seems like it got worse suddenly about 2–3 weeks back. It went from writing pretty solid code I had to make only minor changes to, to being completely off the rails: altering files unrelated to my prompt, undoing fixes from the same conversation, reinventing db access, and ignoring coding 'standards' established in the existing codebase. It became so untrustworthy that I finally gave OpenAI o3 a try, and honestly, I was pretty surprised how solid it has been. I've been using o3 since, and I find it generally does exactly what I ask, esp. if you have a well-established project with plenty of code for it to reference.

Just wondering if Claude 3.7 has seemed different lately for anyone else? It was my go-to for several months, and I'm no fan of OpenAI, but o3 has been rock solid.

teaearlgraycold•3h ago
What are you going to do with the LLMs you run?
chisleu•3h ago
Currently I'm using Gemini 2.5 and Claude 3.7 Sonnet for coding tasks.

I'm interested in using models for code generation, but I'm not expecting much in that regard.

I'm planning to attempt fine tuning open source models on certain tool sets, especially MCP tools.

prettyblocks•2h ago
I've been using openwebui and am pretty happy with it. Why do you like lm studio more?
truemotive•2h ago
Open WebUI can leverage the built-in web server in LM Studio, just FYI in case you thought it was primarily a chat interface.
prophesi•2h ago
Not OP, but with LM Studio I get a chat interface out-of-the-box for local models, while with openwebui I'd need to configure it to point to an OpenAI API-compatible server (like LM Studio). It can also help determine which models will work well with your hardware.

LM Studio isn't FOSS though.

I did enjoy hooking up OpenWebUI to Firefox's experimental AI Chatbot. (browser.ml.chat.hideLocalhost to false, browser.ml.chat.provider to localhost:${openwebui-port})
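For anyone wiring up that pointing-at-LM-Studio step: LM Studio's local server speaks the OpenAI chat-completions shape (http://localhost:1234/v1 is its usual default). A minimal stdlib-only sketch of building such a request; the port and model name here are assumptions for illustration:

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# LM Studio's default local endpoint (the port is configurable in the app)
req = chat_request("http://localhost:1234/v1", "qwen3-4b", "hello")
print(req.full_url)  # http://localhost:1234/v1/chat/completions
```

The same request shape is what OpenWebUI sends once you point it at an OpenAI API-compatible server.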

s1mplicissimus•1h ago
i recently tried openwebui but it was so painful to get it to run with a local model. that "first run experience" of lm studio is pretty fire in comparison. can't really talk about actually working with it though, still waiting for the 8GB download
noman-land•2h ago
I love LM Studio. It's a great tool. I'm waiting for another generation of Macbook Pros to do as you did :).
imranq•2h ago
I'd love to host my own LLMs, but I keep getting held back by the quality and affordability of cloud LLMs. Why go local unless there's private data involved?
zackify•2h ago
I love LM Studio, but I'd never waste 12k like that. The memory bandwidth is too low, trust me.

Get the RTX Pro 6000 for 8.5k with double the bandwidth. It will be way better

minimaxir•4h ago
LM Studio has quickly become the best way to run local LLMs on an Apple Silicon Mac: no offense to vllm/ollama and other terminal-based approaches, but LLMs have many levers for tweaking output and sometimes you need a UI to manage it. Now that LM Studio supports MLX models, it's one of the most efficient too.

I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.

nix0n•4h ago
LM Studio is quite good on Windows with Nvidia RTX also.
pzo•3h ago
I just wish they did some facelifting of the UI. Right now it's too colorful for me, with many different shades of similar colors. I wish they'd copy a color palette from Google AI Studio, or from Trae or PyCharm.
chisleu•3h ago
> I'm not bullish on MCP

You gotta help me out. What do you see holding it back?

minimaxir•2h ago
tl;dr the current hype around it is a solution looking for a problem and at a high level, it's just a rebrand of the Tools paradigm.
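For what it's worth, the "rebrand of Tools" framing is easy to see side by side. A sketch comparing an OpenAI-style function tool with the equivalent entry an MCP server returns from tools/list (the weather tool itself is made up for illustration):

```python
# OpenAI-style "tools" entry (the function-calling paradigm)
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Roughly equivalent MCP tool, as listed by a server's tools/list
mcp_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Same name, description, and JSON Schema; mostly the envelope
# (and the client/server transport around it) differs.
assert openai_tool["function"]["parameters"] == mcp_tool["inputSchema"]
```

What MCP adds on top of this shared shape is the standardized server process and wire protocol for discovering and invoking the tools.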
mhast•2h ago
It's "Tools as a service", so it's really trying to make tool calling easier to use.
ijk•16m ago
Near as I can tell it's supposed to make calling other people's tools easier. But I don't want to spin up an entire server to invoke a calculator. So far it seems to make building my own local tools harder, unless there's some guidebook I'm missing.
zackify•2h ago
Ollama doesn’t even have a way to customize the context size per model and persist it. LM studio does :)
Anaphylaxis•15m ago
This isn't true. You can `ollama run {model}`, `/set parameter num_ctx {ctx}`, and then `/save`. It's recommended to `/save {model}:{ctx}` so the setting persists across model updates.
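The same setting can also be baked into a Modelfile (Ollama's config format for derived models; the model name here is illustrative):

```
# Modelfile: persist a larger context window in a derived model
FROM qwen3:4b
PARAMETER num_ctx 8192
```

Then `ollama create qwen3-8k -f Modelfile` gives you a model that always loads with that context size.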
gregorym•4h ago
I use https://ollamac.com/ to run Ollama and it works great. It has MCP support also.
simonw•4h ago
That's clearly your own product (it links to Koroworld in the footer and you've posted about that on Hacker News in the past).

Are you sharing any of your revenue from that $79 license fee with the https://ollama.com/ project that your app builds on top of?

visiondude•4h ago
LM Studio works surprisingly well on an M3 Ultra with 64 GB, running 27B models.

Nice to have a local option, especially for some prompts.

squanchingio•3h ago
It'll be nice to have the MCP servers exposed through LM Studio's OpenAI-like endpoints.
patates•3h ago
What models are you using on LM Studio for what task and with how much memory?

I have a 48GB MacBook Pro, and Gemma3 (one of the abliterated ones) fits my non-code use case perfectly (generating crime stories where the reader tries to guess the killer).

For code, I still call Google to use Gemini.

robbru•1h ago
I've been using the Google Gemma QAT models in 4B, 12B, and 27B with LM Studio on my M1 Max. https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat...
api•3h ago
I wish LM Studio had a pure daemon mode. It's better than ollama in a lot of ways but I'd rather be able to use BoltAI as the UI, as well as use it from Zed and VSCode and aider.

What I like about ollama is that it provides a self-hosted AI provider that can be used by a variety of things. LM Studio has that too, but you have to have the whole big chonky Electron UI running. Its UI is powerful but a lot less nice than e.g. BoltAI for casual use.

SparkyMcUnicorn•3h ago
There's a "headless" checkbox in settings->developer
diggan•32m ago
Still, you need to install and run the AppImage at least once to enable the "lms" CLI, which can be used afterwards. A completely GUI-less installation/use method would be nice too.
b0a04gl•3h ago
claude going MCP over remote kinda normalised the protocol for inference routing. now with LM Studio running as a local MCP host, you can just tunnel it (cloudflared/ngrok), drop a tiny gateway script, and boom, your laptop basically acts like an MCP node in a hybrid mesh. short prompts hit qwen locally, heavier ones go to claude. with the same payload and interface we can actually get multi-host local inference clusters wired together by MCP
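The short-vs-heavy routing described above is just a dispatch decision made in the gateway before the request leaves. A minimal sketch (the endpoints and threshold are assumptions, and a real router would likely count tokens rather than characters):

```python
def pick_endpoint(prompt: str, max_local_chars: int = 2000) -> str:
    """Route short prompts to the local MCP host, heavier ones to a hosted model."""
    if len(prompt) <= max_local_chars:
        return "http://localhost:1234/v1"   # local LM Studio serving qwen
    return "https://api.anthropic.com/v1"   # hosted Claude

print(pick_endpoint("summarize this headline"))  # http://localhost:1234/v1
print(pick_endpoint("x" * 5000))                 # https://api.anthropic.com/v1
```

Because both sides accept the same payload and interface, the caller never has to know which backend actually served the request.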
politelemon•2h ago
The initial experience with LMStudio and MCP doesn't seem to be great, I think their docs could do with a happy path demo for newcomers.

Upon installing, the first model offered is google/gemma-3-12b, which in fairness is pretty decent compared to others.

It's not obvious how to show the right sidebar they're talking about: it's the flask icon, which turns into a collapse icon when you click it.

I set MCP up with Playwright and asked it to read the top headline from HN; it got stuck in an infinite loop of navigating to Hacker News but doing nothing with the output.

I wanted to try it out with a few other models, but figuring out how to download new models isn't obvious either; it turned out to be the search icon. Anyway, other models didn't fare much better; some outright ignored the tools despite having the capacity for 'tool use'.

maxcomperatore•1h ago
good.