frontpage.

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
1•tosh•4m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
1•onurkanbkrc•5m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•5m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•9m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•11m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•11m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•11m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•12m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•13m ago•1 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•15m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•17m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•20m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•20m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•20m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•23m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•26m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•29m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•29m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•31m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•31m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•35m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
3•chartscout•37m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•40m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•41m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•46m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•48m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•51m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•51m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•52m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•57m ago•0 comments

Cline and LM Studio: the local coding stack with Qwen3 Coder 30B

https://cline.bot/blog/local-models
80•Terretta•5mo ago

Comments

BoredPositron•5mo ago
I have to admit I was sceptical about the 30B benchmarks, but after testing it over the weekend it's pretty good. It needs more help with architecture-related questions, but for well-defined coding tasks (for me, primarily Python) it's on par with the commercial models.
johnisgood•5mo ago
Hardware requirements?

"What you need" only includes software requirements.

jszymborski•5mo ago
30B runs at a reasonable speed on my desktop, which has an RTX 2080 (8 GB VRAM) and 32 GB of RAM.
DrAwdeOccarim•5mo ago
The author says 36 GB of unified RAM in the article. I run an M3 Pro with the same memory and LM Studio daily, with various models up to the 30B-parameter one listed, and it flies. I can't tell my OpenAI chats apart from the local ones, aside from modern context, though I have a Puppeteer MCP which works well for web search and site-reading.
Havoc•5mo ago
A 30B-class model should run on a consumer 24 GB card when quantised, though you'd need a pretty aggressive quant to make room for context. I don't think you'll get the full 256k context though.

So about 700 bucks for a 3090 on eBay.

magicalhippo•5mo ago
I have a 5070 Ti and a 2080 Ti, but I'm running Windows, so roughly 25-26 GB is available. With Flash Attention enabled, I can just about squeeze in Qwen3-Coder-30B-A3B-Instruct-UD-Q4_K_XL from Unsloth with 64k context entirely on the GPUs.

With a 3090 I guess you'd have to reduce context or go for a slightly more aggressive quantization level.

Summarizing llama-arch.cpp, which is roughly 40k tokens, I get ~50 tok/sec generation speed and ~14 seconds to first token.

For short prompts I get more like ~90 tok/sec and <1 sec to first token.
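
If anyone wants to sanity-check where the memory goes, here is rough back-of-envelope arithmetic (the layer/head numbers are my assumptions for a 30B-class GQA model, not official specs; swap in the real config values):

    fn main() {
        // Weights: total parameters x bits per weight (Q4_K averages a bit
        // over 4 bits once scales are included -- assumed value).
        let params: f64 = 30.5e9;
        let bits_per_weight = 4.5;
        let weight_gb = params * bits_per_weight / 8.0 / 1e9;

        // KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * bytes.
        // The layer/head numbers below are assumptions for illustration only.
        let (layers, kv_heads, head_dim, bytes_per_elem) = (48.0, 4.0, 128.0, 2.0);
        let context_tokens = 65536.0;
        let kv_gb = 2.0 * layers * kv_heads * head_dim * bytes_per_elem * context_tokens / 1e9;

        // With these assumed numbers: ~17 GB of weights plus ~13 GB of fp16 KV
        // cache at 64k context, which is why a single 24 GB card needs a smaller
        // quant or a quantized KV cache to fit long contexts.
        println!("weights ~{weight_gb:.1} GB, KV cache @ 64k ~{kv_gb:.1} GB");
    }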

thecolorblue•5mo ago
I am running it on an M1 Max.
jasonjmcghee•5mo ago
I took the time to build an agent from scratch in Rust, copying a lot of ideas from Claude Code, and Qwen3 Coder 30B (3.3B active parameters) does really well with it. Replicating the str_replace / text editor tools, the bash tool, and the todo list, plus a bit of prompt engineering, goes really far.

I didn't do anything fancy, and found it did much better than my experience with Codex CLI and similar in quality to Claude Code if I used Sonnet or Opus.

Honestly, the CLI stuff was the hardest part, but I chose not to use something like crossterm.
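
To give a sense of how small the tools are, a str_replace-style edit can be as little as something like this (rough sketch, not my actual code; names and messages are just illustrative):

    use std::fs;

    // Minimal str_replace-style edit tool: replace exactly one occurrence of
    // `old` with `new` in the file at `path`. Refusing zero or multiple matches
    // gives the model an actionable error instead of a silent bad edit.
    fn str_replace(path: &str, old: &str, new: &str) -> Result<String, String> {
        let content = fs::read_to_string(path).map_err(|e| e.to_string())?;
        match content.matches(old).count() {
            0 => Err(format!("no match for the given text in {path}")),
            1 => {
                fs::write(path, content.replacen(old, new, 1)).map_err(|e| e.to_string())?;
                Ok(format!("edited {path}"))
            }
            n => Err(format!("{n} matches in {path}; include more surrounding context")),
        }
    }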

ptrj_•5mo ago
How have you found the current experience of (async) networking in Rust? This is something which is stupidly easy out-of-the-box in Python -- semi-seriously, async/await in Python was _made_ for interacting w/ a chat completions/messages API.

(As an aside, my "ideal" language mix would be a pairing of Rust with Python, though the PyO3 interface could be improved.)

Would also love to learn more about your Rust agent + Qwen3!

jasonjmcghee•5mo ago
I would pick Rust over Python for async every time, if that's the only consideration.

In Python there are hidden sharp edges, and depending on which dependencies you use, you can hit deadlocks in production without ever knowing you were in danger.

Rust has traits to protect against this. Async in Rust is great.

I'd do something like:

    let (tx, rx) = std::sync::mpsc::channel();
    std::thread::spawn(move || {
        // blocking request on a dedicated thread
        let response = reqwest::blocking::get(url).unwrap();
        tx.send(response.text().unwrap()).unwrap();
    });

Or

    let (tx, mut rx) = tokio::sync::mpsc::channel(100);
    tokio::spawn(async move {
        // non-blocking request on the tokio runtime
        let response = client.get(url).send().await;
        tx.send(response).await.unwrap();
    });
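
And for the chat-completions case specifically, a one-shot async call against a local OpenAI-compatible server is about this much (sketch; assumes LM Studio's default localhost:1234 endpoint and a placeholder model name):

    use reqwest::Client;
    use serde_json::{json, Value};

    // Single async chat-completions request against a local OpenAI-compatible
    // endpoint. Needs tokio, reqwest (with the "json" feature) and serde_json.
    #[tokio::main]
    async fn main() -> Result<(), reqwest::Error> {
        let body = json!({
            "model": "qwen3-coder-30b", // placeholder; use whatever name the server exposes
            "messages": [{"role": "user", "content": "Explain mpsc channels in one sentence."}]
        });
        let resp: Value = Client::new()
            .post("http://localhost:1234/v1/chat/completions")
            .json(&body)
            .send()
            .await?
            .json()
            .await?;
        println!("{}", resp["choices"][0]["message"]["content"].as_str().unwrap_or(""));
        Ok(())
    }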

ptrj_•5mo ago
> In python there are hidden sharp edges and depending on what dependencies you use you can get into deadlocks in production without ever knowing you were in danger.

I've heard of deadlocks when using aiohttp or maybe httpx (e.g. due to hidden async-related globals), but I've never managed to get any system based on asyncio + concurrent.futures + urllib (i.e. stdlib-only) to deadlock myself, including with some mix of asyncio and threading locks.

thecolorblue•5mo ago
I just ran a test giving the same prompt to Claude, Gemini, Grok, and Qwen3 Coder running locally. Qwen did great by last year's standards and was very useful for building out boilerplate code. That said, if you looked at the code side by side with the cloud-hosted models, I don't think anyone would pick Qwen.

If you have 32 GB of memory you're not using, it's worth running for small tasks. Otherwise, I'd stick with a cloud-hosted model.

blackoil•5mo ago
That should remain true for the foreseeable future. A 30B model can't beat a 300B one, and running a 300B model locally is prohibitively expensive. By the time that becomes feasible, the cloud will have moved to models 10x larger again.
dcreater•5mo ago
Please share the results
apitman•5mo ago
At 4-bit quantization the weights only take half the RAM. You need a good chunk for context as well, but in my limited testing Qwen3-30B ran well on a single RTX 3090 (24 GB VRAM).
wendythehacker•5mo ago
Cline seems to have some security vulnerabilities that aren't being addressed, e.g. https://embracethered.com/blog/posts/2025/cline-vulnerable-t...

Which raises the question of long-term support, etc...

hiatus•5mo ago
This person keeps banging the drum of agents running on untrusted inputs doing unexpected things. The proof of concept doesn't prove anything and doesn't even have working code. It's not clear why this is classed as a markdown rendering bug when it appears Cline is calling out to a remote server with the contents of an env file as parameters in a URL.

edit: are you the author? You seem to post a lot from that blog and the blog author's other accounts.

dzikibaz•5mo ago
Open-weights models are catching up and are now viable for many tasks.

Keep in mind that closed, proprietary models:

1) Use your data internally for training, analytics, and more - because "the data is the moat"

2) Are out of your control - one day something might work, another day it might fail because of a model update, a new "internal" system prompt, or a new guardrail that simply blocks your task

3) Are built on the "biggest intellectual property theft" of this century, so they should be open and free ;-)

nurettin•5mo ago
I tried Qwen Code yesterday. I don't recommend it for code editing unless you've committed your code first; it destroyed a lot of files in just 10 minutes.
dcreater•5mo ago
Why do I feel like there's a plug for LM Studio baked in here?