frontpage.

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•1m ago•0 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•4m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•7m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•8m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•8m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•9m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•9m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•10m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•11m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•14m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•18m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•18m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•23m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•24m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•25m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•28m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•30m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•30m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•30m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•31m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•32m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•34m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•37m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•39m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•39m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•39m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•42m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•45m ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•48m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
4•Tehnix•48m ago•1 comment

Show HN: A private, flat monthly subscription for open-source LLMs

https://synthetic.new/newsletter/entries/subscriptions
31•reissbaker•5mo ago
Hey HN! We've run our privacy-focused open-source inference company for a while now, and we're launching a flat monthly subscription similar to Anthropic's. It should work with Cline, Roo, KiloCode, Aider, etc — any OpenAI-compatible API client should do. The rate limits at every tier are higher than the Claude rate limits, so even if you prefer using Claude it can be a helpful backup for when you're rate limited, for a pretty low price. Let me know if you have any feedback!

Comments

logicprog•5mo ago
I was literally just wishing there was something like this; this is perfect! Do you do prompt caching?
reissbaker•5mo ago
Aw thanks! We don't currently, but from a cost perspective as a user it shouldn't matter much since it's all bundled into the same subscription (we rate-limit by requests, not by tokens — our request rate limits are set to "higher than the amount of messages per hour that Claude Code promises", haha). We might at some point just to save GPUs though!
logicprog•5mo ago
Yeah I wasn't worried so much about costs to me, as sustainability of your own prices — don't want to run into a "we're lowering quotas" situation like CC did :P
reissbaker•5mo ago
Lol fair! I think we're safe for now; our most popular model (and my personal favorite coding model) is GLM-4.5, which fits on a ~relatively small node compared to the rumored sizes of Anthropic's models. We can throw a lot of tokens at it before running into issues — it's kind of nice to launch without prompt caching, since it means if we're flying too close to the sun on tokens we still have some pretty large levers left to pull on the infra side before needing to do anything drastic with rate limits.
logicprog•5mo ago
> I think we're safe for now; our most popular model (and my personal favorite coding model) is GLM-4.5,

That's funny, that's my favorite coding model as well!

> the rumored sizes of Anthropic's models

Yeah. I've long had a hypothesis that their models are, like, average sized for a SOTA model, but fully dense, like that old Llama 3.1 405B model, and that's why their per-token inference costs are insane compared to the competition.

> it's kind of nice to launch without prompt caching, since it means if we're flying too close to the sun on tokens we still have some pretty large levers left to pull on the infra side before needing to do anything drastic with rate limits.

That makes sense.

I'm poor as dirt, and my job actually forbids AI code in the main codebase, so I can't justify even a $20 per month subscription right now (especially when, for experimenting with agentic coding, Qwen Code is currently free (if shitty)), but when or if it becomes financially responsible, you will be at the very top of my list.

reissbaker•5mo ago
<3 thank you!
rationably•5mo ago
Do you plan to offer high-quality FIM models in the bundle? It would be handy for performing autocompletion locally, say via Qwen3-Coder.
reissbaker•5mo ago
Interesting! Very open to the idea. What open-source fill-in-the-middle models are good right now? I've stayed on top of the open-source primary coding LLMs, but haven't been following the open-source FIM ones.
rationably•5mo ago
New Qwen3 or older Qwen2.5 in larger sizes would be great.
ykjs•5mo ago
Can this be provided as an API?
reissbaker•5mo ago
Yes! We have a standard OpenAI-compatible API, and we don't restrict subscriptions from using it (unlike Anthropic, where API keys are billed differently unless you're using Claude Code directly, or in a tool that wraps Claude Code).
paool•5mo ago
How would I point to your API for use in a Mastra AI agent?
reissbaker•5mo ago
I'm not deeply familiar with Mastra, but reading their docs, it looks like they use the Vercel AI SDK — which is great, since Vercel's AI SDK can work with any OpenAI-compatible API, including ours. All you need to do is set a custom API base URL; in our case, that's https://api.synthetic.new/v1

Then just plug in your Synthetic API key, and you should be able to use any supported model. For example, to use GLM-4.5, you'd pass the following model string: "hf:zai-org/GLM-4.5"
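
Putting that together, a minimal sketch with the AI SDK's OpenAI-compatible provider (untested as written; the env var name is just a placeholder):

    import { createOpenAI } from "@ai-sdk/openai";
    import { generateText } from "ai";

    // Point the AI SDK's OpenAI-compatible provider at our base URL.
    const synthetic = createOpenAI({
      baseURL: "https://api.synthetic.new/v1",
      apiKey: process.env.SYNTHETIC_API_KEY, // your Synthetic API key
    });

    // Any supported model string works here, e.g. GLM-4.5:
    const { text } = await generateText({
      model: synthetic("hf:zai-org/GLM-4.5"),
      prompt: "Say hello in one sentence.",
    });
    console.log(text);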

The AI SDK docs are here for using custom base URLs: https://ai-sdk.dev/docs/ai-sdk-core/provider-management

You can also join our Discord if you need help! https://synthetic.new/discord should redirect you to our Discord server :)

cofob_•5mo ago
Cool!

How are messages counted? For example, in Cursor, one request is 25 tool calls. Does 100 messages in a subscription here mean 100 tool calls or 100 requests each with 25 tool calls?

When it comes to privacy, there are also some questions. The policy says requests can only be used for debugging purposes, but it later mentions a license to use requests to improve the platform, which could mean they're used for more than just debugging.

reissbaker•5mo ago
Oh to be clear, the API prompts/completions can't be stored longer than 14 days or used for anything other than debugging — the data retention section takes priority over everything else. I believe the other requests mentioned refer to general web traffic requests and web UI data. Thank you for the great question!

For requests, it depends on the agent framework to a certain extent: we just count API requests. For frameworks that support parallel tool calls, assuming they're using the standard OpenAI parallel tool call API, the entire parallel batch only counts as one request, since it only generated a single API request. I don't know exactly how Cursor structures it, but I'd be surprised if they were making a separate API request per tool call — I assume they're using the normal parallel tool call API to send all the tool calls in a single batch, which takes only one request of your quota in the rate limit.
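
For illustration, here's roughly what one parallel tool call round trip looks like with the standard OpenAI client (a sketch; the tool schemas are made up):

    import OpenAI from "openai";

    const client = new OpenAI({
      baseURL: "https://api.synthetic.new/v1",
      apiKey: process.env.SYNTHETIC_API_KEY,
    });

    // One API request, even if the model decides to call several tools.
    const completion = await client.chat.completions.create({
      model: "hf:zai-org/GLM-4.5",
      messages: [{ role: "user", content: "Weather and local time in Oslo?" }],
      tools: [
        {
          type: "function",
          function: {
            name: "get_weather",
            parameters: { type: "object", properties: { city: { type: "string" } } },
          },
        },
        {
          type: "function",
          function: {
            name: "get_time",
            parameters: { type: "object", properties: { city: { type: "string" } } },
          },
        },
      ],
    });

    // message.tool_calls may hold several entries, but this whole exchange
    // was a single HTTP request, so it counts once against the quota.
    console.log(completion.choices[0].message.tool_calls?.length);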

jml78•5mo ago
I currently use Cerebras for Qwen3. One of the things I like is its speed (the TPM limit is rough). I'm curious: how fast is Qwen3 on your platform, and what quantization are you running for your models?
reissbaker•5mo ago
I'm on plane wifi right now but I'll benchmark later today — when I benchmarked GLM-4.5, I could get 150-200 tps in the Bay Area, California. Qwen3 is probably somewhat lower TBH. We have an open-source coding agent that includes a TPS benchmarker that works with any OpenAI-compatible API, including ours: https://github.com/synthetic-lab/octofriend

To run the TPS benchmark, just run:

    octo bench tps
All it does is ask the model to write a long story without making tool calls (although we do send the tool definitions over, to accurately benchmark differences in tool call serialization/parsing). It usually consumes a little over 1k tokens so it's fairly cheap to run against different usage-based APIs (and only consumes a single request for subscription APIs that rate limit by request).
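
If you'd rather sanity-check it yourself without installing octofriend, a crude measurement against any OpenAI-compatible endpoint looks something like this (a sketch; it times the whole request, including time-to-first-token, so it'll read slightly low):

    // Rough tokens-per-second estimate via a single non-streaming request.
    const start = Date.now();
    const res = await fetch("https://api.synthetic.new/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.SYNTHETIC_API_KEY}`,
      },
      body: JSON.stringify({
        model: "hf:zai-org/GLM-4.5",
        messages: [{ role: "user", content: "Write a long story." }],
        max_tokens: 1024,
      }),
    });
    const json = await res.json();
    const elapsed = (Date.now() - start) / 1000;

    // usage.completion_tokens is reported by OpenAI-compatible servers.
    console.log((json.usage.completion_tokens / elapsed).toFixed(1), "tokens/sec");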

Edit: forgot to add — for Qwen3 everything should be running in FP8.

reissbaker•5mo ago
Just tried benchmarking from Mexico City, where I'm at for a wedding — looks like 130tps for Qwen3 Coder 480B here.
whs•5mo ago
I signed up; feels like this is something that should've existed long ago.

Your privacy policy isn't good for a privacy-focused provider though. You shouldn't have the rights to use my personal information. The use of Google Tag Manager also doesn't inspire confidence, especially on LLM pages, where you might "accidentally" install a user-monitoring script and the prompts get logged. I'd suggest looking at how Kagi does marketing to privacy-conscious customers.

reissbaker•5mo ago
This is good feedback, thank you. We use Google only to track ad conversions, and we use a cookie page to prevent those scripts from even running until people give consent on the cookie form, but I agree it's not ideal and I've kind of hated having it. I'll see what I can do about the privacy policy — thank you for the reference to Kagi!
lelele•5mo ago
I've taken a look. Interesting, but you don't specify which payment methods you accept, and your website lacks a contact form for asking that or anything else.