frontpage.

What breaks in cross-border healthcare coordination?

1•abhay1633•12s ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•2m ago•0 comments

Show HN: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•3m ago•0 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•4m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•4m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•5m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•5m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•7m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
5•derriz•7m ago•1 comment

AI Skills Marketplace

https://skly.ai
1•briannezhad•7m ago•1 comment

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•7m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•8m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•11m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•12m ago•0 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•13m ago•1 comment

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•14m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•16m ago•1 comment

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•18m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•18m ago•0 comments

Jeremy Wade's Mighty Rivers

https://www.youtube.com/playlist?list=PLyOro6vMGsP_xkW6FXxsaeHUkD5e-9AUa
1•saikatsg•18m ago•0 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
2•sam256•21m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•21m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•22m ago•1 comment

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

2•amichail•24m ago•1 comment

Show HN: Convert your articles into videos in one click

https://vidinie.com/
3•kositheastro•27m ago•1 comment

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•27m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•30m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•30m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•31m ago•1 comment

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•31m ago•0 comments

How to Migrate from OpenAI to Cerebrium for Cost-Predictable AI Inference

https://ritza.co/articles/migrate-from-openai-to-cerebrium-with-vllm-for-predictable-inference/
48•sixhobbits•6mo ago

Comments

amelius•6mo ago
How to move from one service that is out of your control to another service that is out of your control.
anonymousDan•6mo ago
I don't understand - what do they mean when they say you can run things on your own infrastructure then?
amelius•6mo ago
They say "serverless infrastructure", which is something else.
dist-epoch•6mo ago
Your own infrastructure in the same sense as your own AWS EC2 machines.
klabb3•6mo ago
> your own AWS EC2 machines

Not disagreeing, but this is quite an expression.

Incipient•6mo ago
Having the ABILITY to move seamlessly and without significant cost is absolutely critical.

It gives you flexibility if the provider isn't keeping pace with the market and it prevents the provider from jacking prices relative to its competitors.

Vendor lock-in is awful. Hypothetically, imagine how stuffed you'd be if your core virtualisation provider jacked prices 500%! You'd be really hurting.

...ohwait.

kristianc•6mo ago
You're not really locked in in any meaningful way currently; you just switch the API you're using. Rather like what's being demonstrated here.
iamlintaoz•6mo ago
Why? Honestly, there are already tons of Model-as-a-Service (MaaS) platforms out there—big names like AWS Bedrock and Azure AI Foundry, plus a bunch of startups like Groq and fireflies.ai. I’m just not seeing what makes Cerebrium stand out from the crowd.
benterix•6mo ago
Well, they are announcing their $8.5m seed round and hope to attract the maximum number of users by giving away $30 in credits.
tomschwiha•6mo ago
The "not optimized" self hosted deployment is 3x slower and costs 34x the price using the cheapest GPU / a weak model.

I don't see the point in self hosting unless you deploy a gpu in your own datacenter where you really have control. But that costs usually more for most use cases.

Incipient•6mo ago
Is there actually some scale magic that allows the 34x cost saving (over 100x when you include performance), or is it just insane investment allowing these companies to heavily subsidise cost to gain market share?
tomschwiha•6mo ago
Calculating without energy costs: the A10 GPU itself costs $3,200. Spread over 3 years of usage, that is $0.002 per minute. From the blog post, the cost per minute is charged at $0.02, so a 10x premium. So even with energy costs, if you can load the GPU at a minimum of 15-20% utilization, self-hosting becomes cheaper. But you need to take care of your own infrastructure.

With larger purchases the GPU prices also drop so that is the scaling logic.
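
A quick sketch of that break-even arithmetic (the $3,200 A10 price and $0.02/min serverless rate are from the numbers above; the power draw and electricity price are illustrative assumptions):

    # Break-even utilization: self-hosted A10 vs ~$0.02/min serverless.
    # GPU price and serverless rate come from the comment above; power
    # draw and electricity price are assumptions, not measured values.
    GPU_PRICE_USD = 3200.0
    LIFETIME_MIN = 3 * 365 * 24 * 60        # 3 years, in minutes

    POWER_KW = 0.15                         # A10 is ~150 W under load (assumed)
    USD_PER_KWH = 0.15                      # assumed electricity price

    SERVERLESS_USD_PER_MIN = 0.02

    hw_per_min = GPU_PRICE_USD / LIFETIME_MIN        # ~$0.002/min
    energy_per_min = POWER_KW * USD_PER_KWH / 60     # ~$0.0004/min
    self_hosted_per_min = hw_per_min + energy_per_min

    # The self-hosted card is paid for 24/7, serverless only per minute used,
    # so break-even is where utilization * serverless rate = self-hosted rate.
    break_even = self_hosted_per_min / SERVERLESS_USD_PER_MIN
    print(f"self-hosted ~${self_hosted_per_min:.4f}/min, "
          f"break-even utilization ~{break_even:.0%}")  # ~12% here

That lands slightly below the 15-20% quoted above; realistic overheads (cooling, networking, your own time) push it up.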

ToucanLoucan•6mo ago
> I don't see the point in self-hosting unless you deploy a GPU in your own datacenter where you really have control. But that usually costs more for most use cases.

Not wanting to send tons of private data to a company whose foundation is exploiting data it didn't have permission to use?

dabedee•6mo ago
This isn't really about cost savings; it's about control. Self-hosting makes sense when you need data privacy, custom fine-tuning, specialized models, or predictable costs at scale. For most use cases requiring GPT-4o-mini quality, you'll pay more for self-hosting until you reach significant volume.
ivape•6mo ago
I’m trying to figure out the cost predictability angle here. It seems like they still have a cost per input/output token, so how is it any different? Also, do I have to assume one GPU instance will scale automatically as traffic goes up?

LLM pricing is pretty intense if you’re using anything beyond an 8B model, at least that’s what I’m noticing on OpenRouter. 3-4 calls can approach $1 with bigger models, and certainly on frontier ones.

jameswhitford•6mo ago
Serverless setups (like Cerebrium) charge per second the model is running; it's not token-based.
ivape•6mo ago
Ah you’re right, I misread the OpenAI/Cerebrium pricing config variables.
BoorishBears•6mo ago
You're still paying more than the GPU typically costs on an hourly basis to take advantage of their per-second billing... and if you don't have enough utilization to saturate an hourly rental then your users are going to be constantly running into cold starts which tend to be brutal for larger models.

Their A100 80GB is going for more than what I pay to rent H100s: if you really want to save money vs. major providers, getting the cheapest hourly rentals possible is your only hope.

I think people vastly underestimate how much companies like OpenAI can do with inference efficiency between large nodes, large batch sizes, and hyper optimized inference stacks.

ivape•6mo ago
I'll echo one of my original concerns, which is how is this supposed to scale? Am I responsible for that?
BoorishBears•6mo ago
How is what supposed to scale?

If you mean the serverless GPU offering, typically you set a cap for how many requests a single instance is meant to serve. Past that cap they'll spin up more instances.

But if you mean rentals, scaling is on you. With LLM inference there's a regime where the model responses will slow down on a per-user basis while overall throughput goes up, but eventually you'll run out of headroom and need more servers.

Another reason why generally speaking it's hard to compete with major providers on cost effectiveness.

ivape•6mo ago
> Past that cap they'll spin up more instances.

Thank you, this is what I wanted to know.

> typically you set a cap for how many requests a single instance is meant to serve

If this is on us, then we'd have to make sure whatever caps we set beat API providers. I don't know how easy that cap is to figure out.

BoorishBears•6mo ago
If you're making the effort-cost tradeoff like this, you typically choose a model, test a few inference stacks with prompts that are representative lengths for your use case, then benchmark.

To benchmark you identify a maximum time to first token your users will accept, and minimum tokens per second they'll accept, then test how many concurrent requests you can handle before you exceed either limit.

I can tell you, in my case the only reason why the pricing is somewhat competitive for self-hosting is that I'm aggressively seeking cheap rentals, have a use-case that requires very long prompts with few cache hits, and I've used extensive (and expensive) post-training to deploy smaller models than I'd otherwise need.
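
For what it's worth, a minimal sketch of that benchmark loop against an OpenAI-compatible endpoint such as vLLM; the base URL, model name, prompt, and thresholds below are placeholder assumptions, not recommendations:

    # Find the concurrency at which worst-case TTFT or tokens/sec breaks
    # your budget. Base URL, model name, and limits are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    MAX_TTFT_S = 1.0       # worst acceptable time to first token
    MIN_TOK_PER_S = 20.0   # slowest acceptable decode speed per user
    PROMPT = "..."         # use prompts of representative length

    def one_request():
        """Stream one completion; return (ttft_seconds, tokens_per_sec)."""
        start = time.monotonic()
        first, n_chunks = None, 0
        stream = client.chat.completions.create(
            model="my-model",
            messages=[{"role": "user", "content": PROMPT}],
            stream=True,
        )
        for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                first = first or time.monotonic()
                n_chunks += 1  # one chunk is roughly one token
        if first is None:
            return float("inf"), 0.0
        return first - start, n_chunks / max(time.monotonic() - first, 1e-9)

    for c in (1, 2, 4, 8, 16, 32):
        with ThreadPoolExecutor(max_workers=c) as pool:
            results = list(pool.map(lambda _: one_request(), range(c)))
        worst_ttft = max(t for t, _ in results)
        worst_tps = min(s for _, s in results)
        print(f"c={c}: worst TTFT {worst_ttft:.2f}s, {worst_tps:.1f} tok/s")
        if worst_ttft > MAX_TTFT_S or worst_tps < MIN_TOK_PER_S:
            break  # the previous level is your per-instance cap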

benterix•6mo ago
To people from Cerebrium: why should I use your services when Runpod is cheaper? I mean, why did you decide to set your prices higher than an established company with a significant user base?
Sanzig•6mo ago
(Not affiliated with Cerebrium; I just looked into this a bit a little while back.)

Runpod outsources much of their infrastructure to small players that own GPUs. They have recently added some requirements on security and reliability (e.g., some level of security audit such as SOC 2, hosting in a real DC, locked racks), but fundamentally they are leaning on small shops that slap some GPUs in a server at a colocation facility. That would personally make me nervous about any sensitive workloads.

My impression is that Cerebrium either owns their own GPU servers or they're outsourcing to one of the big players. They certainly don't have the "partner program" advertised on their site like Runpod does.

za_mike157•6mo ago
Hey! Founder of Cerebrium here.

- Runpod is one of the cheapest, but it comes at the price of reliability (critical for businesses).
- We have better cold start performance, with something special launching soon here.
- Iterating on your application using CPUs/GPUs in the cloud takes just 2–10 seconds, compared to several minutes with Runpod due to Docker push/pull.
- We allow you to deploy in multiple regions globally for lower latency and data residency compliance.
- We provide a lot of software abstractions (fire-and-forget jobs, websockets, batching, etc.), whereas Runpod just deploys your Docker image.
- SOC 2 and GDPR compliant.

With that all being said - we are working on optimisations to bring down pricing

benterix•6mo ago
Thanks, makes sense.
Incipient•6mo ago
Is this article just saying OpenAI is orders of magnitude cheaper than Cerebrium?
jameswhitford•6mo ago
It's a demo project using the free-tier hardware from Cerebrium, demonstrating how to migrate from OpenAI with a few lines of code. The cost is never going to beat OpenAI on an A10; there are more powerful options available.
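
The migration itself is mostly a base-URL swap, since vLLM exposes an OpenAI-compatible API. A minimal sketch; the endpoint URL, token, and model name are placeholders rather than Cerebrium's actual values:

    # Point the standard OpenAI client at a self-hosted vLLM endpoint.
    # base_url, api_key, and model below are placeholders.
    from openai import OpenAI

    # client = OpenAI(api_key="sk-...")           # before: OpenAI-hosted
    client = OpenAI(
        base_url="https://your-deployment.example.com/v1",  # after: vLLM
        api_key="YOUR_DEPLOYMENT_TOKEN",
    )

    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",  # the model you deployed
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)
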
gordianlabs•6mo ago
Do you forecast costs or just provide more visibility?