
Show HN: I made a weekly editorial on what HN was feeling and building

https://menggg.me/vibes/week-15
1•menggg•4m ago•0 comments

What Is Purisaki Berberine Patches and How It Supports Weight Loss? [pdf]

https://www.fd.ulisboa.pt/wp-content/uploads/formidable/2/PurisakiBerberinePatchesReal1-d3baj.pdf
1•JasperFarmer•4m ago•0 comments

Nvidia Ising – Open AI Models for Quantum Computing

https://www.nvidia.com/en-us/solutions/quantum-computing/ising/
1•fuglede_•5m ago•0 comments

Agent SLOs: Grounding autonomous agents in metrics that matter

https://blog.firetiger.com/agent-slos-grounding-autonomous-agents-in-metrics-that-matter/
1•matsur•7m ago•0 comments

Shouldn't we have an agent.lock file for AI coding agents?

https://srajangupta.substack.com/p/where-is-my-agentlock-file
1•srajan_gupta•7m ago•0 comments

DFlash: Block Diffusion for Flash Speculative Decoding

https://z-lab.ai/projects/dflash/
1•oldfuture•8m ago•0 comments

Teaching AI Agents to Speak Hardware

https://quadric.ai/blog/mcp-ai-coding-assistant
1•tkocmathla•9m ago•0 comments

Delete ChatGPT Atlas Spyware

1•niagznculiau•9m ago•0 comments

Nokia Automated Indoor Design and Validation

https://www.nokia.com/blog/revolutionizing-in-building-connectivity-design-nokia-automated-indoor...
1•salkahfi•9m ago•0 comments

Critical Atlantic current significantly more likely to collapse than thought

https://www.theguardian.com/environment/2026/apr/15/critical-atlantic-current-significantly-more-...
1•tempestn•11m ago•0 comments

Who Tried Hermes Agent?

https://github.com/aipoch/medical-research-skills
1•The_resa•15m ago•1 comments

Allbirds shoe company moving to AI infra is the top

https://www.theregister.com/2026/04/15/allbirds_ai_longislandicedtea_blockchain_lolol/
1•abdelhousni•21m ago•1 comments

Radical Pie: Professional Equation Editor for Windows 10/11

https://radicalpie.com/
1•teleforce•22m ago•0 comments

Ben Lerner's Big Feelings

https://www.vulture.com/article/ben-lerner-transcription-interview.html
1•prismatic•23m ago•0 comments

Cirrus CI is shutting down: upgrade to a scalable, AI-ready alternative

https://circleci.com/blog/cirrus-ci-alternative/
1•levlaz•23m ago•0 comments

Agtop: Btop but for Your Agents

https://github.com/ldegio/agtop
1•handfuloflight•24m ago•0 comments

Germany suspends military approval for long stays abroad for men under 45

https://www.bbc.com/news/articles/ckgx103wkl1o
1•timokoesters•26m ago•1 comments

Period property drama as Shakespeare's London house discovered

https://www.thetimes.com/uk/history/article/shakespeare-london-house-discovered-blackfriars-nc7jz...
1•petethomas•27m ago•0 comments

Kobo Offline

https://kobo-offline.virgulilla.com
1•Curiositry•30m ago•0 comments

I parsed the Voynich Manuscript as a 17x73 deterministic data matrix

https://zenodo.org/records/19574356
1•Oi_DataArch•32m ago•0 comments

Free Space Optical Link Utilizing a Modulated Retro-Reflector [pdf]

https://ntrs.nasa.gov/api/citations/20200000354/downloads/20200000354.pdf
1•oldfuture•33m ago•0 comments

Tell HN: Qwen Free Tier Is Discontinued

1•dhruv_ahuja•34m ago•0 comments

Discord CLI Client for Scripting

https://github.com/mrarfarf/discord-cli
2•mrarfarf•35m ago•0 comments

John Earnest, array language audio+graphics hacker

https://alexalejandre.com/programming/interview-with-john-earnest/
3•vi_sextus_vi•40m ago•1 comments

Ask HN: SeedLegals Partnerships in London, worth it?

2•pain_perdu•41m ago•0 comments

Slowburn: Looking Through AMD Platform Configuration Blobs Infrastructure

https://swarm.ptsecurity.com/slowburn-looking-through-amd-platform-configuration-blobs-infrastruc...
1•latchkey•43m ago•0 comments

Corner-Case RCU Implementations

https://people.kernel.org/paulmck/stupid-rcu-tricks-corner-case-rcu-implementations
1•mfrw•45m ago•0 comments

Why Anthropic and OpenAI are locking up their latest models

https://www.economist.com/business/2026/04/15/why-anthropic-and-openai-are-locking-up-their-lates...
2•petethomas•45m ago•0 comments

Routines in Claude Code

https://claude.com/blog/introducing-routines-in-claude-code
2•taubek•47m ago•0 comments

Rippl – Performance marketing inside WhatsApp, Telegram, and Discord communities

https://apps.apple.com/gb/app/rippl-by-mrvl/id6761179465
2•SupaMRVL•50m ago•0 comments

Darkbloom – Private inference on idle Macs

https://darkbloom.dev
87•twapi•1h ago

Comments

DeathArrow•1h ago
Why only Macs? If we think of all PCs and mobile phones running idle, the potential is much larger.
stryakr•1h ago
Simple first target; PCs have more variability.
btown•1h ago
From the paper: https://github.com/Layr-Labs/d-inference/blob/master/papers/...

> Apple’s attestation servers will only generate the FreshnessCode for a genuine device that checks in via APNs. A software-only adversary cannot forge the MDA certificate chain (Assumption 3). Combined with SIP enforcement (preventing binary replacement) and Secure Boot (preventing bootloader tampering), this provides strong evidence that the signing key resides in genuine Apple hardware.

nl•1h ago
They use the Apple TEE which they claim also protects GPU memory (I wasn't aware of this).

NVidia data center GPUs have a similar path, but not their consumer ones. Not sure about the NVidia Spark.

It's possible AMD Strix Halo can do this, but unlikely for any other PC based GPU environments.

MrDrMcCoy•55m ago
Epyc has that VM encrypted memory thing, which comes pretty close. It does raise an interesting question, though: would a PCIe card passed through to a VM be able to DMA access the memory of neighboring devices?
rvz•1h ago
Should have called it “Inferanet” with this idea.

Anyway, this looks like a great idea and might have a chance at solving the economics of running nodes for cheap inference and getting paid for it.

nl•1h ago
They use the TEE to check that the model and code is untampered with. That's a good, valid approach and should work (I've done similar things on AWS with their TEE)

The key question here is how they avoid the outside computer being able to view the memory of the internal process:

> An in-process inference design that embeds the inference engine directly in a hardened process, eliminating all inter-process communication channels that could be observed, with optional hypervisor memory isolation that extends protection from software-enforced to hardware-enforced via ARM Stage 2 page tables at zero performance cost.[1]

I was under the impression this wasn't possible if you are using the GPU. I could be misled on this though.

[1] https://github.com/Layr-Labs/d-inference/blob/master/papers/...

flockonus•1h ago
While they do make this argument, realistically anyone sending their prompt/data to an external server should assume there will be some level of retention.

More specifically, anyone using Darkbloom with commercial intent should only send non-sensitive data (no tokens, customer data, ...). I'd say only classification tasks, image generation, etc.

ramoz•1h ago
Macs do not have an accessible hardware TEE.

Macs have secure enclaves.

nl•36m ago
Good point!

But they argue that:

> PT_DENY_ATTACH (ptrace constant 31): Invoked at process startup before any sensitive data is loaded. Instructs the macOS kernel to permanently deny all ptrace requests against this process, including from root. This blocks lldb, dtrace, and Instruments.

> Hardened Runtime: The binary is code-signed with hardened runtime options and explicitly without the com.apple.security.get-task-allow entitlement. The kernel denies task_for_pid() and mach_vm_read() from any external process.

> System Integrity Protection (SIP): Enforces both of the above at the kernel level. With SIP enabled, root cannot circumvent Hardened Runtime protections, load unsigned kernel extensions, or modify protected system binaries. Section 5.1 proves that SIP, once verified, is immutable for the process lifetime.

gives them memory protection.

To me that is surprising.
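For concreteness, the first mitigation quoted above is a single syscall made at process startup. A minimal sketch via ctypes (illustrative only, not Darkbloom's actual code; guarded so it is a no-op off macOS):

```python
import ctypes
import sys

PT_DENY_ATTACH = 31  # ptrace request code cited in the quote above

if sys.platform == "darwin":
    libc = ctypes.CDLL(None, use_errno=True)
    # int ptrace(int request, pid_t pid, caddr_t addr, int data);
    # Once this succeeds, the XNU kernel permanently refuses debugger
    # attaches (lldb, dtrace, Instruments) to this process, even as root.
    rc = libc.ptrace(PT_DENY_ATTACH, 0, None, 0)
    msg = "PT_DENY_ATTACH requested" if rc == 0 else "ptrace call failed"
else:
    msg = "PT_DENY_ATTACH is macOS-only; skipped on this platform"

print(msg)
```

Note the call has to happen before any secrets enter the process: a debugger that is already attached is not evicted.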

ramoz•31m ago
I'm not arguing anything. This is how it works. There is no but.

Protection here is conditional, best-effort. There are no true guarantees, nor actual verifiability.

dinobones•14m ago
Couldn't someone just uhh... patch their macOS/kernel, mock these things out, then behold, you can now access all the data?

If it's not running fully end to end in some secure enclave, then it's always just a best effort thing. Good marketing though.

kennywinker•1h ago
I have a hard time believing their numbers. If you can pay off a mac mini in 2-4 months, and make $1-2k profit every month after that, why wouldn’t their business model just be buying mac minis?
foota•1h ago
Capital and availability?
kennywinker•1h ago
I guess if it only works at scale, capital is maybe the answer. Enough cash to buy 5 or 10 or even 100 minis seems doable, but if the idea only works well when you have 10,000 running, that makes some sense.
gleenn•1h ago
Power and racking are difficult and expensive?
kennywinker•1h ago
How difficult? Is running 1000 minis worth $1,000,000/month of effort? I feel like it is.
runako•35m ago
There are many people who do not have ready access to a million dollars to purchase said Mac minis, much less the operating capital to rack & operate them.

Very smart play to build a platform, get scale, and prove out the software. Then either add a small network fee (this could be on money movement on/off platform), add a higher tier of service for money, and/or just use the proof points to go get access to capital and become an operator in your own pool.

nxpnsv•13m ago
If those numbers are true, they could start with one Mac and double every few months. But I guess there are also many people who do not have ready access to whatever a Mac mini costs either...
ffsm8•33m ago
And at that scale (1k) it ain't even that hard; a single room could be enough to haphazardly drop them on shelves with a big fan to draw out the heat.
chaoz_•1h ago
Solid q. I think part of it is that it’s really easy to attract some “mass” (capital) of users, as there are definitely quite a few idle Macs in the world.

Non-VC play (not required until you can raise on your own terms!) and clear differentiation.

If you want to go full-business-evaluation, I would be more worried about someone else implementing the same thing with a higher commission (imo 95% and first to market is good enough).

thih9•11m ago
> These are estimates only. We do not guarantee any specific utilization or earnings. Actual earnings depend on network demand, model popularity, your provider reputation score, and how many other providers are serving the same model.

Others are reporting low demand, eg.: https://news.ycombinator.com/item?id=47789171

znnajdla•6m ago
The numbers are obviously high, because if this takes off then the price for inference will also drop. But I still think it’s a solid economic model that benefits low income countries the most. In Ukraine, for example, I know people who live on $200/month. A couple Mac Minis could feed a family in many places.

As a business owner, I can think of multiple reasons why a decentralized network is better for me as a business than relying on a hyperscaler inference provider:

1. No dependency on a BigTech provider who can cut me off or change prices at any time. I’m willing to pay a premium for that.

2. I get a residential IP proxy network built-in. AI scrapers pay big money for that.

3. No censorship.

4. Lower latency if inference nodes are located close to me.

chaoz_•1h ago
That solution actually makes great sense. So Apple won in some strange way again?

Guess there are limitations on the size of the models, but if top-tier models get democratized I don’t see a reason not to use this API. The only thing that comes to mind is data privacy concerns.

I think batch-evals for non-sensitive data has great PMF here.

rvz•1h ago
Yes. They never needed to participate in the AI race to zero.

Because they were already at the finish line with Apple Silicon.

> I don’t see a reason not to use this API. The only thing that comes to me is data privacy concerns.

The whole inference is end-to-end encrypted so none of the nodes can see the prompts or the messages.

chaoz_•1h ago
Fun question: can some (part of it) be a crypto token that I can buy? :))

That would finally be a crypto thing which is backed by value I believe in.

bentt•1h ago
I thought this was Apple’s plan all along. How is this not already their thing?
TuringNYC•1h ago
I'd love a way to do this locally -- pool all the PCs in our own office for in-office pools of compute. Any suggestions from anyone? We currently run ollama but manually manage the pools
damezumari•1h ago
https://github.com/exo-explore/exo
pants2•1h ago
Cool idea. Just some back-of-the-envelope math here (not trusting what's on their site):

My M5 Pro can generate 130 tok/s (4 streams) on Gemma 4 26B. Darkbloom's pricing is $0.20 per Mtok output.

That's about $2.24/day or $67/mo revenue if it's fully utilized 24/7.

Now assuming 50W sustained load, that's about 36 kWh/mo; at ~$0.25/kWh, approx. $9/mo in costs.

Could be good for lunch money every once in a while! Around $700/yr.
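That back-of-the-envelope, as a script (all constants are the commenter's figures above: 130 tok/s, $0.20/Mtok, 50 W, $0.25/kWh; none are measured):

```python
# Commenter's figures, not measurements.
TOK_PER_S = 130        # M5 Pro, 4 parallel streams, Gemma 4 26B
PRICE_PER_MTOK = 0.20  # USD, Darkbloom's listed output price
LOAD_WATTS = 50        # assumed sustained draw
USD_PER_KWH = 0.25     # assumed electricity price

rev_per_day = TOK_PER_S * 86_400 / 1e6 * PRICE_PER_MTOK  # ≈ the ~$2.24/day above
rev_per_month = rev_per_day * 30                         # ≈ $67/mo
kwh_per_month = LOAD_WATTS * 24 * 30 / 1000              # 36 kWh
power_cost = kwh_per_month * USD_PER_KWH                 # $9/mo

print(f"revenue ≈ ${rev_per_day:.2f}/day, ${rev_per_month:.0f}/mo; "
      f"power ≈ {kwh_per_month:.0f} kWh/mo (${power_cost:.0f}/mo)")
```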

MrDrMcCoy•1h ago
Don't forget to factor in cooling costs.
pants2•1h ago
Or saved heating costs in the winter!
todotask2•1h ago
OpenAI has only about 5% paying customers; how would this generate revenue?

I don’t think this is a sustainable business model. For example, Cubbit tried to build decentralised storage, but I backed out because better alternatives now exist, and hardware continues to improve and become cheaper over time.

Your electricity and hardware ownership will see lower returns over time, and this does not actually reduce CO2.

chaoz_•1h ago
Genuinely curious, is there any way to estimate the amortization of a Mac?

I’d imagine 1 year of heavy usage would somehow affect its quality.

pants2•44m ago
Yeah, only way to get there is assuming they're not giving prompt caching discounts while my laptop is getting prompt caching benefits, with very many large prompts. So yes I am skeptical of their numbers.
xendo•59m ago
Any idea what makes for such a diff between your numbers and theirs? Batching? Or could they do crazy prefix caching across all nodes to reduce the actual processing?
mavamaarten•42m ago
Well. Running your machine to do inference will utilize more than 50W sustained load, I'd say more than double that. Plus electricity is more expensive here (but granted, I do have solar panels). Plus don't forget to factor in that your hardware will age faster.

I'd say it's not worth it. But the idea is cool.

kennywinker•23m ago
Their estimate is based on significantly lower consumption when under load. E.g. 25W for an M4 Pro mac mini. I have no idea if that’s realistic - but the m4s are supposedly pretty efficient (https://www.jeffgeerling.com/blog/2024/m4-mac-minis-efficien...)
kennywinker•30m ago
Their example big earner models are FLUX.2 Klein 4B and FLUX.2 Klein 9B, which i imagine could generate a lot more tokens/s than a 26B model on your machine.

For Gemma 4 26B their math is:

single_tok/s = (307 GB/s / 4 GB) * 0.60 = 46.05 tok/s

batched_tok/s = 46.05 * 10 * 0.9 = 414.45 tok/s

tok/hr = 414.45 * 3600 = 1,492,020

revenue/hr = (1,492,020 / 1M) * $0.20 = $0.2984

I have no idea if that is a good estimate of how much an M5 Pro can generate - but that’s what it says on their site.

They do a bit of a sneaky thing with the power calculation: they subtract 12 W of idle power, because they assume your machine is idling 24/7 anyway, so the only cost is the extra 18 W they estimate you’ll use doing inference. Idk about you, but I do turn my machine off when I am not using it.
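The site's formula, reproduced as a script (every constant is Darkbloom's own estimate for Gemma 4 26B on an M5 Pro: the 0.60 bandwidth-efficiency factor, 10-way batching at 0.9 efficiency, and the 18 W net draw are their assumptions, not measurements; the electricity price is illustrative):

```python
BANDWIDTH = 307        # GB/s memory bandwidth (their M5 Pro figure)
WEIGHTS_PER_TOK = 4    # GB of weights read per token at this quantization
EFFICIENCY = 0.60      # their bandwidth-utilization assumption
BATCH, BATCH_EFF = 10, 0.9
PRICE_PER_MTOK = 0.20  # USD
NET_WATTS = 18         # load minus the 12 W idle draw they subtract
USD_PER_KWH = 0.25     # illustrative electricity price

single = BANDWIDTH / WEIGHTS_PER_TOK * EFFICIENCY   # 46.05 tok/s
batched = single * BATCH * BATCH_EFF                # 414.45 tok/s
revenue_hr = batched * 3600 / 1e6 * PRICE_PER_MTOK  # ≈ $0.2984/hr
power_hr = NET_WATTS / 1000 * USD_PER_KWH           # $0.0045/hr

print(f"revenue ≈ ${revenue_hr:.4f}/hr vs power ≈ ${power_hr:.4f}/hr")
```

Even on their own numbers the margin over power cost is large; the dispute upthread is whether the 0.60 factor, the batching, and full 24/7 utilization are realistic.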

znnajdla•29m ago
Maybe lunch money for you, but there are people in some parts of the world who live on $200/month. Like Ukraine.
sethherr•21m ago
But they probably don’t have M5 MacBook Pros idling
BingBingBap•1h ago
Generate images requested by randoms on the internet on your hardware.

What could possibly go wrong?

pants2•1h ago
You might not even know it as a user but the payment/distribution here is all built on crypto+stablecoins. This is a great use case for it.
rvz•45m ago
Good. Another great non-speculative use-case for crypto and stablecoins.
kennywinker•21m ago
Amazing! Let me see, doing the math r/n… carry the one, yup that makes the total number of non-speculative uses for crypto and stablecoin: 1

;P

ramoz•1h ago
Unfortunately, verifiable privacy is not physically possible on MacBooks of today. Don't let a nice presentation fool you.

Apple Silicon has a Secure Enclave, but not a public SGX/TDX/SEV-style enclave for arbitrary code, so these claims are about OS hardening, not verifiable confidential execution.

It would be nice if it were possible. There's a lot of cool innovations possible beyond privacy.

geon•38m ago
Every hardware key will be broken if there is enough incentive to do so. Their claims read like pure hubris.
dr_kiszonka•48m ago
"These are estimates only. We do not guarantee any specific utilization or earnings. Actual earnings depend on network demand, model popularity, your provider reputation score, and how many other providers are serving the same model.

When your Mac is idle (no inference requests), it consumes minimal power — you don't lose significant money waiting for requests. The electricity costs shown only apply during active inference.

Text models typically see the highest and most consistent demand. Image generation and transcription requests are bursty — high volume during peaks, quiet otherwise."

dcreater•46m ago
I can't buy credits; the page says it could not load.
stuxnet79•35m ago
So basically ... Pied Piper.
tgma•12m ago
I installed this so you don't have to. It did feel a bit quirky and not super polished: it fails to download the image model, and the audio/TTS model fails to load.

In 15 minutes of serving Gemma, I got precisely zero actual inference requests, and a bunch of health checks and two attestations.

At the moment they don't have enough sustained demand to justify the earning estimates.

koliber•7m ago
Apple should build this, and start giving away free Macs subsidized by idle usage.
jboggan•5m ago
Is this named after the 2011 split album with Grimes and d'Eon?