frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•1m ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•5m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•21m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•27m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•27m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•30m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•33m ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•43m ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•43m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•48m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•52m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•53m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•56m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•56m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•59m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•2h ago•1 comment

Show HN: Replacing my OS process scheduler with an LLM

https://github.com/mprajyothreddy/brainkernel
77•ImPrajyoth•1mo ago

Comments

ImPrajyoth•1mo ago
OP here. this is a cursed project lol, but i wanted to see: What happens if you replace the OS scheduler with an LLM?

With Groq speed (Llama 3 @ 800t/s), inference is finally fast enough to be in the system loop.

i built this TUI to monitor my process tree. instead of just showing CPU %, it checks the context (parent process, disk I/O) to decide if a process is compiling code or bloatware. It roasts, throttles, or kills based on that.

it's my experiment in what "Intelligent Kernels" might look like. i used Delta Caching to keep overhead low.
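The context check the OP describes can be sketched without the model in the loop. Below is a hypothetical, rule-based stand-in for the LLM verdict (`ProcContext` and `classify` are my names, not the repo's): it consumes the same signals the post mentions — parent process, CPU, disk I/O — and returns a triage label.

```python
from dataclasses import dataclass


@dataclass
class ProcContext:
    name: str             # process image name
    parent: str           # parent process name
    cpu_percent: float    # recent CPU usage
    disk_write_mb: float  # recent disk writes


def classify(ctx: ProcContext) -> str:
    """Deterministic stand-in for the LLM verdict:
    'working', 'suspicious' (roast/throttle candidate), or 'idle'."""
    # A child of a build tool, or anything writing heavily to disk,
    # is probably compiling -- leave it alone.
    if ctx.parent in {"make", "cargo", "ninja", "msbuild.exe"} or ctx.disk_write_mb > 50:
        return "working"
    # Busy CPU with no output is the classic bloatware signature.
    if ctx.cpu_percent > 20 and ctx.disk_write_mb < 1:
        return "suspicious"
    return "idle"
```

In the actual project the verdict comes from Llama 3 on Groq; swapping this function for a prompt over the same fields is essentially the whole trick.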

p_ing•1mo ago
You can't replace the NTOS scheduler. This is more of an automated (?) process manager.
ImPrajyoth•1mo ago
you are technically right (the best kind of right). i am running in userspace, so i can't replace the actual thread-scheduling logic in Ring 0 without writing a driver and BSODing my machine.

think of this more as a High-Level Governor. The NTOS scheduler decides which thread runs next, but this LLM decides if that process deserves to exist at all.

basically: NTOS tries to be fair to every process; BrainKernel overrides that fairness with judgment. if i suspend a process, i have effectively vetoed the scheduler.
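The "veto" is easy to demonstrate in userspace. A minimal POSIX/Linux sketch (the OP is on Windows, where the analogue goes through the NT/debug APIs; `veto`, `pardon`, and `state` are hypothetical names): a stopped process is simply never runnable, so the kernel scheduler's fairness no longer applies to it.

```python
import os
import signal


def veto(pid: int) -> None:
    """Suspend a process. The kernel still owns thread scheduling,
    but a stopped process never becomes runnable."""
    os.kill(pid, signal.SIGSTOP)


def pardon(pid: int) -> None:
    """Let the scheduler see the process again."""
    os.kill(pid, signal.SIGCONT)


def state(pid: int) -> str:
    """Process state letter from /proc (Linux-only): 'T' means stopped."""
    with open(f"/proc/{pid}/stat") as f:
        return f.read().rsplit(")", 1)[1].split()[0]
```

Nothing in Ring 0 changes: after `veto(pid)` the process just leaves the run queue, which is the whole "override".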

p_ing•1mo ago
> NTOS tries to be fair to every process

This is a huge oversimplification of the NTOS scheduler. It's not that dumb!

> if i suspend a process, i have effectively vetoed the scheduler.

I mean, I suppose? It's the NTOS scheduler doing the suspension. It's like changing the priority level -- sure, you can do it, but it's generally to your detriment outside of corner cases.

nijave•1mo ago
I do wonder how painfully slow the computer would be if you actually did replace the in-kernel scheduler with an LLM...
devmor•1mo ago
This is a pretty funny project, you've outsourced the neurotic developers that keep their task manager open and kill off processes they don't like.

I wouldn't call it replacing the scheduler though - more that you've made a scheduler manager.

DougN7•1mo ago
I resemble that comment!

But seriously, it does really bug me on principle that DropBox should use over half a GB simply because it uses Chromium, even when nothing is visible.

SanjayMehta•1mo ago
Maestral is a cross platform implementation of the Dropbox client API which I use on low end Linux machines.
DougN7•1mo ago
Thanks for the tip - I’ll take a look
Xmd5a•1mo ago
For me it's LSP servers taking 2 gigs of RAM. With Antigravity, Google managed to go beyond even that; it is totally unusable for me (though other VS Code clones work fine, apart from the 2 GB LSP servers).
ImPrajyoth•1mo ago
haha exactly. i realized i spent too much time staring at htop wondering what is this process?, so i decided to automate my own anxiety.

Scheduler Manager is definitely the more accurate term. I'm just the middleman between the chaos and the kernel.

QuantumNomad_•1mo ago
Now we need processes to gain awareness of the process manager and integrate an LLM into each process to argue with the process manager why it should let them live.
ImPrajyoth•1mo ago
Imagine Chrome.exe pleading its case: 'Please, I need 4GB of RAM, the user might revisit that tab from 3 hours ago!'

while BrainKernel replies: 'Objection overruled. You have 5 seconds to wrap up before SIGKILL.'

I might actually have to build a 'Process Defense Attorney' agent now. The logs would be hilarious.

moffkalast•1mo ago
Assistant to the scheduler manager
runelk•1mo ago
Assistant scheduler manager
2001zhaozhao•1mo ago
It really is cursed to be spending hundreds of watts of power in a datacenter somewhere to make a laptop run slightly faster.
ImPrajyoth•1mo ago
oh absolutely. burning a coal plant to decide if i should close discord is peak 2025 energy. strictly speaking, using the local model (Ollama) is 'free' in terms of watts since my laptop is on anyway, but yeah, if the inefficiency is the art, I'm the artist.
bdhcuidbebe•1mo ago
> using the local model (Ollama) is 'free' in terms of watts since my laptop is on anyway

Now that’s a cursed take on power efficiency

ImPrajyoth•1mo ago
efficiency is just a mindset. if i save 3 seconds of my own attention by burning 300 watts of gpu, the math works out in my favor!
abeyer•1mo ago
"works out in my favor" is a pretty poor metric.

If I burn a billion tons of someone else's coal to make myself a paperclip (and don't have to breathe the outputs) it works out in my favor too.

fragmede•1mo ago
Running ollama to compute inference uses energy that wouldn't have been used if you weren't running ollama. There's no free lunch here.
hebejebelus•1mo ago
An interesting thought experiment - a fully local, off-grid, off-network LLM device. Solar or wind or what have you. I suppose the Mac Studio route is a good option here; I think Apple makes the most energy-efficient high-memory options. Back-of-the-napkin math indicates it’s possible, just at a high up-front cost. Interesting to imagine a somewhat catastrophe-resilient LLM device…
ImPrajyoth•1mo ago
That is the endgame.

I think we are moving toward a bilayered compute model:

The Cloud: for massive reasoning.

The Local Edge: a small, resilient model that lives on-device and handles the OS loop, privacy, and immediate context.

BrainKernel is my attempt to prototype that Local Edge layer. It's messy right now, but I think the OS of 2030 will definitely have a local LLM baked into the kernel.

hebejebelus•1mo ago
Well, on my Macbook, some of that already exists. In the Shortcuts app you can use the "Use Model" action which offers to run an LLM on apple's cloud, on-device, or other external service (eg ChatGPT). I use this myself already for several actions, like reading emails from my tennis club to put events in my calendar automatically.

Whether or not we'll see it lower down in the system I'm not sure. Honestly I'm not certain of the utility of an autonomous LLM loop in many or most parts of an OS, where (in general) systems have more value the more deterministic they are, but in the user space, who can say.

In any case, I certainly went down a fun rabbit hole thinking about a mesh network of LLM nodes and thin clients in a post-collapse world. In that scenario, I wonder if the utility of LLMs is really worth the complexity versus a kindle-like device with a copy of wikipedia...

evilduck•1mo ago
Macs would be the most power efficient with faster memory but an AI Max 395+ based system would probably be the most cost efficient right now. A Framework Desktop with 128GB of shared RAM only pulls 400W (and could be underclocked) and is cheaper by enough that you could buy it plus 400W of solar panels and a decently large battery for less than a Mac Studio with 128GB of RAM. Unfortunately the power efficiency win is more expensive than just buying more power generation and storage ability.
hebejebelus•1mo ago
I suppose in terms of catastrophe resilience, repairability would be important, although how do you repair a broken GPU in any case? Cold backup machines are probably the more feasible way to extend lifetimes.

And yeah - I was thinking that actually power efficiency isn’t really a massive deal if you have some kind of thin client setup. The LLM nodes can be at millraces or some other power dense locations, and then the clients are basically 5W displays with an RF transceiver and a keyboard…

An entertaining thought experiment :)

nubinetwork•1mo ago
An entire datacenter, on the other hand, might be appealing to spot things you wouldn't otherwise see in a sea of logs and graphs.
pasisu•1mo ago
Ok
gillesjacobs•1mo ago
You're underselling this as a process manager; with some prompt changes it could also be a productivity tool: detect procrastination apps (games, non-professional chat, video streaming) and kill them.
ImPrajyoth•1mo ago
That is actually a brilliant pivot.

A 'Focus Mode' that doesn't just block URLs but literally murders the process if I open Steam or Civilization VI.

I could probably add a --mode strict flag that swaps the system prompt to be a ruthless productivity coach. 'Oh, you opened Discord? Roast and Kill.'

Thanks for the idea mate!
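The flag floated above would be little more than a prompt table. A sketch, assuming a hypothetical --mode option (none of these names or prompts are from the repo):

```python
import argparse

# Hypothetical system prompts -- "strict" implements the
# productivity-coach idea from the thread.
PROMPTS = {
    "default": ("You are a process triage assistant. Decide whether each "
                "process is doing useful work; throttle or kill only bloat."),
    "strict": ("You are a ruthless productivity coach. Any game, chat app, "
               "or video streamer gets roasted, then killed."),
}


def parse_mode(argv: list[str]) -> str:
    """Map a hypothetical --mode CLI flag onto a prompt-table key."""
    p = argparse.ArgumentParser(prog="brainkernel")
    p.add_argument("--mode", choices=sorted(PROMPTS), default="default")
    return p.parse_args(argv).mode


def system_prompt(mode: str) -> str:
    return PROMPTS[mode]
```

Everything else in the loop stays identical; only the system prompt the model sees changes.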

tryauuum•1mo ago
I was looking for a project which would run an LLM-powered character (like Clippy), who would periodically screenshot my screen and comment on my life choices.

Sadly, the only project I've found was for Windows.

nialv7•1mo ago
Task manager, not scheduler.
lorenzohess•1mo ago
Please add Roulette mode where a random process is killed every so often
solarkraft•1mo ago
You did not replace the OS process scheduler with an LLM.
1970-01-01•1mo ago
This is the one place that I would want Copilot running. It's giving me ideas :)
Someone•1mo ago
If it doesn’t find a process that needs roasting or killing for a while, will it see itself as bloatware and commit suicide?
brcmthrowaway•1mo ago
Why not branch prediction with LLM?
effnorwood•1mo ago
Please name it juggler