frontpage.

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•2m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•5m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•6m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•6m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•7m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•7m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•8m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•9m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•12m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•15m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•15m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•21m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•22m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•22m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•25m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•28m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•28m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•28m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•28m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•30m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•32m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•34m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•37m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•37m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•37m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•40m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•43m ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•46m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
4•Tehnix•46m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•48m ago•1 comments

Ironwood, our latest TPU

https://blog.google/products/google-cloud/ironwood-google-tpu-things-to-know/
84•zdw•2mo ago

Comments

shrubble•2mo ago
Not much real data or news there.
bigyabai•2mo ago
> It’s designed for AI with AI

CUDA engineers, your job security has never felt more certain.

TrainedMonkey•2mo ago
Google having their own hardware for training and inference is newsworthy, but the link is pretty bad. Here is a much better source: https://blog.google/products/google-cloud/ironwood-tpu-age-o...
bgwalter•2mo ago
So we will be getting wrong answers faster now.
ragequittah•2mo ago
I'll never understand this attitude. Recently I set up a full network with 5 computers, OPNsense, XCP-ng, and a few other things like a Pi, a switch, an AP, etc.

I was migrating from pfSense to OPNsense, so I wasn't too familiar with some of the nitty-gritty. I was also moving from XCP-ng 8.2 to 8.3, which has some major CLI differences. It was a pretty big migration that took me a full weekend.

OpenAI got things wrong maybe 8 times in the whole project (mostly because it was using old documentation; OPNsense had just upgraded) and was able to quickly correct itself when I elaborated on the problem.

If I just had Google, this would've easily been a two-week project. I'd have had to drudge through extremely dry documentation that mostly doesn't apply to anything I'm doing, and read a bunch of toxic threads demeaning users who don't know everything. Instead I had ChatGPT 5 do all that for me and got to the exact same result with a tenth of the effort.

The "AI is useless" crowd truly makes me scratch my head.

oliwarner•2mo ago
> If I just had Google, this would've easily been a two-week project.

But you'd know something new by the end of it.

So many are so fast to skip the human experience element of life that they're turning themselves into mere prompt generators, happy to regurgitate others' knowledge without feeling or understanding.

For this project, you might not care to gain meaningful experience, and as a conscious choice that's fine. But there is an increasing number of developer and developer-adjacent people who reach for the LLM first, who don't understand "their" contributions to projects.

The haters are those of us who have to deal with this slop, and the sloppy people submitting it without thought, care or understanding.

tarsinge•2mo ago
I don't know; the kind of developers doing this are the same ones who would have copy-pasted from Stack Overflow in the past. If you are interested in knowledge and human experience, LLMs or not, you are curious about what you read and take ownership of what you produce. In the past these developers would have created the same slop, just at a much slower pace; LLMs are simply enabling them to do it faster.
oliwarner•2mo ago
It's the speed that stops you learning anything. Piecing together a dozen scripts from a dozen sources and making them work requires some work. You have to debug it. Some of this knowledge sticks.

It's not just a tech thing. Kids' learning is suffering because of their ability to just crank out essays they've never even read.

LLMs and AI are getting better. We doomers aren't decrying the technical advances they're making; we're appalled at the human cost of giving people a knowledge-free route through life.

fragmede•2mo ago
Not just knowledge-free, but thought-free. Instead of thinking deeply about something and coming to a conclusion yourself, just offload it to an AI to do it for you. Something challenges you in life? No worries, AI is here, not just to answer your questions but to think for you. What kind of world is that? What kind of society will that lead to?
esseph•2mo ago
Similar things were said about the calculator.
oliwarner•2mo ago
And rightly so. If you use a calculator instead of learning the fundamentals of how to do maths, you don't learn. This is reflected in calculators not being introduced until age 11+ in the UK, and even then there are exams where they are forbidden.

I'm not against the calculator and I'm not against LLMs. I'm against people choosing ignorance.

esseph•2mo ago
You're going to be fighting an uphill battle for as long as humanity exists.

Conservation of Energy rears its head in fascinating ways.

oliwarner•2mo ago
Again, I'm not fighting the use of tools, rather their use as a substitute for knowledge.

Practically every educational institution is with me here, so uphill it may be, but it's an important battle for the future of mankind, and recognised as such. We've long joked about a quick slide into Idiocracy (2006), but replacing learning with whatever an LLM can answer for you is how you rapidly deskill and get there.

In this case, "ragequittah" up top doesn't know how their router/firewall is actually configured. That might work out okay for them but they (and people like them) don't even know what they don't know.

ragequittah•2mo ago
I know exactly how my firewall and router are configured, though. I didn't do it blindly and would often refine what the AI gave me. I can see the argument if someone did do it blindly, but I'd wager very few do.

I didn't have to very much, because pfSense (which I've been using forever) and OPNsense are basically the same, but if I wasn't sure why I was setting something the way I was, I would ask for clarification with sources. This just amounts to an extremely powerful Google search tailored exactly to my situation.

I think everyone pictures AI users as drooling idiots who copy/paste without thinking. While I'm sure that exists, you can use AI to learn and it works quite well. To me it feels like how a librarian might have felt when people started using the internet to learn: as if you aren't really learning unless you use the Dewey Decimal System.

ragequittah•2mo ago
I set up OPNsense and XCP-ng myself. The idea that I now don't understand those front ends is absurd. I'd already learned the underlying networking and Linux stuff years ago; I just needed to know where the right knobs are.

And you can easily learn deeply with AI; just ask it deeper questions. I do this all the time, and did so several times during this network setup when I encountered something I didn't understand. If you aren't curious you won't learn; if you are, you'll learn faster than with any other method out there.

fishmicrowaver•2mo ago
I think what I'll miss from the SO approach to research is encountering that wall of text someone bothered to post, giving a deep explanation of the problem space and potential solutions. Sometimes I just needed the fast answer to some configuration problem, but it was always worth the extra 20-30 minutes to read through and really understand those high-effort contributions.
ragequittah•2mo ago
Nobody is writing a wall of text about OPNsense rules or Unbound checkboxes. I already knew the fundamentals; I just wanted to get it done. I'm not a novice: I've been using firewalls forever, and XCP-ng for half a decade. I just needed clarification on the differences.
tarsinge•2mo ago
> The AI is useless crowd truly makes me scratch my head.

I think it's because, past autocomplete, for AI to be useful professionally you need to already have a lot of background and experience in whatever you are using it for, in addition to the engineering and project management needed to keep the scope on track. While demos with agents are impressive, in practice the autonomy isn't there; they need strong guidance, so it only works as a very smart assistant. What you are describing is very representative of this.

If you don't have that level of seniority, you'll struggle to get value from AI, because it's hard to guide it, keep it on track, and spot and navigate its errors and wrong thinking paths. You can't use it as an assistant; you can only take what it says at face value, and given that it will randomly be wrong, that makes it useless.

NaomiLehman•2mo ago
I think most people commenting on HN have the expertise, no?

I use it like a book of openings in Chess. Advanced players also learn openings.

ragequittah•2mo ago
This is why I used it for something I already knew about and just needed clarification on. I could tell when it was wrong, and it wasn't wrong often enough to worry about. I was wrong far more often than it was, and Google searches would be wrong way more often than me.
bgwalter•2mo ago
Feeling glad that one is insulated from the knowledgeable users who trained the "AI" that stole their IP is just strange.

"AI" is also larger than plagiarizing Stack Overflow. Google's AI answers, which most people use, are pretty poor on any topic.

Coming back to sysadmin/programming: there are many migration guides from pfSense to OPNsense, for example (note there are no mean people in that thread):

https://forum.opnsense.org/index.php?topic=32793.0

The estimates are days, which is not that different from a weekend.

OpenAI now basically has your firewall configuration and who knows what else, so I would not recommend using "AI" for such sensitive matters.

ragequittah•2mo ago
OpenAI doesn't care about my IoT rules. They aren't going to hack my small home network. It's like saying the people who wrote the guide for setting up an IoT and guest network know your firewall rules if you follow the guide. Sure. I'd wager they probably know most of the rules for my admin LAN too, because they're self-evident. And it turns out most people configure Unbound and dnsmasq the same way too.

Moreover, the fact that the AI now knows my setup makes it effortless to troubleshoot.

gorbot•2mo ago
I'm an idiot and I know nothing

But I wonder if there could be room for an ARM-like spec, but for AI chips, that Google could try to own and license. ARM is to RISC CPUs as this Google thing would be to AI ASICs.

Probably a dumb idea; maybe it's better to just sell the chips, or access to them?

eru•2mo ago
I'm not sure the chip spec (or instruction set) is the right level of abstraction here?

Something like DirectX (or OpenGL) might be the better level to target? In practice, CUDA is that level of abstraction, but it only really works for Nvidia cards.

latchkey•2mo ago
> CUDA is that level of abstraction, but it only really works for Nvidia cards.

There are people actively working on that.

https://scale-lang.com/

karmakaze•2mo ago
It's not that it only works on Nvidia cards; it's that it's only allowed to work on Nvidia cards. A non-clean-room implementation of CUDA for other hardware has been done, but it violates the EULA (of the thing that was reverse engineered), copyright on the driver binary interface, and often patents. Nvidia aggressively sends cease-and-desist letters and threatens lawsuits (it successfully killed ZLUDA and threatened others). It's an artificial (in the technical sense) moat.
latchkey•2mo ago
Spectral just did a thread on that.

https://x.com/SpectralCom/status/1993289178130661838

eru•2mo ago
I don't think you can make the EULA bite here?

To circumvent: you have someone (who might be bound by the EULA, and is otherwise not affiliated with you) dump the data on the internet, and someone else (from your company) can find it there, without being bound by the EULA. Nvidia could only sue the first guy for violating the EULA.

However, you are right that copyright and patents still bite.

SpaghettiCthulu•2mo ago
> successfully killed ZLUDA

Did they? Sounds like AMD did that[^1] and that the project is continuing based on the pre-AMD codebase[^2].

[^1]: https://www.phoronix.com/news/AMD-ZLUDA-CUDA-Taken-Down

[^2]: https://www.phoronix.com/news/ZLUDA-Third-Life

karmakaze•2mo ago
Unless ZLUDA can show that it is a clean-room re-implementation from a spec, without contact with the CUDA libraries, it would be a bad place for AMD to put themselves. That could be reason enough to retract it voluntarily before any bad press. Such a thing is possible, but likely much harder than Compaq re-implementing the IBM PC BIOS.
pjmlp•2mo ago
Not really, because as usual people misunderstand what CUDA is.

CUDA is hardware designed according to the C++ memory model, with first-tier support for C, C++, Fortran, and Python GPGPU DSLs, and several other languages also have compiler backends targeting PTX.

On top of that come IDE integration, a graphical debugger and profiler for GPU workloads, and an ecosystem of libraries and frameworks.

Saying "just use DirectX, Vulkan, or OpenGL instead" misses the forest for the trees: that whole ecosystem is CUDA, and it is why researchers would rather use CUDA than deal with yet another shading language or C99 dialect with nothing else around it.
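
For readers who want to see what that "first-tier Python support" looks like in practice, here is a minimal sketch using Numba's CUDA target, one such Python GPGPU DSL (chosen purely for illustration; it is not named in the comment above). It requires an Nvidia GPU plus the numba and numpy packages:

    # Minimal sketch of a Python GPGPU DSL in the CUDA ecosystem (illustrative only).
    import numpy as np
    from numba import cuda

    @cuda.jit
    def axpy(a, x, y, out):
        # One GPU thread handles one element, CUDA-style.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    # Host arrays are copied to the device automatically on launch.
    axpy[blocks, threads_per_block](np.float32(2.0), x, y, out)

The kernel and launch shape mirror CUDA C++, which is the point: the programming model and tooling, not a particular shading language, are what researchers are buying into.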

amypetrik8•2mo ago
They tried selling them years ago with Coral; not much happened.

Now they don't want to sell them - why power local inference when they can have you subscribe forever and get your juicy data too?

jeffbee•2mo ago
These are only available in Iowa on GCP, which to me raises this question: do they have them all over the world for their own purposes, or does this limited geography also mean that users of Google AI features get varied experiences depending on their location?
wmf•2mo ago
Running on v6 vs v7 should just be different performance.
jeffbee•2mo ago
If a search feature runs on a deadline, then different performance could be observable as more work done within 100ms, or whatever the unit of time is.
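
A minimal sketch of what that means in serving code (hypothetical, not anything Google has described): a deadline-bound stage does as much work as its budget allows, so a faster chip shows up as more candidates processed rather than a lower response time.

    import time

    def rerank_within_deadline(candidates, score_fn, budget_s=0.100):
        """Score candidates until the ~100 ms budget is spent; return what we got."""
        deadline = time.monotonic() + budget_s
        scored = []
        for c in candidates:
            if time.monotonic() >= deadline:
                break  # deadline hit: ship partial results instead of waiting
            scored.append((score_fn(c), c))
        return sorted(scored, key=lambda t: t[0], reverse=True)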
londons_explore•2mo ago
Things needing the most compute (LLMs, image and video generation) tend not to be latency-sensitive.

100ms of latency is nothing when added to 10 seconds of generation time.

aurareturn•2mo ago
I think we need an analysis of tokens/$1 and tokens/second for Nvidia Blackwell vs Ironwood.
ipnon•2mo ago
It depends on how they’re utilized , especially at these scales, you have to squeeze every bit out.
htrp•2mo ago
So what's the difference between their announcement in April and now?