frontpage.

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•43s ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•53s ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•1m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•5m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•6m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•6m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•9m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•9m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•10m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•10m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•11m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•13m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•14m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
2•jerpint•14m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•16m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•19m ago•0 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•19m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•20m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•22m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•22m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•22m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•23m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•24m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•25m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•26m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•30m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•30m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•32m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•32m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•35m ago•1 comments

Sweatshop Data Is Over

https://www.mechanize.work/blog/sweatshop-data-is-over/
56•whoami_nr•6mo ago

Comments

jrimbault•6mo ago
> This meant that while Google was playing games, OpenAI was able to seize the opportunity of a lifetime. What you train on matters.

Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?

phreeza•6mo ago
Transformers/BERT yes, AlphaGo not so much.
vonneumannstan•6mo ago
> Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?

Right but wrong. AlphaGo and AlphaZero are built using very different techniques than GPT-style LLMs. Google created the Transformer, which leads much more directly to GPTs; RLHF is the other piece, which was basically created inside OpenAI by Paul Christiano.

jrimbault•6mo ago
Yep, I looked it up a few hours later: different branches in the evolution of ML. Still weird to dismiss AlphaZero as just "playing games".
msp26•6mo ago
OpenAI's work on Dota was also very important for funding.
jimbo808•6mo ago
Google Brain invented transformers. Granted, none of those people are still at Google. But it wasn't a Google shop that made LLMs broadly useful. OpenAI just took it and ran with it, rushing it to market... acquiring data by any means necessary(!)
9rx•6mo ago
> OpenAI just took it and ran with it

As did Google. They had their own language models before and at the same time, but chose different architectures for them, which made them less suitable for what the market actually wanted. Contrary to the above claim, OpenAI seemingly "won" because of GPT's design, not so much because of the data (although the data was also necessary).

ethan_smith•6mo ago
Agreed - AlphaGo/Zero's reinforcement learning breakthroughs were foundational for modern AI, establishing techniques like self-play and value networks that influenced transformer architecture development.
losteric•6mo ago
> Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. The main difference was training data.

This is kind of weird and reductive, comparing specialist to generalist models? How good is GPT-3's game of Go?

The post reads as kind of… obvious old news padding a recruiting post? We know OpenAI started hiring the kind of specialist workers this post mentions years ago at this point.

9rx•6mo ago
> This is kind of weird and reductive, comparing specialist to generalist models

It is even weirder when you remember that Google had already released Meena[1], which was trained on natural language...

[1] And BERT before it, but it is less like GPT.

rcxdude•6mo ago
Also, the main showcase of the 'zero' models was that they learnt with zero training data: the only input was interacting with the rules of the game (as opposed to learning to mimic human games), which seems to be the kind of approach the article is asking for.
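
To make the distinction concrete, here is a toy version of that recipe (a hypothetical sketch: tabular values and a trivial Nim game standing in for AlphaZero's network and tree search, which are far more involved):

    # Self-play in miniature: the agent's only input is the rules of a
    # tiny Nim game (take 1-3 stones; whoever takes the last stone wins).
    # No human games are used, mirroring the "zero" training recipe.
    import random
    from collections import defaultdict

    TAKE = (1, 2, 3)
    Q = defaultdict(float)  # Q[(stones_left, move)] -> value estimate

    def self_play_episode(eps=0.1, alpha=0.5):
        stones, history = 10, []
        while stones > 0:
            moves = [m for m in TAKE if m <= stones]
            if random.random() < eps:          # explore
                move = random.choice(moves)
            else:                              # exploit current estimates
                move = max(moves, key=lambda m: Q[(stones, m)])
            history.append((stones, move))
            stones -= move
        # Whoever moved last wins; propagate alternating rewards backwards,
        # since both "players" share the same value table.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += alpha * (reward - Q[(state, move)])
            reward = -reward

    for _ in range(5000):
        self_play_episode()

    # Best learned move from each position:
    print({s: max((m for m in TAKE if m <= s), key=lambda m: Q[(s, m)])
           for s in range(1, 11)})

After a few thousand games against itself it typically rediscovers the known winning strategy (leave your opponent a multiple of four stones), without a single human game in the pipeline.
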
rob74•6mo ago
It's kind of reassuring that the old adage "garbage in, garbage out" still applies in the age of LLMs...
worthless-trash•6mo ago
Hilariously, fewer people are going to write quality papers once LLMs prevent the microbloggers from making any financial gain from writing.

Anyways, good time for society.

atrettel•6mo ago
I am quite happy that this post argues in favor of subject-matter expertise. Until recently I worked at a national lab, where many people (both leadership and colleagues) told me that they need fewer, if any, subject-matter experts like myself because ML/AI can handle a lot of those tasks now. To that end, lab leadership was directing most of the hiring (both internal and external) towards ML/AI positions.

I obviously think that we still need subject-matter experts. This article argues correctly that the "data generation process" (or as I call it, experimentation and sampling) requires "deep expertise" to guide it properly past current "bottlenecks".

I have often put it to colleagues this way: we are reaching a point where you cannot just throw more data at a problem (especially arbitrary data). We have to think about what data we intentionally use to build models. With the right sampling of information, we may be able to make better models faster and more cheaply. But again, that requires knowing what data to include and how to come up with a representative sample with enough "resolution" to capture all of the nuances that the problem calls for. Again, that means subject-matter expertise does matter.
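
As a toy illustration of intentional sampling versus throwing everything in (the domain labels and rates below are invented, not from any real pipeline):

    # Hypothetical data curation step: an expert assigns per-domain
    # sampling rates instead of training on the raw corpus as-is.
    import random

    rates = {                       # expert judgment encoded as rates
        "simulation_output": 1.0,   # scarce, high-signal: keep all of it
        "web_scrape": 0.05,         # abundant, noisy: keep a sliver
    }

    def curated_sample(corpus, rates, default_rate=0.1):
        """Keep each document with probability set by its domain label."""
        return [doc for doc in corpus
                if random.random() < rates.get(doc["domain"], default_rate)]

    corpus = ([{"domain": "simulation_output"}] * 100
              + [{"domain": "web_scrape"}] * 10_000)
    sample = curated_sample(corpus, rates)  # ~100 + ~500 docs, not 10,100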

9rx•6mo ago
> I am quite happy that this post argues in favor of subject-matter expertise

The funny part is that it argues in favour of scientific expertise, but at the end it says they actually want to hire engineers instead.

I suppose scientists will tell you that has always been par for the course...

lawlessone•6mo ago
Without the actual SMEs, they'll be flying blind, not knowing where the models get things wrong.

Hopefully nothing endangers people...

m463•6mo ago
This all reminds me of this really interesting book "The Inevitable" by Kevin Kelly.

It had a fascinating look into the future, and in this case one insight in particular stands out.

It basically said that in the future, answers would be cheap and plentiful, and questions would be valuable.

With AI I think this will become more true every day.

Maybe AI can answer anything, but won't we still need people to ask the right questions?

https://en.wikipedia.org/wiki/The_Inevitable_(book)

atrettel•6mo ago
I agree that the ability to ask the right questions is a rare skill. I had a supervisor once with that ability. I tried to learn as much as I could about that from him.

That said, I think ultimately there are some questions that have no answers regardless of how we try to answer them. For chaotic systems, even small uncertainties in the inputs result in large differences in the outputs. In that sense, we can always ask questions, but our questions sometimes can never be precise enough to get meaningful answers. That statement is hard to wrap your head around without taking a course in chaos theory.
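
A standard demonstration fits in a few lines: iterate the logistic map at r = 4 (a textbook chaotic system) from two starting points that differ by 1e-10 and watch the trajectories decorrelate:

    # Two trajectories of the logistic map x -> r*x*(1-x), started
    # 1e-10 apart, diverge to order-one differences within a few dozen
    # steps: no measurement of the initial condition is precise enough
    # to predict step 60.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-10
    for n in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if n % 20 == 0:
            print(f"step {n}: |x - y| = {abs(x - y):.3e}")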

econ•6mo ago
Aaron Swartz
autoexec•6mo ago
> This all reminds me of this really interesting book "The Inevitable" by Kevin Kelly.

I'm fine with a bit of speculative fiction, but I prefer it to be less dystopian than "The Inevitable". Got any good solarpunk recommendations?

Sevii•6mo ago
It's still too early, but at some point we are going to start to see infra and frameworks designed to be easier for LLMs to use. Like a version of Terraform intended for AI, or an edition of the AWS API for LLMs.
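
Nobody knows what that would look like yet, but one guess is self-describing, machine-checkable action specs instead of human-oriented CLIs; a hypothetical sketch (the names and fields are invented, not a real Terraform or AWS interface):

    # Hypothetical LLM-facing infrastructure action: a self-describing,
    # machine-checkable spec instead of man pages and flag soup.
    provision_bucket = {
        "name": "provision_storage_bucket",
        "description": "Create an object-storage bucket.",
        "parameters": {
            "type": "object",
            "properties": {
                "bucket_name": {"type": "string"},
                "region": {"type": "string",
                           "enum": ["us-east-1", "eu-west-1"]},
                "versioning": {"type": "boolean", "default": False},
            },
            "required": ["bucket_name", "region"],
        },
    }
    # A model fills in arguments; a runtime validates them against the
    # schema before anything touches real infrastructure.
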
Animats•6mo ago
(Article is an employment ad.)

Is that actually true? Is the mini-industry of people looking at pictures and classifying them dead? Does Mechanical Turk still get much use?

getnormality•6mo ago
It's interesting to compare this to the new third-generation benchmarks from ARC-AGI, which are essentially a big collection of seemingly original puzzle video games. Both Mechanize (OP) and ARC want AI to start solving more real-world, long-horizon tasks. Mechanize wants to get AI working directly on real software development, while ARC suggests a focus on much simpler IQ-test-style tasks.
BrenBarn•6mo ago
> For example, to train an AI to fully assume the role of an infrastructure engineer, we need RL environments that comprehensively test what’s required to build and maintain robust systems.

Or we could just, you know, not do that at all.