frontpage.

GPT-5

http://openai.com/gpt-5
458•rd•54m ago•445 comments

Building Bluesky comments for my blog

https://natalie.sh/posts/bluesky-comments/
122•g0xA52A2A•1h ago•48 comments

GPT-5 for Developers

https://openai.com/index/introducing-gpt-5-for-developers
32•6thbit•48m ago•3 comments

Infinite Pixels

https://meyerweb.com/eric/thoughts/2025/08/07/infinite-pixels/
176•OuterVale•4h ago•39 comments

Lithium Reverses Alzheimer's in Mice

https://hms.harvard.edu/news/could-lithium-explain-treat-alzheimers-disease
69•highfrequency•2h ago•43 comments

How to Sell if Your User is not the Buyer

https://writings.founderlabs.io/p/how-to-sell-if-your-user-is-not-the
67•mooreds•2h ago•43 comments

Ditching GitHub (2024)

https://tomscii.sig7.se/2024/01/Ditching-GitHub
42•lr0•1h ago•38 comments

Foundry (YC F24) Is Hiring Staff Level Product Engineers

https://www.ycombinator.com/companies/foundry/jobs/jwdYx6v-founding-product-engineer
1•lakabimanil•53m ago

SUSE Donates USD 11,500 to the Perl and Raku Foundation

https://www.perl.com/article/suse-donates-to-tprf/
64•oalders•3h ago•18 comments

Laptop Support and Usability (LSU): July 2025 Report from the FreeBSD Foundation

https://github.com/FreeBSDFoundation/proj-laptop/blob/main/monthly-updates/2025-07.md
66•grahamjperrin•3h ago•34 comments

Jepsen: Capela dda5892

https://jepsen.io/analyses/capela-dda5892
29•aphyr•3h ago•0 comments

Monte Carlo Crash Course: Quasi-Monte Carlo

https://thenumb.at/QMC/
65•zote•3d ago•9 comments

Show HN: Browser AI agent platform designed for reliability

https://github.com/nottelabs/notte
10•ogandreakiro•42m ago•0 comments

New AI Coding Teammate: Gemini CLI GitHub Actions

https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
180•michael-sumner•8h ago•76 comments

Benchmark Framework Desktop Mainboard and 4-node cluster

https://github.com/geerlingguy/ollama-benchmark/issues/21
3•geerlingguy•4m ago•0 comments

The Sunlight Budget of Earth

https://www.asimov.press/p/sunlight-budget
11•mailyk•1h ago•0 comments

GPT-5 System Card [pdf]

https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb52f/gpt5-system-card-aug7.pdf
54•6thbit•51m ago•11 comments

Emailing a one-time code is worse than passwords

https://blog.danielh.cc/blog/passwords
726•max__dev•15h ago•598 comments

Windows XP Professional

https://win32.run/
167•pentagrama•3h ago•107 comments

Arm Desktop: x86 Emulation

https://marcin.juszkiewicz.com.pl/2025/07/22/arm-desktop-emulation/
55•PaulHoule•5h ago•26 comments

PyPI: Preventing ZIP parser confusion attacks on Python package installers

https://blog.pypi.org/posts/2025-08-07-wheel-archive-confusion-attacks/
16•miketheman•1h ago•1 comment

Italy's Undercover Pizza Detectives

https://www.bbc.com/travel/article/20250801-italys-undercover-pizza-detectives
9•pseudolus•3d ago•0 comments

Sweatshop Data Is Over

https://www.mechanize.work/blog/sweatshop-data-is-over/
33•whoami_nr•3h ago•14 comments

More shell tricks: first class lists and jq

https://alurm.github.io/blog/2025-08-07-first-class-lists-in-shells.html
18•alurm•3h ago•6 comments

Koalas vs. Crows: An Evolutionary Theory of Software

https://ajmoon.com/posts/koalas-vs-crows-an-evolutionary-theory-of-software
9•alex-moon•3d ago•0 comments

The Whispering Earring (Scott Alexander)

https://croissanthology.com/earring
87•ZeljkoS•7h ago•50 comments

Global Trade Dynamics

https://alhadaqa.github.io/globaltradedynamics/
31•gmays•3h ago•5 comments

Claude Code IDE integration for Emacs

https://github.com/manzaltu/claude-code-ide.el
722•kgwgk•1d ago•239 comments

Hopfield Networks Is All You Need (2020)

https://arxiv.org/abs/2008.02217
21•liamdgray•2d ago•1 comment

Let's stop pretending that managers and executives care about productivity

https://www.baldurbjarnason.com/2025/disingenuous-discourse/
82•speckx•3h ago•42 comments

Sweatshop Data Is Over

https://www.mechanize.work/blog/sweatshop-data-is-over/
33•whoami_nr•3h ago

Comments

jrimbault•2h ago
> This meant that while Google was playing games, OpenAI was able to seize the opportunity of a lifetime. What you train on matters.

Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?

phreeza•2h ago
Transformers/BERT yes, AlphaGo not so much.
vonneumannstan•2h ago
> Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?

Right but wrong. AlphaGo and AlphaZero are built with very different techniques than GPT-type LLMs. Google created Transformers, which lead much more directly to GPTs; RLHF is the other piece, which was basically created inside OpenAI by Paul Christiano.

msp26•1h ago
OpenAI's work on Dota 2 was also very important for funding.
jimbo808•1h ago
Google Brain invented transformers. Granted, none of those people are still at Google. But it wasn't a Google shop that made LLMs broadly useful. OpenAI just took it and ran with it, rushing it to market... acquiring data by any means necessary(!)
9rx•51m ago
> OpenAI just took it and ran with it

As did Google. They had their own language models before and at the same time, but chose different architectures for them, which made them less suited to what the market actually wanted. Contrary to the above claim, OpenAI seemingly "won" because of GPT's design, not so much because of the data (although the data was also necessary).

ethan_smith•11m ago
Agreed - AlphaGo/Zero's reinforcement learning breakthroughs were foundational for modern AI, establishing techniques like self-play and value networks that influenced transformer architecture development.
losteric•2h ago
> Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. The main difference was training data.

This is kind of weird and reductive, comparing specialist to generalist models? How good is GPT-3's game of Go?

The post reads as kind of… obvious, old news padding a recruiting post? We know OpenAI started hiring the kind of specialist workers this post mentions years ago at this point.

9rx•2h ago
> This is kind of weird and reductive, comparing specialist to generalist models

It is even weirder when you remember that Google had already released Meena[1], which was trained on natural language...

[1] And BERT before it, but it is less like GPT.

rcxdude•1h ago
Also, the main showcase of the 'zero' models was that they learnt with zero training data: the only input was interacting with the rules of the game (as opposed to learning to mimic human games), which seems to be the kind of approach the article is asking for.
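For readers who haven't seen the "zero" setup, here is a minimal sketch of the idea: the only inputs are the rules of a game, and all training positions come from the agent playing against itself. This is illustrative only (tabular Q-learning on tic-tac-toe), not AlphaZero's actual method, which combines MCTS with a neural network.

```python
# Illustrative only: learning a game from nothing but its rules, via self-play.
# Tabular Q-learning on tic-tac-toe, not AlphaZero's MCTS + network, but the
# key property is the same: no human games are ever consulted.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)   # (board, move) -> value estimate for the player to move
EPSILON, ALPHA = 0.1, 0.5

def choose(board, moves):
    if random.random() < EPSILON:                     # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])    # exploit

for episode in range(50_000):
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        move = choose(board, moves)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or "." not in board:
            reward = 1.0 if winner(board) else 0.0    # +1 for the winning move, 0 for a draw
            for b, m in reversed(history):
                Q[(b, m)] += ALPHA * (reward - Q[(b, m)])
                reward = -reward                      # opponent's perspective one ply earlier
            break
        player = "O" if player == "X" else "X"
```

The hyperparameters and reward scheme here are arbitrary choices for the sketch; the point is only that the training signal is generated entirely by play against the rules, which is what the comment contrasts with mimicking human games.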
rob74•2h ago
It's kind of reassuring that the old adage "garbage in, garbage out" still applies in the age of LLMs...
atrettel•1h ago
I am quite happy that this post argues in favor of subject-matter expertise. Until recently I worked at a national lab. I had many people (both leadership and colleagues) tell me that they need fewer, if any, subject-matter experts like me because ML/AI can handle a lot of those tasks now. To that effect, lab leadership was directing most of the hiring (both internal and external) towards ML/AI positions.

I obviously think that we still need subject-matter experts. This article argues correctly that the "data generation process" (or as I call it, experimentation and sampling) requires "deep expertise" to guide it properly past current "bottlenecks".

I have often put it to colleagues this way. We are reaching a point where you cannot just throw more data at a problem (especially arbitrary data). We have to think about what data we intentionally use to make models. With the right sampling of information, we may be able to make better models more cheaply and more quickly. But again, that requires knowledge about what data to include and how to come up with a representative sample with enough "resolution" to resolve all of the nuances that the problem calls for. Again, that means that subject-matter expertise does matter.
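To make the "representative sample with enough resolution" point concrete, here is a small illustrative sketch (mine, not the commenter's or the article's): rather than training on whatever data is easiest to collect, an expert can stratify over the regimes the model must resolve so that rare but important cases are not drowned out. The "regime" labels below are hypothetical.

```python
# Illustration only: sample training data by expert-chosen strata instead of
# taking whatever arrives first. "regime" stands in for a label a subject-matter
# expert would assign (flow regime, failure mode, edge case, etc.).
import random
from collections import defaultdict

def stratified_sample(records, key, per_stratum, seed=0):
    """Return up to `per_stratum` records from each stratum defined by `key`."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    sample = []
    for name, items in strata.items():
        rng.shuffle(items)
        sample.extend(items[:per_stratum])
    return sample

# Arbitrary data: 99% routine cases, 1% rare-but-critical ones.
data = [{"regime": "routine", "x": i} for i in range(990)] + \
       [{"regime": "rare", "x": i} for i in range(10)]

balanced = stratified_sample(data, key=lambda r: r["regime"], per_stratum=10)
print(len(balanced), sum(r["regime"] == "rare" for r in balanced))  # 20 10
```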

9rx•57m ago
> I am quite happy that this post argues in favor of subject-matter expertise

The funny part is that it argues in favour of scientific expertise, but at the end it says they actually want to hire engineers instead.

I suppose scientists will tell you that has always been par for the course...

Sevii•29m ago
It's still too early, but at some point we are going to start to see infra and frameworks designed to be easier for LLMs to use. Like a version of Terraform intended for AI, or an edition of the AWS API for LLMs.
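The comment is speculative, and so is this sketch: one plausible shape for an "LLM edition" of an infrastructure API is a small declarative spec with strict validation and machine-readable errors an agent can repair against, instead of prose error messages. Every name below (BucketSpec, validate, the allowed regions) is hypothetical, not a real Terraform or AWS interface.

```python
# Purely speculative sketch of an LLM-facing infrastructure API: a tiny
# declarative spec plus structured validation results. All names are
# hypothetical, not an actual AWS or Terraform interface.
from dataclasses import dataclass

ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}

@dataclass
class BucketSpec:
    name: str
    region: str
    versioning: bool = False

def validate(spec: BucketSpec) -> list[dict]:
    """Return a list of structured problems the agent can fix one by one."""
    problems = []
    if not (3 <= len(spec.name) <= 63) or not spec.name.islower():
        problems.append({"field": "name", "error": "must be 3-63 lowercase characters",
                         "got": spec.name})
    if spec.region not in ALLOWED_REGIONS:
        problems.append({"field": "region", "error": "unsupported region",
                         "got": spec.region, "allowed": sorted(ALLOWED_REGIONS)})
    return problems

# An agent's proposed plan can be checked, and repaired, before anything is applied.
plan = BucketSpec(name="My_Bucket", region="mars-1")
print(validate(plan))
```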