frontpage.

Learning Music with Strudel

https://terryds.notion.site/Learning-Music-with-Strudel-2ac98431b24180deb890cc7de667ea92
144•terryds•6d ago•30 comments

Mistral 3 family of models released

https://mistral.ai/news/mistral-3
266•pember•1h ago•78 comments

Nixtml: Static website and blog generator written in Nix

https://github.com/arnarg/nixtml
38•todsacerdoti•1h ago•6 comments

Addressing the adding situation

https://xania.org/202512/02-adding-integers
197•messe•5h ago•59 comments

Advent of Compiler Optimisations 2025

https://xania.org/202511/advent-of-compiler-optimisation
244•vismit2000•7h ago•33 comments

Show HN: Marmot – Single-binary data catalog (no Kafka, no Elasticsearch)

https://github.com/marmotdata/marmot
42•charlie-haley•1h ago•7 comments

YesNotice

https://infinitedigits.co/docs/software/yesnotice/
44•surprisetalk•1w ago•24 comments

A series of vignettes from my childhood and early career

https://www.jasonscheirer.com/weblog/vignettes/
87•absqueued•4h ago•51 comments

Python Data Science Handbook

https://jakevdp.github.io/PythonDataScienceHandbook/
84•cl3misch•4h ago•18 comments

Peter Thiel's Apocalyptic Worldview Is a Dangerous Fantasy

https://jacobin.com/2025/11/peter-thiel-palantir-apocalypse-antichrist
81•robtherobber•30m ago•27 comments

Apple Releases Open Weights Video Model

https://starflow-v.github.io
342•vessenes•11h ago•108 comments

What will enter the public domain in 2026?

https://publicdomainreview.org/features/entering-the-public-domain/2026/
395•herbertl•13h ago•252 comments

YouTube increases FreeBASIC performance (2019)

https://freebasic.net/forum/viewtopic.php?t=27927
120•giancarlostoro•2d ago•23 comments

I Designed and Printed a Custom Nose Guard to Help My Dog with DLE

https://snoutcover.com/billie-story
13•ragswag•2d ago•1 comment

Comparing AWS Lambda ARM64 vs. x86_64 Performance Across Runtimes in Late 2025

https://chrisebert.net/comparing-aws-lambda-arm64-vs-x86_64-performance-across-multiple-runtimes-...
91•hasanhaja•7h ago•41 comments

DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]

https://huggingface.co/deepseek-ai/DeepSeek-V3.2/resolve/main/assets/paper.pdf
906•pretext•1d ago•432 comments

India orders smartphone makers to preload state-owned cyber safety app

https://www.reuters.com/sustainability/boards-policy-regulation/india-orders-mobile-phones-preloa...
828•jmsflknr•1d ago•609 comments

Beej's Guide to Learning Computer Science

https://beej.us/guide/bglcs/
266•amruthreddi•2d ago•96 comments

Zig's new plan for asynchronous programs

https://lwn.net/SubscriberLink/1046084/4c048ee008e1c70e/
69•messe•2h ago•54 comments

Fallout 2's Chris Avellone describes his game design philosophy

https://arstechnica.com/gaming/2025/12/fallout-2-designer-chris-avellone-recalls-his-first-forays...
19•LaSombra•57m ago•3 comments

An LED panel that shows the aviation around you

https://github.com/AxisNimble/TheFlightWall_OSS
58•yzydserd•5d ago•11 comments

How Brian Eno Created Ambient 1: Music for Airports (2019)

https://reverbmachine.com/blog/deconstructing-brian-eno-music-for-airports/
138•dijksterhuis•9h ago•74 comments

Proximity to coworkers increases long-run development, lowers short-term output (2023)

https://pallais.scholars.harvard.edu/publications/power-proximity-coworkers-training-tomorrow-or-...
107•delichon•2h ago•74 comments

Show HN: RunMat – runtime with auto CPU/GPU routing for dense math

https://github.com/runmat-org/runmat
9•nallana•1h ago•2 comments

Lazier Binary Decision Diagrams for set-theoretic types

https://elixir-lang.org/blog/2025/12/02/lazier-bdds-for-set-theoretic-types/
21•tvda•4h ago•2 comments

Rootless Pings in Rust

https://bou.ke/blog/rust-ping/
95•bouk•9h ago•68 comments

Tom Stoppard has died

https://www.bbc.com/news/articles/c74xe49q7vlo
149•mstep•2d ago•46 comments

Reverse math shows why hard problems are hard

https://www.quantamagazine.org/reverse-mathematics-illuminates-why-hard-problems-are-hard-20251201/
147•gsf_emergency_6•14h ago•30 comments

After Windows Update, Password icon invisible, click where it used to be

https://support.microsoft.com/en-us/topic/august-29-2025-kb5064081-os-build-26100-5074-preview-3f...
143•zdw•14h ago•148 comments

Codex, Opus, Gemini try to build Counter Strike

https://www.instantdb.com/essays/agents_building_counterstrike
269•stopachka•3d ago•107 comments

OpenAI declares 'code red' as Google catches up in AI race

https://www.theverge.com/news/836212/openai-code-red-chatgpt
47•goplayoutside•1h ago

Comments

ChrisArchitect•1h ago
Source: https://www.wsj.com/tech/ai/openais-altman-declares-code-red... (https://news.ycombinator.com/item?id=46118396)
skywhopper•1h ago

    There will be a daily call for those tasked
    with improving the chatbot, the memo said,
    and Altman encouraged temporary team transfers
    to speed up development.
Truly brilliant software development management going on here. Daily update meetings and temporary staff transfers. Well-known strategies for increasing velocity!
lubujackson•43m ago
Don't forget scuttling all the projects the staff has been working overtime to complete so that they can focus on "make it better!" *waves hands frantically*
another_twist•35m ago
"The results of this quarter were already baked in a couple of quarters ago"

- Jeff Bezos

Quite right tbh.

giancarlostoro•31m ago
I've had ideas for how to improve all the different chatbots for like 3 years, and nobody has implemented any of them (usually my ideas get implemented in software as if the devs read my mind, but AI seems to be stuck with the same UI for LLMs). None of these AI shops are run by people with vision, it feels like. Everyone's just remaking a slightly better version of SmarterChild.
theplatman•25m ago
I agree - it shows a remarkable lack of creativity that we're still stuck with a fairly subpar UX for interacting with these tools.
whiplash451•24m ago
Did you open-source / publish these ideas?
simianwords•7m ago
I really want a UI that visualises branching. I would like to branch off specific parts of a response and continue the conversation there, while also keeping the original conversation. It seems like it should be a very standard feature, but no one has built it.
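A minimal sketch of the data structure such a branching UI implies: a tree of messages where any reply can fork a new branch while the original thread stays intact. The class and method names below are hypothetical, not any vendor's API.

    from dataclasses import dataclass, field
    from typing import Optional
    import itertools

    _ids = itertools.count()

    @dataclass
    class Message:
        """One chat message; children are alternative continuations (branches)."""
        role: str                        # "user" or "assistant"
        text: str
        parent: Optional["Message"] = None
        children: list["Message"] = field(default_factory=list)
        id: int = field(default_factory=lambda: next(_ids))

        def reply(self, role: str, text: str) -> "Message":
            """Add a continuation under this message; several replies = several branches."""
            child = Message(role, text, parent=self)
            self.children.append(child)
            return child

        def thread(self) -> list["Message"]:
            """Linear history from the root to this message (what the UI shows, or what goes to the model)."""
            node, path = self, []
            while node is not None:
                path.append(node)
                node = node.parent
            return list(reversed(path))

    # Fork off an earlier assistant reply while keeping the original conversation intact.
    root = Message("user", "Explain reverse mathematics.")
    answer = root.reply("assistant", "Reverse mathematics asks which axioms a theorem really needs...")
    original = answer.reply("user", "Give an example theorem.")
    branch = answer.reply("user", "How does this relate to computability?")  # second branch, same parent
    print(len(answer.children), "branches share the prefix", [m.role for m in branch.thread()[:-1]])

Rendering is then just drawing thread() for whichever leaf is selected, with a branch switcher wherever a node has more than one child.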
mlmonkey•31m ago
The beatings will continue until morale^H^H^H^H^H^H chatGPT improves...
trymas•23m ago
…someone even wrote a book about this. Something about “mythical men”… :D
simianwords•6m ago
It's easy to dismiss it, but what would you do instead?
rf15•1h ago
This sounds like their medicine might be worse than what they're currently doing...
rappatic•57m ago
> the company will be delaying initiatives like ads, shopping and health agents, and a personal assistant, Pulse, to focus on improving ChatGPT

There are maybe a few hundred people in the industry who can truly do original work on fundamentally improving a bleeding-edge LLM like ChatGPT, and a whole lot of people who can work on ads and shopping. One doesn't seem to get in the way of the other.

jasonthorsness•41m ago
Ha, what an incredibly consumer-friendly outcome! Hopefully competition keeps the focus on improving models and prevents irritating kinds of monetization.
another_twist•36m ago
If there's no monetization, the industry will just collapse. Not a good thing to aspire to. I hope they make money whilst doing these improvements.
thrance•19m ago
If there's no monetization, the industry will just collapse, except for Google, which is probably what they want.
Ericson2314•17m ago
If people pay for inference, that's revenue. Ads and stuff is plan B for inference being too cheap, or the value being too low.
ma2rten•36m ago
Delaying doesn't necessarily mean they stop working on it. It might also be a question of compute resource allocation.
whiplash451•26m ago
The bottleneck isn’t the people doing the work but the leadership’s bandwidth for strategic thinking
tiahura•19m ago
How is strategic thinking going to produce novel ideas about neural networks?
ceejayoz•13m ago
The strategic thinking revolves around "how do we put ads in without everyone getting massively pissed?" sort of questions.
kokanee•11m ago
I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.
rob74•22m ago
I for one would say, the later they add the "ads" feature, the better...
logsr•10m ago
There are two layers here: 1) low level LLM architecture 2) applying low level LLM architecture in novel ways. It is true that there are maybe a couple hundred people who can make significant advances on layer 1, but layer 2 constantly drives progress on whatever level of capability layer 1 is at, and it depends mostly on broad and diverse subject matter expertise, and doesn't require any low level ability to implement or improve on LLM architectures, only understanding how to apply them more effectively in new fields. The real key thing is finding ways to create automated validation systems, similar to what is possible for coding, that can be used to create synthetic datasets for reinforcement learning. Layer 2 capabilities do feed back into improved core models, even if you have the same core architecture, because you are generating more and improved data for retraining.
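A minimal sketch of the generate-then-validate loop described above, assuming a coding-style task where a tiny test suite can score candidates automatically; the function names and record format are hypothetical stand-ins, not any lab's actual pipeline.

    import json
    import random

    def generate_candidate(prompt: str) -> str:
        """Stand-in for sampling a solution from a model; here just a canned choice."""
        return random.choice([
            "def add(a, b):\n    return a + b",
            "def add(a, b):\n    return a - b",   # a wrong candidate the validator should reject
        ])

    def validate(candidate: str) -> bool:
        """Automated check: run the candidate against a small test suite."""
        namespace: dict = {}
        try:
            exec(candidate, namespace)            # fine for a sketch; sandbox this in practice
            return namespace["add"](2, 3) == 5 and namespace["add"](-1, 1) == 0
        except Exception:
            return False

    def build_synthetic_dataset(prompts: list[str], samples_per_prompt: int = 4) -> list[dict]:
        """Keep only validated (prompt, solution) pairs for later RL or fine-tuning."""
        dataset = []
        for prompt in prompts:
            for _ in range(samples_per_prompt):
                candidate = generate_candidate(prompt)
                if validate(candidate):
                    dataset.append({"prompt": prompt, "completion": candidate, "reward": 1.0})
        return dataset

    print(json.dumps(build_synthetic_dataset(["Write add(a, b) returning their sum."]), indent=2))

The point is that domain experts mostly contribute on the validate() side: encoding what "correct" means in a new field is what unlocks the data, without touching the model architecture.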
techblueberry•9m ago
Far be it from me to backseat-drive for Sam Altman, but is the problem really that the core product needs improvement, or that it needs a better ecosystem? I can't imagine people are choosing their chatbots based on which one gives the perfect answers; it's what you can do with it. I would assume Google has the advantage because it's built into a tool people already use every day, not because it's nominally "better" at generating text. Didn't people prefer ChatGPT 4 to 5 anyway?
rashidujang•32m ago
> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.

It's incredible how 50-year-old advice from The Mythical Man-Month is still not heeded. Throw in a knee-jerk "daily call" (sound familiar?) for people who are already wading knee-deep through work and you have a perfect storm of terrible working conditions. My money is on Google, which in my opinion has not only caught up but surpassed OpenAI with the latest iteration of its AI offerings.

tiahura•21m ago
Also, Google has plenty of (unmatched?) proprietary data and its own money tree to fuel the money furnace.
FinnKuhn•15m ago
As well as their own hardware and a steady cash flow to finance their AI endeavours for longer.
dathinab•19m ago
The thought that this might have been done on a recommendation from ChatGPT has me rolling.

Think about it: with how much bad advice is out there on certain topics, it's guaranteed that ChatGPT will promote common bad advice in many cases.

amelius•19m ago
Imho it just shows how relatively simple this technology really is, and nobody will have a moat. The bubble will pop.
deelowe•15m ago
Not exactly. Infra will win the race. In this aspect, Google is miles ahead of the competition. Their DC solutions scale very well. Their only risk is that the hardware and low level software stack is EXTREMELY custom. They don't even fully leverage OCP. Having said that, this has never been a major problem for Google over their 20+ years of moving away from OTS parts.
simianwords•11m ago
Amazing how the bubble pops either from the technology being too simple or from it being too complex to make a profit.
wlesieutre•15m ago
Besides, can't they just allocate more ChatGPT instances to accelerating their development?
woeirua•14m ago
Wait, shouldn't their internal agents be able to do all this work by now?
Fricken•31m ago
I take this code red as a red flag. OpenAI should keep its focus on where it will be 5 years from now, not lose sight of that out of concern about where it will be 5 months from now.
theplatman•26m ago
OpenAI is at risk of complete collapse if it cannot fulfill its financial obligations. If the people willing to give them money no longer have faith in their ability to win the AI race, then they're going out of business.
cj•20m ago
ChatGPT is riding on momentum (on the consumer side especially). Everything collapses if that momentum dies.
dylan604•7m ago
Back in the day, before Adobe bought Macromedia, there was a constant back-and-forth between Illustrator and FreeHand where each release would one-up the competitor, at least until the competitor's next release.

Does anyone in AI think about 5 years from now?

whiplash451•27m ago
It’s actually code yellow
Phelinofist•24m ago
IMHO Gemini surpassed ChatGPT by quite a bit - I switched. Gemini is faster, the thinking mode gives me reliably better answers and it has a more "business like" conversation attitude which is refreshing in comparison to the over-the-top informal ChatGPT default.
pengaru•24m ago
Surely they can just use AI to go faster and attend their daily calls for them...
poszlem•22m ago
To be honest, this is the first month in almost a year when I didn't pay for ChatGPT Pro and instead went for Gemini Ultra. It's still not there for programming, where I use Claude Max, but for my 'daily driver' (count this, advice on that, 'is this cancer or just a headache' kind of thing), Gemini has finally surpassed ChatGPT for me. And I used to consider it to be the worst of the bunch.

Gemini constantly refused to help me in the past, but not only has it improved, ChatGPT seems to have gone down the 'nerfing' road, where it now quite often flat-out refuses to do what I ask it to do.

mrkramer•20m ago
Google is shivering! /s
dwa3592•18m ago
Why couldn't GPT-5.1 improve itself? Last I heard, it can produce original math and has PhD-level intelligence.
alecco•15m ago
OpenAI was founded to hedge against Google dominating AI and with it the future. It makes me sad how that was lost for pipe dreams (AGI) and terrible leadership.

I fear a Google dystopia. I hope DeepSeek or somebody else will counter-balance their power.

bryanlarsen•6m ago
That goal has succeeded wildly -- there are now several well-financed companies competing against Google.

The goal was supposed to be an ethical competitor as implied by the word "Open" in their name. When Meta and the Chinese are the most ethical of the competitors, you know we're in a bad spot...

sometimes_all•14m ago
For regular consumers, Gemini's AI Pro plan is a tough one to beat. The chat quality has gotten much better, I can share my plan with a couple more people in my family (each with their own chat history), I get 2 TB of extra storage (also shareable), plus some really nice stuff like NotebookLM, which has been amazing for doing research. Veo/Nanobanana are nice bonuses.

It's easily worth the monthly cost, and I'm happy to pay - something which I didn't even consider doing a year ago. OpenAI just doesn't have the same bundle effect.

Obviously power users and companies will likely consider Anthropic. I don't know what OpenAI's actual product moat is any more outside of a well-known name.

theoldgreybeard•13m ago
You can't make a baby in 1 month with 9 women, Sam.
vivzkestrel•11m ago
In one of the Indian movies, there is a rather funny line: "tu jiss school se padh kar aaya hai mein uss school ka headmaster hoon". It translates roughly as "The school you studied at? I am the principal of that school." Looks like Google is about to show who the true principal is.
spwa4•10m ago
We are in a pretty amazing situation. If you're willing to go down 10% in benchmark scores, you can easily cut your costs to around 25% of what they were. And now DeepSeek 3.2 is another shot across the bow.

But if SOTA intelligence becomes basically a price war, won't that mean that Google (and OpenAI and Microsoft and any other big-model vendor) loses big? Especially Google, as the margin even Google Cloud (famously much lower than Google's other businesses) requires to survive has got to be sizeable.
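Back-of-the-envelope version of that tradeoff, with purely illustrative prices and scores (none of these numbers come from a real pricing page):

    # Hypothetical comparison of a frontier model vs. a cheaper open-weights model.
    # Every number here is made up purely to illustrate the shape of the tradeoff.
    frontier = {"score": 90.0, "usd_per_mtok": 10.00}
    budget   = {"score": 81.0, "usd_per_mtok": 2.50}   # ~10% lower score at ~25% of the cost

    score_drop = 1 - budget["score"] / frontier["score"]
    cost_ratio = budget["usd_per_mtok"] / frontier["usd_per_mtok"]
    value_gain = (budget["score"] / budget["usd_per_mtok"]) / (frontier["score"] / frontier["usd_per_mtok"])

    print(f"score drop:            {score_drop:.0%}")   # 10%
    print(f"cost vs. frontier:     {cost_ratio:.0%}")   # 25%
    print(f"score-per-dollar gain: {value_gain:.1f}x")  # 3.6x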