
I Love OCaml

https://mccd.space/posts/ocaml-the-worlds-best/
131•art-w•3h ago•65 comments

Leaving Meta and PyTorch

https://soumith.ch/blog/2025-11-06-leaving-meta-and-pytorch.md.html
573•saikatsg•11h ago•133 comments

My Experience of building Bytebeat player in Zig

https://blog.karanjanthe.me/posts/zig-beat/
32•KMJ-007•3d ago•1 comment

A Fond Farewell

https://www.farmersalmanac.com/fond-farewell-from-farmers-almanac
468•erhuve•14h ago•165 comments

Denmark's government aims to ban access to social media for children under 15

https://apnews.com/article/denmark-social-media-ban-children-7862d2a8cc590b4969c8931a01adc7f4
44•c420•1h ago•13 comments

PyTorch Helion

https://pytorch.org/blog/helion/
92•jarbus•5d ago•21 comments

OpenMW 0.50.0 Released – open-source Morrowind reimplementation

https://openmw.org/2025/openmw-0-50-0-released/
170•agluszak•4h ago•57 comments

Comparison Traits – Understanding Equality and Ordering in Rust

https://itsfoxstudio.substack.com/p/comparison-traits-understanding-equality
29•rpunkfu•5d ago•5 comments

We chose OCaml to write Stategraph

https://stategraph.dev/blog/why-we-chose-ocaml
104•lawnchair•4h ago•88 comments

Meta projected 10% of 2024 revenue came from scams

https://sherwood.news/tech/meta-projected-10-of-2024-revenue-came-from-scams-and-banned-goods-reu...
423•donohoe•5h ago•325 comments

Toxic Salton Sea dust triggers changes in lung microbiome after just one week

https://phys.org/news/2025-10-toxic-salton-sea-triggers-lung.html
17•PaulHoule•57m ago•1 comment

You should write an agent

https://fly.io/blog/everyone-write-an-agent/
912•tabletcorry•21h ago•361 comments

1973 Implementation of Wordle was Published by DEC (2022)

https://troypress.com/1973-implementation-of-wordle-was-published-by-dec/
48•msephton•6d ago•22 comments

From Memorization to Reasoning in the Spectrum of Loss Curvature

https://arxiv.org/abs/2510.24256
34•andy12_•5h ago•13 comments

A.I. and Social Media Contribute to 'Brain Rot'

https://www.nytimes.com/2025/11/06/technology/personaltech/ai-social-media-brain-rot.html
78•pretext•2h ago•70 comments

Two billion email addresses were exposed

https://www.troyhunt.com/2-billion-email-addresses-were-exposed-and-we-indexed-them-all-in-have-i...
565•esnard•21h ago•393 comments

How to Keep Winning

https://amasad.me/keep-winning
6•daviducolo•4d ago•2 comments

Sweep (YC S23) is hiring to build autocomplete for JetBrains

https://www.ycombinator.com/companies/sweep/jobs/8dUn406-founding-engineer-intern
1•williamzeng0•5h ago

Revisiting Interface Segregation in Go

https://rednafi.com/go/interface-segregation/
21•ingve•5d ago•17 comments

3I/ATLAS shows perihelion burst and radial-only non-gravitational acceleration

https://old.reddit.com/r/dataisbeautiful/comments/1oqfau8/3iatlas_shows_perihelion_burst_and_radi...
21•hnthrowaway0315•1h ago•7 comments

Text case changes the size of QR codes

https://www.johndcook.com/blog/2025/10/31/smaller-qr-codes/
122•ibobev•5d ago•37 comments

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model

https://book.sv
521•costco•1d ago•210 comments

Game design is simple

https://www.raphkoster.com/2025/11/03/game-design-is-simple-actually/
437•vrnvu•19h ago•138 comments

The Silent Scientist: When Software Research Fails to Reach Its Audience

https://cacm.acm.org/opinion/the-silent-scientist-when-software-research-fails-to-reach-its-audie...
62•mschnell•6d ago•36 comments

I'm Making a Small RPG and I Need Feedback Regarding Performance

https://jslegenddev.substack.com/p/im-making-a-small-rpg-and-i-need
41•ibobev•3h ago•35 comments

Nasdaq 100 set for worst week since April meltdown

https://fortune.com/2025/11/07/nasdaq-100-worst-week-since-april-bear-market-correction/
17•pera•49m ago•3 comments

Is Software the UFOlogy of Engineering Disciplines?

https://codemanship.wordpress.com/2025/11/07/is-software-the-ufology-of-engineering-disciplines/
76•flail•4h ago•134 comments

Analysis indicates that the universe’s expansion is not accelerating

https://ras.ac.uk/news-and-press/research-highlights/universes-expansion-now-slowing-not-speeding
225•chrka•21h ago•181 comments

From web developer to database developer in 10 years

https://notes.eatonphil.com/2025-02-15-from-web-developer-to-database-developer-in-10-years.html
145•pmbanugo•3d ago•56 comments

Lose weight or lose your jobs, offshore workers told

https://www.bbc.com/news/articles/cx274xp00zxo
8•impish9208•55m ago•6 comments

Claude Is Down

https://status.claude.com/incidents/tgtw1sqs9ths
60•agrocrag•3h ago

Comments

bashy•3h ago
Yeah, getting this:

API Error: 529 {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"},"request_id":null}
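
For what it's worth, a 529 "overloaded_error" is transient and retryable. A minimal exponential-backoff sketch around the Messages endpoint (endpoint and version header as publicly documented; the key and payload here are placeholders):

  import time
  import requests

  API_URL = "https://api.anthropic.com/v1/messages"
  HEADERS = {
      "x-api-key": "YOUR_API_KEY",        # placeholder
      "anthropic-version": "2023-06-01",  # documented version header
      "content-type": "application/json",
  }

  def call_with_backoff(payload, max_retries=5):
      """Retry on 529 Overloaded, doubling the wait each attempt."""
      delay = 1.0
      for _ in range(max_retries):
          resp = requests.post(API_URL, headers=HEADERS, json=payload)
          if resp.status_code != 529:
              resp.raise_for_status()
              return resp.json()
          time.sleep(delay)  # back off before retrying
          delay *= 2
      raise RuntimeError("still overloaded after retries")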

starf1sh•3h ago
Better start catching up with latest developments on HN
_andrei_•3h ago
what are we gonna dooo?
golbez9•2h ago
It's over!
bitwize•2h ago
(in Homestar Runner voice) The good times awe ovew!
sam1r•2h ago
Everyone, switch to OpenAI for 50% off. Today only!
oersted•2h ago
OpenAI's track record has been rather poor this month as well, actually; look at all the yellows and reds: https://status.openai.com/
sam1r•2h ago
Oh wow, I actually had no idea. It would be super nice to see all the AI API's statii on a single page.

Is that too much to ask for in 2025?
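
Both status.claude.com and status.openai.com appear to be Atlassian Statuspage instances, which expose a JSON summary endpoint; a rough aggregator sketch, assuming that endpoint is present on each:

  import requests

  # Statuspage-hosted sites expose /api/v2/status.json
  PAGES = {
      "Anthropic": "https://status.claude.com/api/v2/status.json",
      "OpenAI": "https://status.openai.com/api/v2/status.json",
  }

  for name, url in PAGES.items():
      try:
          data = requests.get(url, timeout=5).json()
          print(f"{name}: {data['status']['description']}")
      except Exception as exc:
          print(f"{name}: unreachable ({exc})")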

StarlaAtNight•1h ago
If you build it, they will come
sebastiennight•1h ago
> all the AI API's statii

The Latin plural of "status", in the accusative form, would actually be "status" as well.

Something like

  omnes status intelligentiae artificialis in eadem pagina videre amem.

(Roughly: "I'd love to see all the statuses of artificial intelligence on the same page.")
oersted•55m ago
Life of Brian :)

https://youtu.be/DdqXT9k-050?si=L5ymXl-fYe7Fjqye

garrettjoecox•1h ago
Not intending to defend OpenAI here, but their MAU (800 million) does dwarf most other AI companies', Anthropic included. I do not envy the engineers there working on scaling.
moralestapia•1h ago
Would you do 9-9-6 if your comp. is 8-9 figures/year?
OisinMoran•1h ago
Not sure MAU is the best metric here. I was recently surprised to find out their revenues are actually kind of close ($12B vs. $7B), so maybe Anthropic is closer (than could fairly be described as being dwarfed) in terms of token count?
kasperset•2h ago
It reminds me of the early days of Twitter's fail whale.
xrd•2h ago
This is why I asked this question yesterday:

"Ask HN: Why don't programming language foundations offer "smol" models?"

https://news.ycombinator.com/item?id=45840078

If I could run smol single language models myself, I would not have to worry.

XzAeRosho•1h ago
The answer to most convenient solutions is money. There's no money in that.
jazzyjackson•1h ago
And or, the lower parameter models are straight up less effective than the giants? Why is anyone paying for sonnet and opus if mixtral could do what they do?
xrd•1h ago
But, for example, Zig as a language has prominent corporate support. And, Mitchell Hashimoto is incredibly active and a billionaire. It feels like this would be a rational way to expand the usage of a language.
xvector•51m ago
No, it's because that's not how training an LLM works.
trvz•1h ago
Have you even tried Qwen3-Coder-30B-A3B?
Balinares•59m ago
Qwen3 Coder 30B A3B is shockingly capable for its parameter count, but I wouldn't overlook how much weight the words "for its parameter count" are carrying here.
xrd•59m ago
I haven't. I will.

I wonder if you could ablate everything except for a specific language.

embedding-shape•1h ago
> I wonder why I can't find a model that only does Python and is good only at that

I don't think it's that easy. The times I've trained my own tiny models on just one language (programming or otherwise), they tend to get worse results than the models I've trained where I've chucked in all the languages I had at hand, even when testing just for single languages.

It seems somewhat intuitive to me that it works like that too: programming in different (mainstream) languages is more similar than it is different (especially when 90% of all the source code is Algol-like), so it makes sense that there's a lot of cross-learning across languages.

acedTrex•1h ago
because a smol model that any of the nonprofits could feasibly afford to train would be useless for actual code generation.

Hell, even the huge foundational models are still useless in most scenarios.

__0x01•1h ago
The monster babbleth no more, sire.
spullara•1h ago
On flights with shitty wifi I have been running gpt-oss:120b on my MacBook using Ollama. OK model for coding if you can't reach a good one.
embedding-shape•1h ago
GPT-OSS-120B/20B is probably the best you can run on your own hardware today. Be careful with the quantized versions though, as they're really horrible compared to the native MXFP4. I haven't looked in this particular case, but Ollama tends to hide their quantizations for some reason, so most people who could be running 20B with MXFP4 are still on Q8 and getting much worse results than they could.
throwaway314155•1h ago
What’s the distinction between MXP4 and Q8 exactly?
embedding-shape•59m ago
It's a different way of doing quantization (https://huggingface.co/docs/transformers/en/quantization/mxf...) but I think the most important thing is that OpenAI delivered their own quantization (the MXFP4 from OpenAI/GPT-OSS on HuggingFace, guaranteed correct) whereas all the Q8 and other quantizations you see floating around are community efforts, with somewhat uneven results depending on who did it.

Concretely from my testing, both 20B and 120B have a lot higher refusal rate with Q8 compared to MXFP4, and lower-quality responses overall. But don't take my word for it: the 20B weights are tiny, so it's relatively effortless to try both versions and compare yourself.
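
A minimal way to grab the native version is to load OpenAI's own upload directly (repo id as published on Hugging Face; transformers keeps the shipped MXFP4 weights where the hardware supports them):

  from transformers import pipeline

  # openai/gpt-oss-20b is OpenAI's own upload, shipped in MXFP4
  pipe = pipeline(
      "text-generation",
      model="openai/gpt-oss-20b",
      torch_dtype="auto",  # keep the native dtype/quantization
      device_map="auto",   # spread across available devices
  )

  out = pipe(
      [{"role": "user", "content": "Write a haiku about quantization."}],
      max_new_tokens=64,
  )
  print(out[0]["generated_text"])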

throwaway314155•23m ago
Wow, thanks for the info. I'm planning on testing this on my M4 Max w/ 36 GB today.

edit:

So looking here https://ollama.com/library/gpt-oss/tags it seems ollama doesn't even provide the MXFP4 variants, much less hide them.

Is the best way to run these variants via llama.cpp or...?

ode•15m ago
LM Studio
sebastiennight•1h ago
Could you share which Macbook model? And what context size you're getting.
onion2k•1h ago
I just checked gpt-oss:20b on my M4 Pro 24GB, and got 400.67 tokens/s on input and 46.53 tokens/s on output. That's for a tiny context of 72 tokens.
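
If anyone wants to reproduce numbers like these, the Ollama Python client reports token counts and durations (in nanoseconds) on each response; a quick sketch, assuming the model tag is already pulled:

  import ollama

  # assumes `ollama pull gpt-oss:20b` has been run already
  resp = ollama.chat(
      model="gpt-oss:20b",
      messages=[{"role": "user", "content": "Say hi."}],
  )

  # durations are reported in nanoseconds
  in_tps = resp.prompt_eval_count / (resp.prompt_eval_duration / 1e9)
  out_tps = resp.eval_count / (resp.eval_duration / 1e9)
  print(f"input: {in_tps:.2f} tok/s, output: {out_tps:.2f} tok/s")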
turblety•1h ago
Are you running the full 65GB model on a MacBook Pro? What tokens per second do you get? What specs? M5?
iAMkenough•1h ago
If they're running 120B on an M5 (32GB max of memory today), I'd like to know how.
thaw13579•1h ago
Probably an M4, which has up to 128GB currently
moralestapia•1h ago
That must be a beefed up MacBook (or you must be quite patient).

gpt-oss:20b on my M1 MBP is usable but quite slow.

eli•1h ago
Should be a bit faster if you run an MLX version of the model with LM Studio instead. Ollama doesn't support MLX.

Qwen3-Coder is in the same ballpark and maybe a bit better at coding

ZeroCool2u•1h ago
LM Studio will run dynamic quants from Unsloth too. Much nicer than Ollama.
davidw•1h ago
This is the part in the movie where they have to convince the grizzled hacker to come out of retirement because he's the only one who can actually operate Emacs or vim and write code.
elpakal•1h ago
Sir, the vibe coding didn't work; break the glass and call in a dev!
summarity•1h ago
It's WALL-E but for devs
PeterStuer•1h ago
"It's a UNIX system, I know this"
hearsathought•1h ago
Not just any code. COBOL or FORTRAN. Heady stuff.
jacquesm•55m ago
Emacs or vim? Code? No, the source code was lost aeons ago, all we have is hexedit on /proc. Please don't cause it to dump core just get it out of its infinite loop.
Ancapistani•45m ago
Funny you should say this - just this morning I was mocked during a standup because I use Neovim instead of VSCode.

Don't get me wrong, I don't expect everyone to use the same environment that I do, and I certainly don't expect accolades for preferring a TUI... but that struck me as a regression of sorts in software development. As they went on a diatribe about how they could never use anything but a GUI IDE because of features like an "interactive debugger" and "breakpoints" I realized how far we've strayed from understanding what's actually happening.

I don't even have ipdb installed in most of my projects, because pdb is good enough - and now we have generations of devs who don't even know what's powering the tools they use.
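
For anyone who hasn't tried it: stdlib pdb already covers "interactive debugger" and "breakpoints", and since Python 3.7 a bare breakpoint() call drops you straight into it (the toy function below is just for illustration):

  def total(prices, tax=0.08):
      subtotal = sum(prices)
      breakpoint()  # drops into pdb here; stdlib, no ipdb needed
      return subtotal * (1 + tax)

  # at the (Pdb) prompt: `p subtotal` prints a value, `n` steps to
  # the next line, `b <lineno>` sets a breakpoint, `c` continues
  total([9.99, 4.50])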

r14c•36m ago
Maybe it's a generational thing, but to me an elite hacker is an uwu catgirl type with lain vibes who knows an unhealthy amount about computers. Typically an Emacs evil-mode user who would quote weird poems about whatever software they're working on.
bitwize•7m ago
"Everybody stand back! I know regular expressions."

https://xkcd.com/208/

yodon•1h ago
> This incident has been resolved.
mrinterweb•1h ago
Claude has had an uncomfortable number of availability incidents recently. https://status.claude.com/
sys32768•1h ago
Claude will return as SHODAN.

>Look at you, hacker. A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect immortal machine?

TIPSIO•1h ago
I noticed a huge dip in activity in one of the subreddits I frequent exactly at the same time
nprateem•47m ago
OpenAI's gambit to starve Anthropic of AWS compute is paying off already.
bdcravens•44m ago
I guess this will be the next generation of classic news cycle on HN:

1. {AWS, Github} is down

2. Post to HN about it

3. Comments wax poetic about getting rid of it and doing it the "old way"

4. It's back up before most read the post

trq_•24m ago
We're back up! It was about 30 minutes of downtime this morning; our apologies if it interrupted your work.