frontpage.

An interactive map of Flock Cams

https://deflock.org/map#map=5/37.125286/-96.284180
173•anjel•1h ago•24 comments

MacBook Neo

https://www.apple.com/newsroom/2026/03/say-hello-to-macbook-neo/
1123•dm•5h ago•1457 comments

Making Firefox's right-click not suck with about:config

https://joshua.hu/firefox-making-right-click-not-suck
129•mmsc•2h ago•80 comments

Father claims Google's AI product fuelled son's delusional spiral

https://www.bbc.com/news/articles/czx44p99457o
22•tartoran•27m ago•1 comment

Something is afoot in the land of Qwen

https://simonwillison.net/2026/Mar/4/qwen/
322•simonw•4h ago•154 comments

Nobody Gets Promoted for Simplicity

https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/
696•aamederen•8h ago•404 comments

NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute

https://qlabs.sh/slowrun
53•sdpmas•2h ago•7 comments

Moss is a pixel canvas where every brush is a tiny program

https://www.moss.town/
77•smusamashah•9h ago•8 comments

Data Has Weight but Only on SSDs

https://cubiclenate.com/2026/03/04/data-has-weight-but-only-on-ssds-blathering/
21•LorenDB•1h ago•8 comments

“It turns out” (2010)

https://jsomers.net/blog/it-turns-out
181•Munksgaard•5h ago•64 comments

Roboflow (YC S20) Is Hiring a Security Engineer for AI Infra

https://roboflow.com/careers
1•yeldarb•2h ago

Who Writes the Bugs? A Deeper Look at 125,000 Kernel Vulnerabilities

https://pebblebed.com/blog/kernel-bugs-part2
43•MBCook•2h ago•8 comments

Faster C software with Dynamic Feature Detection

https://gist.github.com/jjl/d998164191af59a594500687a679b98d
21•todsacerdoti•1h ago•2 comments

Raspberry Pi Pico as AM Radio Transmitter

https://www.pesfandiar.com/blog/2026/02/28/pico-am-radio-transmitter
35•pesfandiar•3d ago•16 comments

Glaze by Raycast

https://www.glazeapp.com/
151•romac•6h ago•92 comments

Qwen3.5 Fine-Tuning Guide – Unsloth Documentation

https://unsloth.ai/docs/models/qwen3.5/fine-tune
180•bilsbie•8h ago•48 comments

My Favorite 39C3 Talks

https://asindu.xyz/my-favorite-39c3-talks/
11•max_•3d ago•2 comments

Libre Solar – Open Hardware for Renewable Energy

https://libre.solar
148•evolve2k•3d ago•45 comments

MyFirst Kids Watch Hacked. Access to Camera and Microphone

https://www.kth.se/en/om/nyheter/centrala-nyheter/kth-studenten-hackade-klocka-for-barn-1.1461249
78•jidoka•7h ago•21 comments

Agentic Engineering Patterns

https://simonwillison.net/guides/agentic-engineering-patterns/
438•r4um•15h ago•239 comments

The Space Race's Forgotten Theme Park

https://daily.jstor.org/the-space-races-forgotten-theme-park/
8•anarbadalov•2h ago•0 comments

Google ends its 30 percent app store fee and welcomes third-party app stores

https://www.engadget.com/apps/google-ends-its-30-percent-app-store-fee-and-welcomes-third-party-a...
20•_____k•34m ago•4 comments

RFC 9849: TLS Encrypted Client Hello

https://www.rfc-editor.org/rfc/rfc9849.html
242•P_qRs•12h ago•119 comments

TikTok will not introduce end-to-end encryption, saying it makes users less safe

https://www.bbc.com/news/articles/cly2m5e5ke4o
365•1659447091•18h ago•357 comments

Government grant-funded research should not be published in for-profit journals

https://www.experimental-history.com/p/the-one-science-reform-we-can-all
286•sito42•5h ago•125 comments

Emails to Outlook.com rejected due to a fault or overzealous blocking rules

https://www.theregister.com/2026/03/04/users_fume_at_outlookcom_email/
102•Bender•8h ago•67 comments

Motorola GrapheneOS devices will be bootloader unlockable/relockable

https://grapheneos.social/@GrapheneOS/116160393783585567
1157•pabs3•19h ago•472 comments

The 1,700-year-old megastructure history almost forgot

https://www.cnn.com/2026/02/28/travel/travel-news-jetavanaramaya-ephesus
15•simonebrunozzi•2d ago•2 comments

RE#: how we built the fastest regex engine in F#

https://iev.ee/blog/resharp-how-we-built-the-fastest-regex-in-fsharp/
171•exceptione•3d ago•60 comments

A CPU that runs entirely on GPU

https://github.com/robertcprice/nCPU
225•cypres•15h ago•110 comments

NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute

https://qlabs.sh/slowrun
53•sdpmas•2h ago

Comments

suddenlybananas•1h ago
Reminds me a fair bit of the BabyLM challenge. It would be good to give them a shout-out and see how this challenge differs.
sdpmas•1h ago
hey, it's Samip (behind the Slowrun repo). yeah, that's a fair point, we will mention them in the blog. but there are a couple of major differences:

1. our emphasis is on using more compute to get better data efficiency. this is important because there are lots of hacky changes that will get lower loss, but when compared to general methods that leverage a lot of compute, they don't do so well. and you can already see how this emphasis on compute leads to different methods from BabyLM!

2. our reasoning behind the repo has nothing to do with how much data a child sees, and our dataset is not tailored towards that either. it's simply pretraining on a random subset of the internet. we know there are better training algorithms that get lower loss on that data, and we are finding those.
soraki_soladead•1h ago
also, BabyLM is more of a conference track / workshop than an open-repo competition, which creates a different vibe.
archermarks•51m ago
Very cool idea. Interested to see how this progresses. One question: how worried are you about over-training on this particular dataset, i.e. instead of generalizing, leaning more toward memorization? Obviously you hold out a validation set, but since you're meta-optimizing the model by its performance on that validation set, you're still at risk of over-fitting.
sdpmas•44m ago
yes, good point. right now it's somewhat hard to overfit because the meta-optimization extracts tiny bits of information. but over time, we will switch the validation set to another random subset of FineWeb, or even entirely OOD datasets!
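The meta-overfitting worry discussed above can be sketched in miniature. Everything in this toy (a smoothed unigram model, a smoothing grid, the split sizes) is an assumption for illustration, not anything from the Slowrun repo: a hyperparameter is chosen by grid search on one small validation split, then checked on a second, disjoint split — a choice tuned against one fixed validation set need not be the best choice on a fresh one.

```python
# Toy illustration (hypothetical, not the Slowrun code) of tuning
# against a fixed validation split and then re-checking on a fresh one.
import math
import random
from collections import Counter

random.seed(0)
vocab = list("abcd")
# skewed "true" distribution the data is sampled from
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}

def sample(n):
    return random.choices(vocab, weights=[probs[v] for v in vocab], k=n)

def nll(counts, total, alpha, data):
    # average negative log-likelihood of a Laplace-smoothed unigram model
    return -sum(math.log((counts[x] + alpha) / (total + alpha * len(vocab)))
                for x in data) / len(data)

train, val_a, val_b = sample(200), sample(20), sample(2000)
counts, total = Counter(train), len(train)

# "meta-optimize": pick the alpha that minimizes loss on the small val_a
grid = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]
best_alpha = min(grid, key=lambda a: nll(counts, total, a, val_a))

# compare the tuned loss on val_a against a disjoint split val_b;
# a gap here is the signal that the tuning has fit val_a itself
print(best_alpha,
      round(nll(counts, total, best_alpha, val_a), 3),
      round(nll(counts, total, best_alpha, val_b), 3))
```

Swapping in a fresh split (as sdpmas suggests, e.g. another FineWeb subset or an OOD set) is exactly the `val_b` check here, scaled up.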
lzaborowski•22m ago
I like the idea of flipping the constraint. Most ML benchmarks assume unlimited data and limited compute, so people optimize for speed.

If high-quality training data becomes the real bottleneck, then the interesting question is how much signal you can extract from the same dataset when compute is cheap.
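That question can be made concrete with a deliberately tiny sketch (a toy linear regression, not anything from the Slowrun setup): hold the training set fixed and only increase the number of gradient-descent epochs — i.e. the compute — and watch held-out loss fall toward the noise floor of the data.

```python
# Toy sketch (hypothetical, not the Slowrun harness): a fixed, limited
# training set, with compute (epochs) as the only knob being turned.
import random

random.seed(1)

def make(n):
    # data from y = 2x + 0.5 plus Gaussian noise (std 0.1)
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [2 * x + 0.5 + random.gauss(0, 0.1) for x in xs]
    return xs, ys

train_x, train_y = make(32)    # the fixed, limited dataset
val_x, val_y = make(256)       # held-out data

def mse(w, b, xs, ys):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(epochs, lr=0.1):
    # full-batch gradient descent on mean squared error
    w = b = 0.0
    n = len(train_x)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(train_x, train_y)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(train_x, train_y)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# same data every time; only the compute budget grows
for epochs in (1, 10, 1000):
    w, b = train(epochs)
    print(epochs, round(mse(w, b, val_x, val_y), 4))
```

Held-out loss keeps improving with more passes until it bottoms out near the noise level — the "how much signal can you extract" question is about how far better algorithms can push that floor down on the same fixed data.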

navvyeanand•16m ago
Amazing job!