
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
186•ColinWright•1h ago•176 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
22•valyala•2h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
124•AlexeyBrin•7h ago•24 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
17•valyala•2h ago•1 comment

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
158•alephnerd•2h ago•106 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
65•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
833•klaussilveira•22h ago•250 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
120•1vuio0pswjnm7•8h ago•150 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1061•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
81•onurkanbkrc•7h ago•5 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•58m ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
490•theblazehen•3d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
212•jesperordrup•12h ago•73 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
567•nar001•6h ago•259 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
226•alainrk•6h ago•354 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
40•rbanffy•4d ago•7 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
10•momciloo•2h ago•0 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•33 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
77•speckx•4d ago•83 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
275•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
288•dmpetrov•22h ago•155 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•269 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
427•ostacke•1d ago•111 comments

Domain Adaptation of Base Models + ShadowdarkQA Bench

https://gygaxtest.com/posts/continued_pretraining_for-rules/
17•pact_inference•8mo ago

Comments

palmfacehn•8mo ago
Isn't this a use case for RAG?
pact_inference•8mo ago
Definitely! However, my intuition is that correctly interpreting the rules pulled into context will require some basic understanding of the game system, which pretraining should help with. Ultimately, after instruction-tuning this base model and adding tool use (to provide a search tool), I'll compare it against https://huggingface.co/Qwen/Qwen3-0.6B without any domain-specific pretraining and see how each performs at rule adjudication. I expect the shadowdark-trained model will have a better understanding of the rules, but there's only one way to find out.
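
A minimal sketch of what such a search tool might look like, assuming the rules live in a single markdown file; the filename, function name, and scoring are illustrative, not from the post:

    import re

    def search_rules(query: str, path: str = "shadowdark_rules.md", k: int = 3) -> list[str]:
        # Naive keyword retrieval over markdown sections: split on headings,
        # score each section by how many query terms it contains, return top-k.
        sections = re.split(r"\n(?=#)", open(path, encoding="utf-8").read())
        terms = set(query.lower().split())
        scored = sorted(sections,
                        key=lambda s: sum(t in s.lower() for t in terms),
                        reverse=True)
        return scored[:k]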
palmfacehn•8mo ago
It is an interesting problem to solve. When reading, I noticed the model's ambiguity around terms like 4d6. At first I thought you might try editing your markup to describe the concept of dice more thoroughly. Ultimately I wonder if you might try having the model fill in data to be utilized by a hard-coded combat system. Are you going to rely on the LLM for pseudorandom numbers? Concepts like turns and dice rolls could be abstractly defined in code and instantiated by the model.
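
One way to read that suggestion, as a sketch: the model only ever emits the dice expression, and plain code does the rolling. The function name and notation handling are illustrative:

    import random
    import re

    def roll(expr: str) -> int:
        # Roll standard dice notation like "4d6" or "2d8+3" in code,
        # keeping pseudorandomness out of the LLM entirely.
        m = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", expr.strip().lower())
        if not m:
            raise ValueError(f"bad dice expression: {expr}")
        n, sides, mod = int(m[1]), int(m[2]), int(m[3] or 0)
        return sum(random.randint(1, sides) for _ in range(n)) + mod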

The model might excel at creating character sheets, once you define a schema. From there you can validate the generated sheets against known lore. You could combine the storytelling from the LLM with the formalized character schema to create campaigns. I'm not an expert here, but I suspect you might try asking the model to translate an existing fantasy story dataset into a series of narration/dialogue blocks and character sheets.
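
A sketch of that validation step; the field names and ranges here are illustrative, not Shadowdark's actual sheet layout:

    from dataclasses import dataclass

    @dataclass
    class CharacterSheet:
        name: str
        ancestry: str
        char_class: str      # "class" is a reserved word in Python
        level: int
        stats: dict          # e.g. {"STR": 14, "DEX": 12, ...}

    def validate(raw: dict) -> CharacterSheet:
        # Reject generations that don't fit the schema before they
        # reach any downstream campaign state.
        sheet = CharacterSheet(**raw)  # TypeError on missing/extra keys
        if not 1 <= sheet.level <= 10:
            raise ValueError(f"level out of range: {sheet.level}")
        if any(not 3 <= v <= 18 for v in sheet.stats.values()):
            raise ValueError("ability scores should be 3-18")
        return sheet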

Without training, I've experimented with similar approaches for item generation using EBNF.

pact_inference•8mo ago
> Are you going to rely on the LLM for pseudorandom numbers?

Definitely! I'm going to start with instruction-tuning it for basic question answering, and then add tools that let it search the markdown source to cite answers to rules questions. I think adding some dice tooling for proper character sheet creation would be an awesome task to test as well. I'm actually thinking a lot about tasks whose correctness is trivially verifiable programmatically, for stuff like GRPO, so I'm definitely going to use that idea.
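
A sketch of what "trivially verifiable" could mean here: give the model the individual die results and a modifier, and reward the completion only if it ends with the correct total. The task design is my guess at the idea, not the author's actual setup:

    import re

    def dice_math_reward(rolls: list[int], modifier: int, completion: str) -> float:
        # Binary reward for GRPO-style training: 1.0 if the last integer
        # in the completion equals the true total, else 0.0.
        expected = sum(rolls) + modifier
        nums = re.findall(r"-?\d+", completion)
        return 1.0 if nums and int(nums[-1]) == expected else 0.0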

> You could combine the story telling from the LLM with the formalized character schema to create campaigns. I'm not an expert here, but I suspect you might try asking the model to translate an existing fantasy story dataset into a series of narration/dialogue blocks and character sheets.

I think probably late this year I'll be able to work on that sort of thing. There's a really interesting approach to story generation here: https://arxiv.org/abs/2503.22828, but working out how to translate it into campaign-relevant structured objects, and how to "reward" that, will take some experimentation.

jasonjmcghee•8mo ago
> I used the AdamW optimizer and selected a learning rate of 5e-5. I’ve seen learning rates of 5e-6 for pretraining and 5e-5 for finetuning. I would consider this closer to the latter - I don’t want to totally destroy the knowledge Qwen already had, I just want to add to it a bit.

Is this a typo? Maybe 5e-4 for pretraining?

Otherwise this goes against all the intuition I have around learning rates and catastrophic forgetting (a smaller learning rate causing knowledge degradation).

pact_inference•8mo ago
Whoops, definitely a typo! It should be 5e-4 as the base "pretraining" LR, you're absolutely correct.

Your intuition is sound, but my fingers are not.
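
For anyone skimming past the correction, the intended numbers in plain PyTorch; a sketch, not the post's actual training script:

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

    # Larger LR for continued pretraining; an order of magnitude smaller
    # when finetuning, to limit catastrophic forgetting.
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)   # continued pretraining
    # optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5) # finetuning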