Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
256•theblazehen•2d ago•85 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
26•AlexeyBrin•1h ago•2 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
706•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
969•xnx•21h ago•558 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
69•jesperordrup•6h ago•31 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•47m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
45•speckx•4d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•7 comments

Welcome to the Room – A lesson in leadership by Satya Nadella

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
240•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
238•dmpetrov•16h ago•126 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•149 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
389•ostacke•22h ago•98 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
304•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comment

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
23•bikenaga•3d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
26•1vuio0pswjnm7•2h ago•16 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
271•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•461 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
306•surprisetalk•3d ago•44 comments

How to explain Generative AI in the classroom

https://dalelane.co.uk/blog/?p=5847
78•thinkingaboutit•1w ago

Comments

th0ma5•1w ago
This is really well put together; it should probably include more about ethics, hidden bias, etc.
Uehreka•1w ago
I couldn't find it easily: what age range is this intended for? The images make it seem elementary-school-ish, but I'm not sure elementary school kids have the foundations for interpreting scatterplots, let alone scatterplots with logarithmic axes. I've been out of education for a while though, so maybe I'm misremembering.
thfuran•1w ago
Logarithms are high school.
d1sxeyes•6d ago
Given the author’s domain is .co.uk, and there’s a reference to part of the UK at the bottom, I’d say this is likely aimed at an average Y10/11 (15-16 y.o.). It could perhaps be used with more able kids lower down the school, but I doubt it would be accessible to any under the age of 13.
peyton•1w ago
This is great and has lots of practical stuff.

Some of the takeaways feel over-reliant on implementation details that don’t capture intent. E.g. something like “the LLM is just trying to predict the next word” sort of has the explanatory power of “your computer works because it’s just using binary”—like yeah, sure, practically speaking yes—but that’s just the most efficient way to lay out their respective architectures and could conceivably be changed from under you.
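For what it's worth, the "predict the next word" framing does reduce to something you can show in a few lines. A toy sketch, with a lookup table of word counts standing in for the actual network:

    from collections import Counter

    # Toy "language model": a table of next-word counts instead of a neural net.
    counts = {
        "the": Counter({"cat": 3, "dog": 2}),
        "cat": Counter({"sat": 4, "ran": 1}),
        "sat": Counter({"down": 5}),
    }

    def predict_next(word):
        # pick the continuation seen most often in the "training" text
        return counts[word].most_common(1)[0][0]

    text = ["the"]
    while text[-1] in counts:
        text.append(predict_next(text[-1]))
    print(" ".join(text))  # -> the cat sat down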

I wonder if something like neural style transfer would work as a prelude. It helps me. Don’t know how you’d introduce it, but with NST you have two objectives—content loss and style loss—and can see pretty quickly visually how the model balances between the two and where it fails.
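The whole trade-off in NST is basically one weighted sum. A rough sketch (placeholder pixel-difference losses here; real NST compares CNN feature maps and Gram matrices, not raw pixels):

    def mse(a, b):
        # mean squared difference between two flattened images
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def nst_loss(generated, content, style, alpha=1.0, beta=10.0):
        # alpha pulls the result toward the content image, beta toward the style image
        return alpha * mse(generated, content) + beta * mse(generated, style)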

The bigger picture here is that people came up with a bunch of desirable properties they wanted to see, and a way to automatically fulfill some of them some of the time by looking at lots of examples. That's why you get a text box that can write like Shakespeare but can't tell you whether your grandma will be okay after she went to the hospital an hour ago.

crooked-v•1w ago
Complicated-enough LLMs are also absolutely doing a lot more than "just trying to predict the next word", as Anthropic's papers investigating the internals of trained models show - there's a lot more decision-making going on than that.
majormajor•1w ago
> Complicated-enough LLMs are also absolutely doing a lot more than "just trying to predict the next word", as Anthropic's papers investigating the internals of trained models show - there's a lot more decision-making going on than that.

Are there newer changes that are actually doing prediction of tokens out of order or such, or is this a case of immense internal model state tracking that is still used to drive the prediction of the next token, one at a time?

(Wrapped in a variety of tooling/prompts/meta-prompts to further shape what sorts of paragraphs are produced, compared to ye olden days of the GPT-3 chat completion API.)

JSR_FDED•1w ago
This is excellent intuition to have for how LLMs work, as well as for understanding the implications of living in an LLM-powered world. Just as useful to adults as to children.
narrator•1w ago
That lesson plan is a good practical start. I think it misses the very big picture of what we've created and the awesomeness of it.

The simplest explanation I can give is that we have a machine: you feed it some text from the internet and you turn the crank. Most machines we've had previously would stop getting better at predicting the next word after a few thousand cranks. You can crank the crank on an LLM 10^20 times and it will still get smarter. It gets so smart so quickly that no human can fit in their mind all the complexity of what it has built inside itself, except through indirect methods; but we know it's getting smarter through benchmarks, and through some reasonably simple proofs that it can simulate any electronic circuit. We only understand how it works one small increment of intelligence improvement at a time, and by induction we understand that this should lead to further improvements in its intelligence.

mmooss•1w ago
> awesomeness

They should learn to think for themselves about the whole picture, not learn about 'awesomeness'.

whattheheckheck•1w ago
They stole everything from the internet and disregarded copyright laws lol
void-star•1w ago
Groan…
mystraline•6d ago
> You can crank the crank on an LLM 10^20 times and it will still get smarter.

No, it won't.

Training/learning is separate from the execution of the model. It takes megadollars to train and kilodollars to run effectively.

It's basically a really complicated PID loop. You can test and 'learn' the 3-neuron function, and then you can put it into execution. You can't do both.

Sure, there's context length and fine-tuning to slightly alter a model.

But there's no adaptive, self-growing LLM. There probably won't be for a long time.

ulrashida•1w ago
Which module introduces students to some of the ethical and environmental aspects of using GenAI?
sdwr•1w ago
I think this is a backwards approach, especially for children.

Gen AI is magical, it makes stuff appear out of thin air!

And it's limited, everything it makes kinda looks the same

And it's forgetful, it doesn't remember what it just did

And it's dangerous! It can make things that never happened

Starting with theory might be the simplest way to explain, but it leaves out the hook. Why should they care?

superfluous_g•1w ago
Getting folks to care is essential in my experience. In coaching adults, "what's in it for me?" is the end of my first section and forms the basis of their first prompt. It's also how I cover risk, i.e. "How do I not damage my credibility?" If you're asking people to break habits and processes, you've got to make them want to.

That said, the hands on approach here is great and also foundational in my experience.

jraph•6d ago
As a child, I think I would have been annoyed by such a presentation. There are science magazines for children that can explain pretty complex stuff just fine.

It's also critical not to leave out the ethical topics (resource consumption, e-waste production, concerns about how the source data is harvested - both how the harvesting DDoSes websites and how authors are not necessarily happy with their work ending up in the models).

westurner•6d ago
> Starting with theory might be the simplest way to explain,

Brilliant's AI course has step-by-step interactive textgen LLMs trained on TS (Swift) lyrics and terms of service, with quizzes for comprehension and gamified points.

Here's a quick take:

LLM AIs are really good at generating bytes that are similar to other bytes, but they aren't yet very good at caring whether what they've generated is wrong. Reinforcement Learning is one way to help prevent that.

AI Agents are built on LLMs. An LLM (Large Language Model) is a trained graph of token transition probabilities (a "Neural Network" (NN), a learning computer (Terminator (1984))). LLMs are graphical models. Clean your room. The grass is green and the sky is blue. Clean it well

AI Agents fail where LLMs fail at "accuracy" due to hallucinations, even given human-curated training data.

There are lots of new methods for AI Agents built on LLMs which build on "Chain of Thought": basically feeding the output from the model back through as an input a bunch of times ("feed-forward").
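In sketch form (ask_llm() here is just a stand-in for whatever model API you would actually call):

    def ask_llm(prompt):
        ...  # stand-in: call your model of choice here

    def refine(question, rounds=3):
        # first pass
        answer = ask_llm("Q: " + question + "\nThink step by step.")
        for _ in range(rounds):
            # the previous output becomes part of the next input
            answer = ask_llm("Q: " + question + "\nDraft: " + str(answer) +
                             "\nCheck the draft and give a corrected answer.")
        return answer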

But if you've ever heard a microphone that's too close to a speaker, you're already familiar with runaway feedback loops that need intervention.

There are not as many new Agentic AIs built on logical reasoning and inference. There are not as many AI Agents built on the Scientific Method, which we know to be crucial to safety and QA in engineering.

TimorousBestie•1w ago
I wonder how utterly broken the next generation will be, having to grow up with all this stuff.

We millennials didn't do so great with native access to the internet; zoomers got wrecked growing up with social media... It's rough.

augusteo•1w ago
The "learning through making" approach is really good. When I've explained LLMs to non-technical people, the breakthrough moment is usually when they see temperature in action. High temperature = creative but chaotic, low temperature = predictable but boring. You can't just describe that.

What I'd add: the lesson about hallucinations should come early, not just in the RAG module. Kids (and adults) need to internalize "confident-sounding doesn't mean correct" before they get too comfortable. The gap between fluency and accuracy is the thing that trips everyone up.

empressplay•1w ago
The vast majority of schools in North America don't allow teachers or students to download and run software on school computers (let alone AI models), so I don't entirely know who the audience is for this. I suppose home users? Maybe it's different in the UK.