frontpage.

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•55s ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•3m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•3m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•6m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•17m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•23m ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•27m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•36m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•43m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•46m ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•47m ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•47m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•48m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•48m ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•49m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•54m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comment

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
4•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comment

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comment

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

How to explain Generative AI in the classroom

https://dalelane.co.uk/blog/?p=5847
78•thinkingaboutit•1w ago

Comments

th0ma5•1w ago
This is really well put together, it should probably include more about ethics, hidden bias, etc.
Uehreka•1w ago
I couldn’t find it easily: what age range is this intended for? The images make it seem elementary-school-ish, but I’m not sure if elementary school kids have the foundations for interpreting scatterplots, let alone scatterplots with logarithmic axes. I’ve been out of education for a while though, so maybe I’m misremembering.
thfuran•1w ago
Logarithms are high school.
d1sxeyes•6d ago
Given the author’s domain is .co.uk, and there’s a reference to part of the UK at the bottom, I’d say this is likely aimed at an average Y10/11 (15-16 y.o.). It could perhaps be used with more able kids lower down the school, but I doubt it would be accessible to any under the age of 13.
peyton•1w ago
This is great and has lots of practical stuff.

Some of the takeaways feel over-reliant on implementation details that don’t capture intent. E.g. something like “the LLM is just trying to predict the next word” sort of has the explanatory power of “your computer works because it’s just using binary”—like yeah, sure, practically speaking yes—but that’s just the most efficient way to lay out their respective architectures and could conceivably be changed from under you.
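
To make that framing concrete, here's a toy sketch of what "predict the next word" means mechanically. The bigram table and its probabilities are invented for illustration; a real LLM conditions on the whole preceding context with a neural network, not a lookup table:

    import random

    # Invented next-word probabilities; a real model computes these
    # from the entire preceding context, not just the last word.
    bigram = {
        "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat": {"sat": 0.7, "<end>": 0.3},
        "dog": {"ran": 0.6, "<end>": 0.4},
        "moon": {"<end>": 1.0},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
    }

    def generate(start="the", max_words=10):
        words = [start]
        for _ in range(max_words):
            dist = bigram[words[-1]]
            nxt = random.choices(list(dist), weights=list(dist.values()))[0]
            if nxt == "<end>":
                break
            words.append(nxt)
        return " ".join(words)

    print(generate())  # e.g. "the cat sat"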

I wonder if something like neural style transfer would work as a prelude. It helps me. Don’t know how you’d introduce it, but with NST you have two objectives—content loss and style loss—and can see pretty quickly visually how the model balances between the two and where it fails.
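
Something like this toy shows the two objectives being traded off. It's a minimal sketch assuming PyTorch, and it computes both losses on raw pixels purely for brevity; real NST computes them on CNN feature maps (e.g. from VGG):

    import torch

    content = torch.rand(3, 64, 64)             # target "content" image
    style = torch.rand(3, 64, 64)               # target "style" image
    img = content.clone().requires_grad_(True)  # the image being optimized

    def gram(x):
        # Style statistics: correlations between channels.
        f = x.flatten(1)
        return f @ f.T / f.numel()

    alpha, beta = 1.0, 1e3                      # content weight vs. style weight
    opt = torch.optim.Adam([img], lr=0.01)

    for step in range(200):
        opt.zero_grad()
        content_loss = ((img - content) ** 2).mean()
        style_loss = ((gram(img) - gram(style)) ** 2).mean()
        loss = alpha * content_loss + beta * style_loss
        loss.backward()
        opt.step()

Sweeping alpha and beta and watching the output shift is the part you can see visually.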

The bigger picture here is that people came up with a bunch of desirable properties they wanted to see and a way to automatically fulfill some of them some of the time by looking at lots of examples, and it’s why you get a text box that can write like Shakespeare but can’t tell you whether your grandma will be okay after she went to the hospital an hour ago.

crooked-v•1w ago
Complicated-enough LLMs are also absolutely doing a lot more than "just trying to predict the next word", as Anthropic's papers investigating the internals of trained models show - there's a lot more decision-making going on than that.
majormajor•1w ago
> Complicated-enough LLMs are also absolutely doing a lot more than "just trying to predict the next word", as Anthropic's papers investigating the internals of trained models show - there's a lot more decision-making going on than that.

Are there newer changes that are actually doing prediction of tokens out of order or such, or is this a case of immense internal model state tracking that is still used to drive the prediction of a next token, one at a time?

(Wrapped in a variety of tooling/prompts/meta-prompts to further shape what sorts of paragraphs are produced compared to ye olden days of the gpt3 chat completion api.)

JSR_FDED•1w ago
This is excellent intuition to have for how LLMs work, as well as understanding the implications of living in an LLM-powered world. Just as useful to adults as children.
narrator•1w ago
That lesson plan is a good practical start. I think it misses the very big picture of what we've created and the awesomeness of it.

The simplest explanation I can give: we have a machine that you feed some text from the internet, and you turn the crank. Most machines we've had previously would stop getting better at predicting the next word after a few thousand cranks. You can crank the crank on an LLM 10^20 times and it will still get smarter. It gets so smart so quickly that no human can hold in their mind all the complexity of what it has built inside itself, except through indirect methods; we know it's getting smarter through benchmarks and some reasonably simple proofs that it can simulate any electronic circuit. We understand how it works only by the smallest increments of its intelligence improvement, and by induction we expect those increments to keep coming.

mmooss•1w ago
> awesomeness

They should learn to think for themselves about the whole picture, not learn about 'awesomeness'.

whattheheckheck•6d ago
They stole everything from the internet and disregarded copyright laws lol
void-star•6d ago
Groan…
mystraline•6d ago
> You can crank the crank on an LLM 10^20 times and it will still get smarter.

No, it won't.

Training/learning is separate from the execution of the model. It takes megadollars to train and kilodollars to run effectively.

It's basically a really complicated PID loop. You can train and 'learn' the three-neuron function, and then you can put it into execution. You can't do both at once.
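
For what the analogy buys you, here's a toy PID loop (invented gains and plant): the gains are tuned ahead of time and then only executed, the way weights are frozen after training:

    # Gains (kp, ki, kd) are fixed up front, like model weights after
    # training; running the loop executes them but never re-tunes them.
    def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=0.1):
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, (integral, error)

    setpoint, value, state = 1.0, 0.0, (0.0, 0.0)
    for _ in range(50):
        control, state = pid_step(setpoint - value, state)
        value += 0.05 * control    # toy plant response
    print(round(value, 2))         # settles near the setpoint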

Sure, there's context length and fine-tuning to slightly alter a model.

But there's no adaptive, self-growing LLM, and there probably won't be for a long time.

ulrashida•1w ago
Which module introduces students to some of the ethical and environmental aspects of using GenAI?
sdwr•1w ago
I think this is a backwards approach, especially for children.

Gen AI is magical, it makes stuff appear out of thin air!

And it's limited, everything it makes kinda looks the same

And it's forgetful, it doesn't remember what it just did

And it's dangerous! It can make things that never happened

Starting with theory might be the simplest way to explain, but it leaves out the hook. Why should they care?

superfluous_g•6d ago
Getting folks to care is essential in my experience. In coaching adults, "what's in it for me?" is the end of my first section and forms the basis of their first prompt. It's also how I cover risk, i.e. "How do I not damage my credibility?". If you're asking people to break habits and processes, you've got to make them want to.

That said, the hands on approach here is great and also foundational in my experience.

jraph•6d ago
As a child, I think I would have been annoyed by such a presentation. There are science magazines for children that can explain pretty complex stuff just fine.

It's also critical not to leave out the ethical topics (resource consumption, e-waste production, concerns about how the source data is harvested - both how the scraping DDoSes websites and how authors are not necessarily happy with their work ending up in the models)

westurner•6d ago
> Starting with theory might be the simplest way to explain,

Brilliant's AI course has step-by-step interactive textgen LLMs trained on Taylor Swift lyrics and terms-of-service documents, with quizzes for comprehension and gamified points.

Here's a quick take:

LLM AIs are really good at generating bytes that are similar to other bytes, but aren't yet very good at caring whether what they've generated is wrong. Reinforcement Learning is one way to help prevent that.

AI Agents are built on LLMs. An LLM (Large Language Model) is a trained graph of token transition probabilities (a "Neural Network" (NN), a learning computer (Terminator (1984))). LLMs are graphical models.

AI Agents fail where LLMs fail at "accuracy" due to hallucinations, even given human-curated training data.

There are lots of new methods for AI Agents built on LLMs which build on "Chain of Thought": basically feeding the output from the model back through as an input a bunch of times (a feedback loop).
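
A sketch of that loop's shape, with a hypothetical ask_llm stub standing in for a real model call:

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in: replace with a real model call.
        return "draft answer based on: " + prompt[-40:]

    def refine(question: str, rounds: int = 3) -> str:
        answer = ask_llm(question)
        for _ in range(rounds):
            # The previous output is folded back into the next input.
            answer = ask_llm(f"{question}\nPrevious: {answer}\nImprove it.")
        return answer

    print(refine("Why is the sky blue?"))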

But if you've ever heard a microphone that's too close to a speaker, you're already familiar with runaway feedback loops that need intervention.

There are not as many new Agentic AIs built on logical reasoning and inference. There are not as many AI Agents built on the Scientific Method that we know to be crucial to safety and QA in engineering.

TimorousBestie•1w ago
I wonder how utterly broken the next generation will be, having to grow up with all this stuff.

Us millennials didn’t do so great with native access to the internet, zoomers got wrecked growing up with social media... It’s rough.

augusteo•1w ago
The "learning through making" approach is really good. When I've explained LLMs to non-technical people, the breakthrough moment is usually when they see temperature in action. High temperature = creative but chaotic, low temperature = predictable but boring. You can't just describe that.

What I'd add: the lesson about hallucinations should come early, not just in the RAG module. Kids (and adults) need to internalize "confident-sounding doesn't mean correct" before they get too comfortable. The gap between fluency and accuracy is the thing that trips everyone up.

empressplay•6d ago
The vast majority of schools in North America don't allow teachers or students to download and run software on school computers (let alone AI models), so I don't entirely know who the audience is for this. I suppose home users? Maybe it's different in the UK.