
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
186•ColinWright•1h ago•172 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
22•valyala•2h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
124•AlexeyBrin•7h ago•24 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
17•valyala•2h ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
65•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
833•klaussilveira•22h ago•250 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
155•alephnerd•2h ago•106 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
119•1vuio0pswjnm7•8h ago•149 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1061•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
80•onurkanbkrc•7h ago•5 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•58m ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
489•theblazehen•3d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
212•jesperordrup•12h ago•73 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
567•nar001•6h ago•259 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
226•alainrk•6h ago•354 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
40•rbanffy•4d ago•7 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
10•momciloo•2h ago•0 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•33 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
77•speckx•4d ago•82 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
275•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
288•dmpetrov•22h ago•155 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
557•todsacerdoti•1d ago•269 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
427•ostacke•1d ago•111 comments

I have written gemma3 inference in pure C

https://github.com/robitec97/gemma3.c
65•robitec97•1w ago

Comments

w4yai•1w ago
> It proves that modern LLMs can run without Python, PyTorch, or GPUs.

Did we need any proof of that ?

skybrian•1w ago
Knowing the performance is interesting. Apparently it's 1-3 tokens/second.
kgeist•1w ago
ik_llama.cpp is a fork of llama.cpp that specializes in CPU inference; some benchmarks from a year ago: https://github.com/ikawrakow/ik_llama.cpp/discussions/164
jasonjmcghee•1w ago
I guess llama.cpp isn't quite as popular as I had assumed.
avadodin•1w ago
llama.cpp being the best choice doesn't make it popular.

When I got started, I was led to ollama and other local-llm freemium.

I didn't necessarily assume that they weren't C++ (I don't even know), but I do think that, as implied, Python duct-tape solutions are more popular than llama.cpp.

tolerance•1w ago
I imagine so regarding GPUs, right? If this is a legitimate project, doesn't it provide a proof of concept for the performance constraints that relate to them? Couldn't the environmentally concerned take this as an indicator that the technology can progress without relying on as much energy as is potentially spent now? Shouldn't researchers in the industry be thinking of ways to prevent the future capabilities of the technology from outrunning the capacity of the infrastructure?

I know very little about AI but these are things that come to mind here for me.

yorwba•1w ago
GPUs are more efficient than CPUs for LLM inference, using less energy per token and being cheaper overall. Yes, a single data center GPU draws a lot of power and costs a fortune, but it can also serve a lot more people in the time your CPU or consumer GPU needs to respond to a single prompt.
tolerance•1w ago
I got you, thanks!
jdefr89•1w ago
Python and PyTorch both call out to C libraries… I don't get what they mean by "proving LLMs can run without Python and PyTorch" at all. Seems like they don't understand the basic fundamentals here…
christianqchung•1w ago
A bizarre claim like that would be what happens when you let an LLM write the README without reading it first.
austinvhuang•1w ago
My first implementation of gemma.cpp was kind of like this.

There's such a massive performance differential vs. SIMD though that I learned to appreciate SIMD (via highway) as one sweet spot of low-dependency portability that sits between C loops and the messy world of GPUs + their fat tree of dependencies.

If anyone wants to learn the basics - whip out your favorite LLM pair programmer and ask it to help you study the kernels in the ops/ library of gemma.cpp:

https://github.com/google/gemma.cpp/tree/main/ops

janwas•1w ago
:D Your code was nicely written and it was a pleasure to port to SIMD because it was already very data-parallel.
behnamoh•1w ago
but why tho? next gemma is coming and no one uses gemma 3 in prod anyway.
NitpickLawyer•1w ago
> no one uses gemma 3 in prod anyway.

Umm, we do. It's still one of the best for EU-country support / help-chatbot-style work. It's got good (best?) multilingual support out of the box, it's very "safe" (won't swear, won't emit Chinese characters, etc.), and it's pretty fast.

behnamoh•1w ago
but it lacks system prompt support.
NitpickLawyer•1w ago
It lacks a dedicated system prompt, but it was trained with, and in practice works with, the system prompt being the first user message.
gunalx•1w ago
Yep. Before Gemma 3 we were struggling with multilinguality on smaller European languages, and it is still one of the better ones in that regard (even large open or closed models struggle with this to some extent). Gemma 3 is also still pretty decent multimodal-wise.
avadodin•1w ago
I didn't know this was a thing until I read this thread, but I can confirm that it does fine (not perfect by any means, just like an average casual non-native fluent speaker), and it is one of the reasons I use it as my local model.
uncognic•1w ago
I think /* */ used for single-line comments is a pretty good indication.
data-ottawa•1w ago
Gemma3 is probably the best supported fine tunable model.
austinvhuang•1w ago
I don't have firsthand knowledge, but r/SesameAI seems to believe Maya/Miles products are based on a Gemma3 backbone.
rao-v•1w ago
I'm really charmed by this project (I know there are a few like it).

In particular, it's got a single ~600-line file (https://github.com/robitec97/gemma3.c/blob/main/gemma3_kerne...) with a clear, straightforward implementation of every major function used in inferencing Google's models, from GELU to RoPE.

I'm curious how many more functions you'd need to add to have full coverage of every publicly available LLM innovation (e.g. QK-Norm from Qwen3, SwiGLU, etc.).

Obviously llama.cpp has a much bigger library but it's lovely to see everything in one clean file.

pacman1337•1w ago
Anyone using this model for something useful? For now I only have use cases for top performing models...