
Show HN: I cross-compiled llama.cpp to run on Windows XP

https://okt.ai/2025/11/30/running-llms-on-windows-xp-a-weekend-of-controlled-madness/
2•dandinu•2mo ago
Had a dumb thought: what if someone in 2003 could run a local LLM on their machine? XP desktop, rolling hills wallpaper, maybe Winamp in the corner—and you just chat with an AI locally.

I saw there were some attempts on Reddit, so I tried it myself.

Cross-compiled llama.cpp from macOS targeting Windows XP 64-bit. Main hurdles: downgrading cpp-httplib to v0.15.3 (newer versions explicitly block pre-Win8), replacing SRWLOCK/CONDITION_VARIABLE with XP-compatible threading primitives, and the usual DLL hell.
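A minimal sketch of that last substitution (names and structure are illustrative, not the actual port's code): Vista's CONDITION_VARIABLE can be approximated on XP with a counting semaphore plus a waiter count guarded by a CRITICAL_SECTION. POSIX `sem_t` / `pthread_mutex_t` stand in here for `CreateSemaphore` / `EnterCriticalSection` so the sketch compiles anywhere:

```c
/* XP-era condition variable substitute: semaphore + waiter count.
 * POSIX primitives stand in for the Win32 ones the real port would use. */
#include <pthread.h>
#include <semaphore.h>

typedef struct {
    pthread_mutex_t lock;    /* stand-in for CRITICAL_SECTION  */
    sem_t           sem;     /* stand-in for CreateSemaphore() */
    int             waiters; /* how many threads are blocked   */
} xp_cond;

void xp_cond_init(xp_cond *c) {
    pthread_mutex_init(&c->lock, NULL);
    sem_init(&c->sem, 0, 0);
    c->waiters = 0;
}

/* Caller holds the external mutex m, as with SleepConditionVariableCS. */
void xp_cond_wait(xp_cond *c, pthread_mutex_t *m) {
    pthread_mutex_lock(&c->lock);
    c->waiters++;
    pthread_mutex_unlock(&c->lock);

    pthread_mutex_unlock(m); /* release while sleeping          */
    sem_wait(&c->sem);       /* block until a signal arrives    */
    pthread_mutex_lock(m);   /* re-acquire before returning     */
}

void xp_cond_signal(xp_cond *c) {
    pthread_mutex_lock(&c->lock);
    if (c->waiters > 0) {    /* only post if someone is waiting */
        c->waiters--;
        sem_post(&c->sem);
    }
    pthread_mutex_unlock(&c->lock);
}
```

(A production version needs more care around broadcast and spurious wakeups; this is just the shape of the technique.)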

Qwen 2.5-0.5B runs at ~2-8 tokens/sec on period-appropriate hardware. Not fast, but it works.

Video demo and build instructions are in the write-up.

Claude helped with most of the debugging on the build system. I just provided the questionable life choices.

Comments

vintagedave•2mo ago
Really shows what could be achieved back then -- and in a sense, how little the OS versions we have today add.

Challenge: could you build for 32-bit? From memory, few people used XP64; 64-bit adoption was mostly the Server editions, then Vista and Windows 7 once people started migrating.

dandinu•2mo ago
That's pretty accurate. I'm always amazed how much we move forward with technology, just to later realize we already had it 15 years ago.

Regarding your question:

I have a 32bit XP version as well, and I actually started with that one.

The problem I was facing was that it's naturally limited to 4GB RAM, out of which only 3.1GB are usable (I wanted to run some beefier models and 64bit does not have the RAM limit).

Also, the 32bit OS kept freezing at random times, which was a very authentic Windows XP experience, now that I think about it. :)

vintagedave•2mo ago
> out of which only 3.1GB are usable

That would be a real issue. I vaguely recall methods to work around this - various mappings, some Intel extension for high memory addressing, etc: https://learn.microsoft.com/en-us/windows/win32/memory/addre...

Maybe unrealistic :( I doubt this is drop-in code.

dandinu•2mo ago
So the deal with AWE (Address Windowing Extensions) is that it lets 32-bit apps access memory above 4GB by essentially doing manual page mapping. You allocate physical pages, then map/unmap them into your 32-bit address space as needed. It's like having a tiny window you keep sliding around over a bigger picture.

The problem is that llama.cpp would need to be substantially rewritten to use it. We're talking:

  AllocateUserPhysicalPages()
  MapUserPhysicalPages()
  // do your tensor ops on this chunk
  UnmapUserPhysicalPages()
  // slide the window, repeat
You'd basically be implementing your own memory manager that swaps chunks of the model weights in and out of your addressable space. It's not impossible, but it's a pretty gnarly undertaking for what amounts to "running AI on a museum piece."
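To make the sliding-window idea concrete, here's a portable toy (illustrative only; real AWE would replace the `memcpy` with `MapUserPhysicalPages` remapping rather than copying): process a weight buffer that's "too big to address" through a small fixed window, one chunk at a time.

```c
/* Toy of the AWE sliding-window pattern: only WINDOW_FLOATS values are
 * ever "addressable" at once; the rest is streamed through in chunks. */
#include <stddef.h>
#include <string.h>

#define WINDOW_FLOATS 1024 /* pretend this is all we can map at once */

/* Sum a large weight array while only touching the window buffer. */
float windowed_sum(const float *weights, size_t n) {
    float window[WINDOW_FLOATS];
    float acc = 0.0f;
    for (size_t off = 0; off < n; off += WINDOW_FLOATS) {
        size_t chunk = (n - off < WINDOW_FLOATS) ? n - off : WINDOW_FLOATS;
        /* "slide the window": bring the next chunk into addressable space */
        memcpy(window, weights + off, chunk * sizeof(float));
        for (size_t i = 0; i < chunk; i++)
            acc += window[i]; /* do your tensor ops on this chunk */
    }
    return acc;
}
```

The painful part for llama.cpp is that every kernel assumes the whole tensor is mapped; you'd have to thread this chunking through all of them.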
vintagedave•2mo ago
> Eventually found it via a GitHub thread for LegacyUpdate.

Can you share that link in the blog? This is the equivalent of one of those forums posts, 'never mind, solved it.' It's helpful to share what you learned for those coming later :)

dandinu•2mo ago
There is a full technical write-up in the GitHub repo in "WINDOWS_XP.md": https://github.com/dandinu/llama.cpp/blob/master/WINDOWS_XP....

Sorry for failing to mention that.

Link to vcredist thread: https://github.com/LegacyUpdate/LegacyUpdate/issues/352

vintagedave•2mo ago
Cool, some person-and-or-AI in future may be able to find it now :D
vintagedave•2mo ago
> XP-era hardware doesn’t have AVX. Probably doesn’t have AVX2 or FMA either. But SSE4.2 is safe for most 64-bit CPUs from 2008 onward:

It won't; FMA is available from AVX2-era onwards. If you target 32-bit, you'd only be "safe" with SSE2... if you really want a challenge, you'd use the Pentium Pro CPU feature set, ie the FPU.

I have to admit I'd be really curious what that looked like! You'd definitely want to use the fast math option.

This is an awesome effort, btw, and I enjoyed reading your blog immensely.

dandinu•2mo ago
Oh darn, you're absolutely right (pun intended) about the 32-bit situation. SSE2 is really the "floor" there if you want any kind of reasonable compatibility. I was being a bit optimistic with SSE4.2 even for 64-bit - technically safe for most chips from that era but definitely not all.

The Pentium Pro challenge though... pure x87 FPU inference? That would be gloriously cursed. You'd basically be doing matrix math like it's 1995. `-mfpmath=387` and pray.
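A runtime guard for picking the floor would look something like this (a sketch, assuming a GCC/Clang build where `__builtin_cpu_supports` is available; on other compilers you'd drop to raw CPUID):

```c
/* Pick the best kernel tier the CPU actually supports, falling back to
 * plain x87 FPU code for the Pentium Pro floor. */
const char *pick_kernel(void) {
#if defined(__GNUC__) && defined(__x86_64__)
    if (__builtin_cpu_supports("avx2"))   return "avx2";
    if (__builtin_cpu_supports("sse4.2")) return "sse4.2";
    if (__builtin_cpu_supports("sse2"))   return "sse2";
#endif
    return "x87"; /* no SIMD at all: 1995-grade matrix math */
}
```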

I'm genuinely tempted to try this now. The build flags would be something like:

  -DGGML_AVX=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF \
  -DGGML_F16C=OFF -DGGML_SSE42=OFF -DGGML_SSSE3=OFF \
  -DGGML_SSE3=OFF -DGGML_SSE2=OFF  # pain begins here
And then adding `-ffast-math` to `CMAKE_C_FLAGS` because at that point, who cares about IEEE 754 compliance, we're running a transformer on hardware that predates Google.

If someone actually has a Pentium Pro lying around and wants to see Qwen-0.5B running on it... that would be the ultimate read for me as well.

Thanks for the kind words. Always fun to find fellow retro computing degenerates in the wild.