frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

Digital Independence Day

https://di.day/
1•pabs3•2m ago•0 comments

What a bot hacking attempt looks like: SQL injections galore

https://old.reddit.com/r/vibecoding/comments/1qz3a7y/what_a_bot_hacking_attempt_looks_like_i_set_up/
1•cryptoz•3m ago•0 comments

Show HN: FlashMesh – An encrypted file mesh across Google Drive and Dropbox

https://flashmesh.netlify.app
1•Elevanix•4m ago•0 comments

Show HN: AgentLens – Open-source observability and audit trail for AI agents

https://github.com/amitpaz1/agentlens
1•amit_paz•4m ago•0 comments

Show HN: ShipClaw – Deploy OpenClaw to the Cloud in One Click

https://shipclaw.app
1•sunpy•7m ago•0 comments

Unlock the Power of Real-Time Google Trends Visit: Www.daily-Trending.org

https://daily-trending.org
1•azamsayeedit•9m ago•1 comment

Explanation of British Class System

https://www.youtube.com/watch?v=Ob1zWfnXI70
1•lifeisstillgood•10m ago•0 comments

Show HN: Jwtpeek – minimal, user-friendly JWT inspector in Go

https://github.com/alesr/jwtpeek
1•alesrdev•13m ago•0 comments

Willow – Protocols for an uncertain future [video]

https://fosdem.org/2026/schedule/event/CVGZAV-willow/
1•todsacerdoti•14m ago•0 comments

Feedback on a client-side, privacy-first PDF editor I built

https://pdffreeeditor.com/
1•Maaz-Sohail•18m ago•0 comments

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•25m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
5•quentin101010•30m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•32m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•34m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•35m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•36m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•37m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•38m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
1•gtsnexp•41m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•43m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
2•xipz•44m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•48m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•50m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•52m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT–>ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•52m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•56m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•56m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
2•jingkai_he•59m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

2•swimmingkiim•1h ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•1h ago•0 comments

Show HN: I cross-compiled llama.cpp to run on Windows XP

https://okt.ai/2025/11/30/running-llms-on-windows-xp-a-weekend-of-controlled-madness/
2•dandinu•2mo ago
Had a dumb thought: what if someone in 2003 could run a local LLM on their machine? XP desktop, rolling hills wallpaper, maybe Winamp in the corner—and you just chat with an AI locally.

I saw there were some attempts on Reddit, so I tried it myself.

Cross-compiled llama.cpp from macOS targeting Windows XP 64-bit. Main hurdles: downgrading cpp-httplib to v0.15.3 (newer versions explicitly block pre-Win8), replacing SRWLOCK/CONDITION_VARIABLE with XP-compatible threading primitives, and the usual DLL hell.
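The post doesn't include the actual toolchain file, but the setup described could be sketched roughly like this (assuming a mingw-w64 cross-compiler; the file name and exact flags here are hypothetical, not from the write-up):

```cmake
# toolchain-xp64.cmake — hypothetical cross-compile toolchain for XP x64
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++)

# Target Windows XP 64-bit: 0x0502 is the XP x64 / Server 2003 version
add_compile_definitions(_WIN32_WINNT=0x0502 WINVER=0x0502)

# Link the runtimes statically to sidestep DLL hell on a clean XP install
set(CMAKE_EXE_LINKER_FLAGS "-static -static-libgcc -static-libstdc++")
```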

Qwen 2.5-0.5B runs at ~2-8 tokens/sec on period-appropriate hardware. Not fast, but it works.

Video demo and build instructions are in the write-up.

Claude helped with most of the debugging on the build system. I just provided the questionable life choices.

Comments

vintagedave•2mo ago
Really shows what could be achieved back then -- and in a sense, how little the OS versions we have today add.

Challenge: could you build for 32-bit? From memory, few people used XP64; 64-bit adoption was mostly the Server editions, and then Vista and Windows 7, by the time people started migrating.

dandinu•2mo ago
That's pretty accurate. I'm always amazed how much we move forward with technology, just to later realize we already had it 15 years ago.

Regarding your question:

I have a 32-bit XP version as well, and I actually started with that one.

The problem I was facing was that it's naturally limited to 4GB of RAM, of which only 3.1GB is usable (I wanted to run some beefier models, and 64-bit does not have that RAM limit).

Also, the 32-bit OS kept freezing at random times, which was a very authentic Windows XP experience, now that I think about it. :)

vintagedave•2mo ago
> out of which only 3.1GB are usable

That would be a real issue. I vaguely recall methods to work around this - various mappings, some Intel extension for high memory addressing, etc: https://learn.microsoft.com/en-us/windows/win32/memory/addre...

Maybe unrealistic :( I doubt this is drop-in code.

dandinu•2mo ago
So the deal with AWE (Address Windowing Extensions) is that it lets 32-bit apps access memory above 4GB by essentially doing manual page mapping. You allocate physical pages, then map/unmap them into your 32-bit address space as needed. It's like having a tiny window you keep sliding around over a bigger picture.

The problem is that llama.cpp would need to be substantially rewritten to use it. We're talking:

  AllocateUserPhysicalPages()
  MapUserPhysicalPages()
  // do your tensor ops on this chunk
  UnmapUserPhysicalPages()
  // slide the window, repeat

You'd basically be implementing your own memory manager that swaps chunks of the model weights in and out of your addressable space. It's not impossible, but it's a pretty gnarly undertaking for what amounts to "running AI on a museum piece."

vintagedave•2mo ago
> Eventually found it via a GitHub thread for LegacyUpdate.

Can you share that link in the blog? This is the equivalent of one of those forum posts: 'never mind, solved it.' It's helpful to share what you learned for those coming later :)

dandinu•2mo ago
There is a full technical write-up in the GitHub repo in "WINDOWS_XP.md": https://github.com/dandinu/llama.cpp/blob/master/WINDOWS_XP....

Sorry for failing to mention that.

Link to vcredist thread: https://github.com/LegacyUpdate/LegacyUpdate/issues/352

vintagedave•2mo ago
Cool, some person-and-or-AI in future may be able to find it now :D
vintagedave•2mo ago
> XP-era hardware doesn’t have AVX. Probably doesn’t have AVX2 or FMA either. But SSE4.2 is safe for most 64-bit CPUs from 2008 onward:

It won't; FMA is available from the AVX2 era onwards. If you target 32-bit, you'd only be "safe" with SSE2... if you really want a challenge, you'd use the Pentium Pro CPU feature set, i.e. the x87 FPU.

I have to admit I'd be really curious what that looked like! You'd definitely want to use the fast math option.

This is an awesome effort, btw, and I enjoyed reading your blog immensely.

dandinu•2mo ago
Oh darn, you're absolutely right (pun intended) about the 32-bit situation. SSE2 is really the "floor" there if you want any kind of reasonable compatibility. I was being a bit optimistic with SSE4.2 even for 64-bit - technically safe for most chips from that era but definitely not all.

The Pentium Pro challenge though... pure x87 FPU inference? That would be gloriously cursed. You'd basically be doing matrix math like it's 1995. `-mfpmath=387` and pray.

I'm genuinely tempted to try this now. The build flags would be something like:

  -DGGML_AVX=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF \
  -DGGML_F16C=OFF -DGGML_SSE42=OFF -DGGML_SSSE3=OFF \
  -DGGML_SSE3=OFF -DGGML_SSE2=OFF  # pain begins here

And then adding `-ffast-math` to `CMAKE_C_FLAGS` because at that point, who cares about IEEE 754 compliance, we're running a transformer on hardware that predates Google.

If someone actually has a Pentium Pro lying around and wants to see Qwen-0.5B running on it... that would be the ultimate read for me as well.

Thanks for the kind words. Always fun to find fellow retro computing degenerates in the wild.