frontpage.

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•3m ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•10m ago•1 comments

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•18m ago•1 comments

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•18m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
1•rolph•21m ago•1 comments

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•21m ago•2 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•23m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•25m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•26m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•27m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
3•rolph•27m ago•1 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•31m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•34m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
4•cratermoon•35m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•35m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•35m ago•1 comments

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
2•hhs•39m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

1•vampiregrey•41m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•42m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
2•hhs•44m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•44m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

5•Philpax•45m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•48m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•51m ago•1 comments

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
2•geox•53m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
3•EA-3167•53m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
6•fliellerjulian•55m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•57m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•57m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
3•RickJWagner•59m ago•0 comments

Local LLMs are how nerds now justify a big computer they don't need

https://world.hey.com/dhh/local-llms-are-how-nerds-now-justify-a-big-computer-they-don-t-need-af2fcb7b
6•janandonly•1mo ago

Comments

jqpabc123•1mo ago
I tend to use budget desktop machines, particularly for testing but also for development. Does this mean I'm not a nerd?

One reason is that I make significant use of pre-compiled libraries, so my build times tend to be reasonable.

And I also like the feedback from testing on a lower powered machine. If it runs well on a low end machine, better hardware is generally not a problem.

The reverse is often not the case. Software blunders can be completely masked with enough hardware.

bigyabai•1mo ago
Video games are how I justify a big computer I don't need. Local LLMs are how I amortize that spending.
demarq•1mo ago
I'd like to point out that Z Image Turbo puts out frontier-quality images in a reasonable time on a local device.

But since it requires less than 16 GB, the author is still right.

mindcrash•1mo ago
You don't need a "big computer" for local LLMs.

Every model with ~4B parameters runs perfectly fine on even a GeForce 1070 Mobile GPU with 8 GB of memory.

If you have some patience you can probably go a little crazy and run a model with ~27B parameters on a Radeon 890M with 32 GB of memory as well (which means you'll probably have to get about 96 GB of system memory if you want to get some work done too, but oh well).

In theory you could even run a model which fits in 64 GB of video memory on that "little" GPU (with 128 GB of system memory).

No, you can't run something like Grok 2 (which has quantized models starting at 82 GB in size and going up), but why on earth would you ever want to run something like that locally?
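
As a rough illustration of how little is needed, here is a minimal sketch of loading a ~4B-parameter, 4-bit-quantized model onto a GPU with 8 GB of memory. It assumes the llama-cpp-python bindings and a made-up GGUF file name; neither is mentioned in the comment.

    # Hypothetical sketch, not something from the comment: llama-cpp-python
    # loading a small quantized GGUF model and offloading it to a ~8 GB GPU.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/qwen2.5-3b-instruct-q4_k_m.gguf",  # placeholder file name
        n_gpu_layers=-1,  # offload every layer; a 4-bit ~4B model needs roughly 3 GB of VRAM
        n_ctx=4096,       # modest context window keeps the KV cache small
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Why are small local models useful?"}]
    )
    print(out["choices"][0]["message"]["content"])

With n_gpu_layers=-1 every layer is offloaded, and a 4-bit ~4B model plus a modest KV cache stays comfortably under 8 GB.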

gala8y•1mo ago
Can you list some useful things you can do with such models, beyond 'fancy' use like image generation or standard chat (which is subpar compared to frontier models)? I use my RTX 4070 (12 GB VRAM / 64 GB RAM) mostly for STT, though I am having real trouble setting up a working environment for any Whisper derivatives after migrating to Fedora.
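
For the STT part, a minimal sketch of the kind of setup meant here, assuming the faster-whisper package (just one possible "Whisper derivative", not necessarily the one that broke on Fedora) and a placeholder audio file:

    # Hypothetical sketch using faster-whisper, one possible "Whisper derivative";
    # the audio file name is a placeholder.
    from faster_whisper import WhisperModel

    # "small" fits easily in 12 GB of VRAM; use device="cpu" if CUDA isn't set up.
    model = WhisperModel("small", device="cuda", compute_type="float16")

    segments, info = model.transcribe("recording.wav")
    print(f"Detected language: {info.language}")
    for segment in segments:
        print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")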
mindcrash•1mo ago
I'm heavily interested in things like UX, natural language interfaces, and open source / libre computing environments.

One of the things I am currently experimenting with is building out my own agentic/assisted computing environment which, instead of extending into Google/Microsoft/Apple-owned cloud-based services, extends into services that run on my homelab environment.

As a simple example: a local model that can hook into an MCP service which understands calendars and appointments and talks to my own locally hosted Radicale CalDAV service, enabling me to quickly make an appointment through text (or possibly even STT later). I'm curious how much I can get something like Thunderbird to disappear.
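
As a rough illustration of what the homelab side of that could look like, here is a minimal sketch; the caldav library, the Radicale URL, the credentials, and the function name are all assumptions for illustration, not something described above.

    # Hypothetical sketch of a tool the local model could call; the caldav library,
    # Radicale URL, credentials, and function name are all made up for illustration.
    from datetime import datetime, timedelta
    import caldav

    def add_appointment(summary: str, start: datetime, minutes: int = 60) -> str:
        """Create an event on the first calendar of a local Radicale instance."""
        client = caldav.DAVClient(
            url="http://homelab.local:5232/",  # placeholder Radicale endpoint
            username="me",
            password="secret",
        )
        calendar = client.principal().calendars()[0]
        calendar.save_event(
            dtstart=start,
            dtend=start + timedelta(minutes=minutes),
            summary=summary,
        )
        return f"Created '{summary}' at {start.isoformat()}"

    # The kind of call the model might issue after parsing "dentist tomorrow at 10":
    print(add_appointment("Dentist", datetime(2026, 3, 14, 10, 0)))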

A somewhat more advanced example, another idea that recently popped up, which I'm quite excited about and hope will work out: teaching a model the concepts of a "package repository", a "package manager", and "systems", which (hopefully) means I can install, uninstall, update, and track the status of software packages on my Linux systems without using the terminal or shelling into a system myself.

In summary: I think some of the things Big Tech wants are pretty neat, but I would like something without heavy involvement of Big Tech (and/or subscription-based computing).

gala8y•1mo ago
I understand the first example. That's one of many tiny little things in the area of automation, something like 'advanced scripting'. Your second example is indeed advanced.

What I can see myself trying is some new ways of working with a body of text notes. Local RAG for chatting with documents is also interesting.

And yes, with 'subscription-based computing' the shreds of privacy we had are gone.
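
The "local RAG over text notes" idea can be surprisingly small. A minimal sketch, assuming the sentence-transformers package and a few made-up notes; a real setup would also hand the retrieved notes to a local LLM to produce the answer.

    # Hypothetical sketch of local retrieval over text notes using the
    # sentence-transformers package; the notes and model name are placeholders.
    from sentence_transformers import SentenceTransformer, util

    notes = [
        "Radicale runs on port 5232 behind the homelab reverse proxy.",
        "The RTX 4070 handles Whisper 'small' in real time.",
        "Fedora needed the CUDA repo enabled before faster-whisper worked.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # ~80 MB, runs fine on CPU
    note_embeddings = model.encode(notes, convert_to_tensor=True)

    question = "Which port does my calendar server use?"
    query_embedding = model.encode(question, convert_to_tensor=True)

    hits = util.semantic_search(query_embedding, note_embeddings, top_k=2)[0]
    for hit in hits:
        print(f"{hit['score']:.2f}  {notes[hit['corpus_id']]}")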

seanmcdirmid•1mo ago
I used them to justify buying a beefy refurbished MacBook Pro M3 Max with 64GB. I haven’t really regretted it, and I found the extra power useful for dev and 3D printing tasks. You can make your own 3D models using ComfyUI and DrawThings, for example.