
Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•2m ago•1 comment

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
1•timpera•3m ago•1 comment

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•5m ago•1 comment

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
1•jandrewrogers•5m ago•0 comments

Peacock. A New Programming Language

1•hashhooshy•10m ago•1 comment

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
2•bookofjoe•11m ago•1 comment

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•15m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•16m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•16m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•17m ago•1 comment

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•18m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•19m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•sleazylice•19m ago•1 comment

Learning to code, or building side projects with AI help, this one's for you

https://codeslick.dev/learn
1•vitorlourenco•20m ago•0 comments

Effulgence RPG Engine [video]

https://www.youtube.com/watch?v=xFQOUe9S7dU
1•msuniverse2026•21m ago•0 comments

Five disciplines discovered the same math independently – none of them knew

https://freethemath.org
4•energyscholar•22m ago•1 comment

We Scanned an AI Assistant for Security Issues: 12,465 Vulnerabilities

https://codeslick.dev/blog/openclaw-security-audit
1•vitorlourenco•23m ago•0 comments

Amazon no longer defends cloud customers against video patent infringement claims

https://ipfray.com/amazon-no-longer-defends-cloud-customers-against-video-patent-infringement-cla...
2•ffworld•23m ago•0 comments

Show HN: Medinilla – an OCPP compliant .NET back end (partially done)

https://github.com/eliodecolli/Medinilla
2•rhcm•26m ago•0 comments

How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6157066
1•dkga•27m ago•1 comment

Resistance Infrastructure

https://www.profgalloway.com/resistance-infrastructure/
3•samizdis•31m ago•1 comment

Fire-juggling unicyclist caught performing on crossing

https://news.sky.com/story/fire-juggling-unicyclist-caught-performing-on-crossing-13504459
1•austinallegro•32m ago•0 comments

Restoring a lost 1981 Unix roguelike (protoHack) and preserving Hack 1.0.3

https://github.com/Critlist/protoHack
2•Critlist•33m ago•0 comments

GPS and Time Dilation – Special and General Relativity

https://philosophersview.com/gps-and-time-dilation/
1•mistyvales•36m ago•0 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
1•davidcondrey•37m ago•1 comment

Show HN: I built a clawdbot that texts like your crush

https://14.israelfirew.co
2•IsruAlpha•39m ago•2 comments

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
2•walterbell•42m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•43m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
2•_august•44m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
15•martialg•44m ago•1 comment

Ask HN: Who's running local AI workstations in 2026?

9•Blue_Cosma•4w ago
After three years working on private LLM infrastructure, I still can't pin down who the market is or how big it is.

The ecosystem has matured: DGX Spark, high-end Mac Studios, AMD Strix Halo, upcoming DGX Station. Models are getting smaller and more efficient. Inference engines (llama.cpp, vLLM, SGLang) and frontends (Ollama, LMStudio, Jan) have made local deployment accessible. Yet I keep meeting more people researching this than actually deploying it.

For those running local inference:

- What's your setup and use case?
- Is it personal or shared across a team?
- What's the real driver — privacy, regulation, latency, cost, tinkering?

I'm skeptical of the cost argument (cloud inference scales better, plus APIs are subsidized, for now at least!), but I'm curious if I'm missing something.

What would make local AI actually worth it for you?

Comments

01092026•4w ago
You asked us... well, first tell us: what's your real driver? You have three years on local infrastructure? What does that even mean - you've been running Ollama Llama 70B for 3 years?

What's your stack?

And none of that hardware can run the larger models - smaller ones, or highly quantized versions of the larger ones, sure. Or do you have something important to say?

Blue_Cosma•4w ago
Our main driver and hypothesis was working with regulated industries. We worked with a few large enterprise clients in defence and industry, mostly for R&D and IP use cases.

Our stack changes per project, adapting to client needs and infra: Llama 70B on a Mac Studio M1 with Ollama in 2024, vLLM on 4xH100 private cloud for larger deployments. Most recently, we've been working on a custom workstation with 2x RTX PRO 6000 Blackwell Max-Q + 1.1TB DDR5 to run larger models locally using SGLang and KTransformers.
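
For context, the 4xH100 tier isn't exotic on the software side; a minimal vLLM sketch of that kind of deployment (model tag and prompt illustrative, not our actual setup):

    from vllm import LLM, SamplingParams

    # Tensor-parallel across 4 GPUs on one node; the model tag is illustrative.
    llm = LLM(model="meta-llama/Llama-3.1-70B-Instruct", tensor_parallel_size=4)

    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate(["Summarize this contract clause: ..."], params)
    print(outputs[0].outputs[0].text)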

The question isn't rhetorical: I'm trying to understand whether the demand we see in regulated sectors is the whole market, or whether there's broader adoption I'm missing.

01092026•4w ago
Cool, so you're basically doing local onsite deployments? The H100s are nice. I'm not that rich, so I have a 4xV100 32GB SXM2 server, dual socket - it's OK for inference. You can get one with V100s, RAM, etc. for $10-12k all in, used stuff.

I run the largest models I can - DeepSeek right now, adding a few more soon. The fact that I can have a premier high-end model running locally is the main interest; a 70B model is pointless unless it's a specific task-based special model - text-to-speech, etc.

I'm more interested in ditching Nvidia for AMD chips+GPUs, but not even with ROCm - just running the weights in OpenGL / Vulkan shaders. Faster, more control, better performance for MY architecture, etc. This is the goal.

I don't think many people are running models outside of a company, maybe? I guess you're company/industry focused; I'm just a programmer, personal use.

People don't see a need, I guess? It's complicated. Well - actually it's NOT complicated if you have lots of money to buy all the right stuff brand new.

For regular guys like me, we have to be creative to get shit to run in the best way - it's all we can afford.

andy99•4w ago
Just bought a Strix Halo (Framework Desktop), waffled a long time between that and a Mac Studio, but I got tired of waiting for the M5 and don't really like Apple.

I work with ML professionally, almost all in the cloud; I just wanted something “off grid” and unmetered, and I needed a computer anyway, so I decided to pay a bit more and get the one I want. It's “personal” in that it's exclusively for me, but I have a business and bought it for that.

Still figuring out the best software; so far it looks like llama.cpp with Vulkan, though I have a lot of experimenting to do and don't currently find it optimal for what I want.
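
In case it's useful, the loop I'm poking at via the Python bindings looks roughly like this - a sketch assuming llama-cpp-python compiled against the Vulkan backend (the -DGGML_VULKAN CMake flag, I believe), with the model path illustrative:

    from llama_cpp import Llama  # llama-cpp-python, built with the Vulkan backend

    # n_gpu_layers=-1 offloads every layer to the GPU; the model path is illustrative.
    llm = Llama(model_path="models/llama-3.1-8b-instruct.Q4_K_M.gguf", n_gpu_layers=-1)

    out = llm("Q: Why use local inference? A:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"])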

01092026•4w ago
Well, Mac chips are badass for training / inference - super underrated. I mean, I've literally run epochs on cloud Nvidia GPU servers and compared them to running locally (M chip) - and look, not trying to burn any houses down, but... eh... Apple does really, really well.

The good news for you: you can chain a couple of them together and run the largest open-source models around. An extremely expensive route, but probably the easiest and smoothest way.

If you're planning on running this on Apple, you can do some stuff with Metal directly... in PyTorch the device is 'mps', if I remember right.
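
A minimal sketch of what I mean, assuming a recent torch build with Metal support:

    import torch

    # Use the Metal backend when available, otherwise fall back to CPU.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    x = torch.randn(2048, 2048, device=device)
    y = x @ x  # the matmul runs on the Apple GPU via Metal
    print(y.device)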

I think your llama.cpp route is good. I wouldn't go the Ollama route - I mean, it's great to start, but IMHO: get the models directly, learn the layers and how the heads work as best you can, make an effort to understand what's going on. Well, you don't have to, but I think the models appreciate the effort - respect goes far.

Blue_Cosma•4w ago
Thanks a lot for sharing. Haven't tested Strix Halo myself. Did you consider DGX Spark as well?

What is your target use case? Curious what feels suboptimal about llama.cpp + Vulkan so far.

andy99•4w ago
Re DGX: I'm mostly interested in local inference; it might have been nice to try, but it was more expensive for similar performance (or so I think).

I do lots of different experiments; synthetic data generation along the lines of Magpie is one of the things I wanted a local machine for, as well as just general access to a decent-sized LLM to try different things without having to spin up a cloud machine each time.

I would prefer PyTorch / HF transformers to llama.cpp, as I find the latter less flexible if I want to change anything.
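
(For context: Magpie works by giving an aligned model its chat template cut off right where the user turn begins, so the model itself completes a plausible user query. A rough transformers sketch - the model tag and Llama-3-style template tokens are illustrative assumptions:)

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Magpie-style: hand the model only the template up to the start of the
    # user turn; an instruction-tuned model then invents a plausible user query.
    prefix = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    inputs = tok(prefix, return_tensors="pt", add_special_tokens=False).to(model.device)
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=1.0)
    print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))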

delaminator•4w ago
I have a 3090 24GB, twin Xeon, 64GB RAM machine sat in our server room.

I do local AI with Qwen, Whisper and another I can't remember right now.

These are all Qwen:

We do AI invoice OCR: PDF -> image -> Excel. It works much better than other solutions because it has invoice context, so it looks for particular data to extract and ignores the rest. Why local? I proved it worked, and there's no need to send our data outside for processing.
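
The shape of the pipeline, heavily simplified - this sketch drives a Qwen vision model through the Ollama Python client for brevity (model tag, prompt and file names illustrative, not our production code):

    import pandas as pd
    from pdf2image import convert_from_path  # needs poppler installed
    import ollama  # assumes a local Ollama server with a Qwen vision model pulled

    pages = convert_from_path("invoice.pdf", dpi=200)
    pages[0].save("page1.png")

    # Ask the vision model only for the fields we care about.
    resp = ollama.chat(
        model="qwen2.5vl",  # illustrative tag
        messages=[{
            "role": "user",
            "content": "Extract supplier, invoice number, date and line items as CSV.",
            "images": ["page1.png"],
        }],
    )

    # Parse the model's CSV answer into a sheet (real code validates first).
    rows = [line.split(",") for line in resp["message"]["content"].splitlines()]
    pd.DataFrame(rows[1:], columns=rows[0]).to_excel("invoice.xlsx", index=False)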

We deal with photos of food packaging - I do a "photograph the ingredients list and check it against our expected ingredients" flow. The downside is it takes 2 minutes per photo, so I might actually push this one outside.

Ingredients classifier - is it animal (if so, what species), vegetarian, vegan, halal, kosher, alcoholic, nut-based, contains peanuts, and more. There's simply no need to send it outside.

I've got a Linux chatbot helper on the "test this" pile with Qwen Coder - I haven't evaluated it yet, but the idea is "type a command, get it wrong, ask Qwen for the answer". I use Claude for this, but it seems a bit heavyweight and I'm curious.
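
The idea is small enough to sketch - again via the Ollama Python client, with the model tag illustrative and no claim this is what will end up shipping:

    import subprocess

    import ollama  # assumes a local Ollama server with a coder model pulled

    def try_command(cmd: str) -> str:
        """Run a shell command; on failure, ask the local model for a fix."""
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode == 0:
            return proc.stdout
        resp = ollama.chat(
            model="qwen2.5-coder",  # illustrative tag
            messages=[{
                "role": "user",
                "content": f"This command failed:\n$ {cmd}\n{proc.stderr}\nReply with the corrected command.",
            }],
        )
        return resp["message"]["content"]

    print(try_command("systemctl staus nginx"))  # typo'd subcommand triggers the model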

tbh some of it is solution hunting - we spent $1,000 on the kit to evaluate whether it was worth it, so I try to get some value out of it.

But it is slow: 3 hours for a recent task that took the Claude API 2 minutes.

My favourite use is Whisper. I voice->text almost all of my typing now.
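
If anyone wants to try the dictation route, the openai-whisper package makes the basic loop tiny (model size to taste; file name illustrative):

    import whisper  # pip install openai-whisper; needs ffmpeg on the PATH

    model = whisper.load_model("small")  # bigger models transcribe better but slower
    result = model.transcribe("dictation.wav")
    print(result["text"])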

I've also bought an Nvidia Orin Nano but haven't set it up yet - I want to run Whisper in the car to take voice dictation as I drive.