frontpage.

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
1•CurtHagenlocher•1m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•2m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•2m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•2m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•3m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•5m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•7m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•12m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•13m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•13m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•17m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•19m ago•0 comments

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•20m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•27m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
2•shervinafshar•28m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•33m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
9•mooreds•34m ago•2 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•35m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

2•pinkmuffinere•36m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•41m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•43m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
2•saikatsg•43m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
2•aweussom•43m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
4•archb•45m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•45m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•46m ago•0 comments

Show HN: RMA Dashboard - fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•47m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•52m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
4•dragandj•53m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•54m ago•1 comment

Nvidia sells tiny new computer that puts big AI on your desktop

https://arstechnica.com/ai/2025/10/nvidia-sells-tiny-new-computer-that-puts-big-ai-on-your-desktop/
24•turbocon•3mo ago

Comments

adam_patarino•3mo ago
If you would buy this I’d love to know how you’d use it.
antinomicus•3mo ago
Though the adage “this is the worst it’ll ever be” is parroted daily by AI cultists, the fact is it has yet to be proven that currently available LLMs can be made cost-effective. For now every AI company is lighting tens of billions of dollars on fire every year and hoping better algorithms, hardware, and user lock-in will ensure profits eventually. If that doesn’t happen, they will design more and more “features” into the LLM to monetize it - shopping, ads, sponsored replies, who knows? It may get really awful. And these companies will have so much of our data that eventually the need to make a profit will lead them to sell it and generally extract as much from us as they can.

This is why, in the long run, I believe we should all aspire to do LLM inference locally. But unfortunately local models are just not anywhere close to par with the SoTA cloud models available. Something like the DGX Spark would be a decent step in this direction, but this platform appears to be mostly for prototyping / training models meant to eventually run on datacenter Nvidia hardware.

Personally, I think I will probably spec out an M5 Max/Ultra Mac Studio once that’s a thing, and start trying to do this more seriously. The tools are getting better every day, and “this is the worst it’ll ever be” is much more applicable to locally run models.
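
For concreteness, “LLM inference locally” can be as small as a few lines on any of these boxes; a minimal sketch using the llama-cpp-python bindings (the model file and settings below are placeholder assumptions, not a recommendation):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Any GGUF checkpoint that fits in local memory; the path is a placeholder.
    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_ctx=8192,       # context window
        n_gpu_layers=-1,  # offload every layer to the GPU / unified memory if available
    )

    out = llm("Explain why local inference matters:", max_tokens=200)
    print(out["choices"][0]["text"])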

BizarroLand•3mo ago
I would use it for locally hosted RAG, or whatever tech has supplanted it, instead of paying API fees. We have ~20 TB of documents that occasionally need to be scanned and chatted with, and $4,000 one time (+ electricity) is chump change compared to the annual costs we would otherwise be looking at.
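
A rough sketch of what that local RAG loop looks like, assuming sentence-transformers for embeddings and FAISS for the index (both illustrative choices, not necessarily what the commenter runs):

    # pip install sentence-transformers faiss-cpu
    import faiss
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
    docs = ["...chunked document text..."]              # placeholder for the real corpus

    # Embed the chunks once and build a simple inner-product index.
    vecs = embedder.encode(docs, normalize_embeddings=True)
    index = faiss.IndexFlatIP(vecs.shape[1])
    index.add(vecs)

    # At question time: retrieve the top-k chunks and stuff them into a local LLM prompt.
    q = embedder.encode(["what does the contract say about renewals?"],
                        normalize_embeddings=True)
    _, hits = index.search(q, 5)
    context = "\n".join(docs[i] for i in hits[0])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
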
turbocon•3mo ago
I want to know if this is any different from all of the AMD AI Max PCs with 128 GB of unified memory. The spec sheet says "128 GB LPDDR5x", so how is this better?

https://nvdam.widen.net/s/tlzm8smqjx/workstation-datasheet-d...

andsoitis•3mo ago
> AMD AI Max PCs with 128 GB of unified memory? The spec sheet says "128 GB LPDDR5x", so how is this better?

Framework's AMD AI Max PCs also come with LPDDR5x-8000 memory: https://frame.work/desktop?tab=specs

Numerlor•3mo ago
The GPU is significantly faster and it has CUDA, though I'm not sure where it'd fit in the market.

At the lower price points you have the AMD machines, which are significantly cheaper even though they're slower and have worse support. Then there's Apple with higher memory bandwidth, and even the Nvidia AGX Thor is faster in GPU compute at the cost of a worse CPU and networking. At the $3-4K price point even a Threadripper system becomes viable, and it can take significantly more memory.

yencabulator•3mo ago
> The GPU is significantly faster and it has CUDA,

But (non-batched) LLM processing is usually limited by memory bandwidth, isn't it? Any extra speed the GPU has is not used by current-day LLM inference.
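
Back-of-envelope, the bandwidth ceiling is easy to compute: each generated token reads every active weight once, so decode speed tops out around bandwidth divided by model size. A sketch with assumed round numbers (~273 GB/s for the DGX Spark, ~800 GB/s for an Apple Ultra-class part; both figures approximate):

    # Decode is roughly one full read of the weights per generated token, so:
    #   tokens/s (upper bound) ≈ memory bandwidth / bytes of active weights
    def decode_tok_per_s(bandwidth_gb_s, params_b, bytes_per_param):
        return bandwidth_gb_s * 1e9 / (params_b * 1e9 * bytes_per_param)

    # 70B model quantized to ~4 bits (0.5 bytes/param); bandwidths are rough assumptions.
    for name, bw in [("DGX Spark (~273 GB/s)", 273),
                     ("Apple Ultra-class (~800 GB/s)", 800)]:
        print(f"{name}: ~{decode_tok_per_s(bw, 70, 0.5):.1f} tok/s ceiling")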

Numerlor•3mo ago
I believe plain token generation is bandwidth-limited; prompt processing and other tasks, on the other hand, need the compute. As I understand it, the workstation as a whole is also focused on the local development process before readying things for the datacenter, not just on running LLMs.
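
The prefill/decode split can also be put in numbers: prompt processing does roughly 2 × params FLOPs per prompt token and runs those tokens in parallel, so it leans on compute, while decode re-reads the weights for every single token and leans on bandwidth. A sketch with assumed figures:

    params = 70e9                 # assumed 70B-parameter model
    prompt_tokens = 8000

    # Prefill: ~2 * params FLOPs per prompt token, all tokens processed in parallel,
    # so raw GPU compute dominates.
    prefill_flops = 2 * params * prompt_tokens
    print(f"prefill work ≈ {prefill_flops:.1e} FLOPs")   # ~1.1e15 FLOPs for this prompt

    # Decode: each new token re-reads the weights, so bandwidth sets the ceiling.
    bandwidth = 273e9             # assumed ~273 GB/s
    print(f"decode ceiling ≈ {bandwidth / (params * 0.5):.1f} tok/s")  # 4-bit weights
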
BoredPositron•3mo ago
CUDA.
mcphage•3mo ago
That’s a tiny box that draws 240 watts… what does it use for cooling?
gradientsrneat•3mo ago
Interesting, but perhaps not surprising, that the OS is Ubuntu-based, with Nvidia software preinstalled.
BizarroLand•3mo ago
Given that it runs on ARM chips and is specifically designed for AI tasks, I would be more surprised to see it running Windows by default
hulitu•3mo ago
> Nvidia sells tiny new computer that puts big AI on your desktop

A bit expensive for 128 GB of RAM. What can the CPU do? Can it flawlessly run all the svchost.exe instances in Windows 11? For this money, does it have a headphone output?