frontpage.

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•8m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•8m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•10m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•10m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•10m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
2•pseudolus•11m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•11m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•12m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•13m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•13m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•14m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•15m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•17m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•18m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•19m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•19m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•20m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•20m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•22m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
7•derriz•22m ago•1 comments

AI Skills Marketplace

https://skly.ai
1•briannezhad•22m ago•1 comments

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•22m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•23m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•26m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
2•edward•27m ago•1 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•28m ago•1 comments

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
2•geox•29m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
2•fortran77•31m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•33m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
2•BostonFern•33m ago•0 comments

128GB RAM Ryzen AI MAX+, $1699 – Bosman Undercuts All Other Local LLM Mini-PCs

https://www.hardware-corner.net/bosman-m5-local-llm-mini-pc-20250525/
43•mdp2021•8mo ago

Comments

billconan•8mo ago
Is its RAM upgradable?
magicalhippo•8mo ago
I would be very surprised. LPDDR is typically soldered, since traditional sockets draw too much power and are also much slower.

Though there has been a modular option called LPCAMM[1]. However, AFAIK it doesn't support the speeds this box's specs state.

Recently a newer connector, SOCAMM, has been launched[2], which does support these high memory speeds, but it has only just reached the market and is going into servers first AFAIK.

[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...

[2]: https://www.tomshardware.com/pc-components/ram/micron-and-sk...

duskwuff•8mo ago
SOCAMM is also Nvidia-specific, not a wider standard. (At least, not yet.)
aitchnyu•8mo ago
Will this save upgradable RAM on laptops? Then again, dual channel is needed and laptops often give only one slot to upgrade.
magicalhippo•8mo ago
Good question. Perhaps for higher-end models. Though cost, weight and physical space still weigh in favor of soldered RAM.
hnuser123456•8mo ago
No, it's soldered; it would have to run at around 6000 MT/s instead of 8533 if it used slotted DIMMs.
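
A minimal sketch of where those MT/s numbers lead, assuming the 256-bit LPDDR5X bus usually quoted for this class of APU (an assumption, not something stated in the thread); real-world throughput will be somewhat lower:

    # Theoretical bandwidth = transfers per second x bus width in bytes.
    # The 256-bit bus width is an assumption for this class of chip.
    def bandwidth_gb_s(mt_per_s: int, bus_width_bits: int = 256) -> float:
        bytes_per_transfer = bus_width_bits / 8         # 256 bits -> 32 bytes
        return mt_per_s * 1e6 * bytes_per_transfer / 1e9

    print(bandwidth_gb_s(8533))  # ~273 GB/s at LPDDR5X-8533 (soldered)
    print(bandwidth_gb_s(6000))  # ~192 GB/s if limited to ~6000 MT/s (slotted)
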
3eb7988a1663•8mo ago
If you are doing nothing but consuming models via llama.cpp, is the AMD chip an obstacle? Or is that more a problem for research/training where every CUDA feature needs to be present?
acheong08•8mo ago
Llama.cpp works well on AMD, even for really outdated GPUs. Ollama refuses to work with my RX 570 from 2019 but llama.cpp supports it via Vulkan.
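
As a hedged illustration of that route (not something the commenter posted), the llama-cpp-python bindings expose the same GPU-offload knob; this sketch assumes a build with the Vulkan backend enabled and uses a placeholder model path:

    # Minimal sketch using the llama-cpp-python bindings; assumes the package
    # was built with its Vulkan backend and "model.gguf" is a placeholder path.
    from llama_cpp import Llama

    llm = Llama(
        model_path="model.gguf",  # hypothetical GGUF file
        n_gpu_layers=-1,          # offload every layer to the GPU backend
    )
    out = llm("Q: Why use Vulkan for inference? A:", max_tokens=64)
    print(out["choices"][0]["text"])
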
Havoc•8mo ago
>Ollama refuses to work with my RX 570 from 2019 but llama.cpp supports it via Vulkan.

That's a bit odd given that Ollama uses llama.cpp to do the inference...

LorenDB•8mo ago
See recent discussion about this very topic: https://news.ycombinator.com/item?id=42886680
DiabloD3•8mo ago
It isn't odd at all. Ollama uses an ancient version of llama.cpp, and was originally meant to just be a GUI frontend. They forked, and then never resynchronized... and now lack the willpower and technical skill to achieve that.

Ollama is essentially a dead, yet semi-popular, project with a really good PR team. If you really want to do it right, you use llama.cpp.

washadjeffmad•8mo ago
Don't you dare say anything unpositive about Ollama this close to whatever it is they're planning to distinguish themselves from llama.cpp.

They've been out hustling, handshaking, dealmaking, and big businessing their butts off, whether or not they clearly indicate the shoulders of the titans like Georgi Gerganov they're wrapping, and you are NO ONE to stand in their way.

Do NOT blow this for them. Understand? They've scooted under the radar successfully this far, and they will absolutely lose their shit if one more peon shrugs at how little they contribute upstream for what they've taken that could have gone to supporting their originator.

Ollama supports its own implementation of ggml, btw. ggml is a mysterious format that no one knows the origins of, which is all the more reason to support Ollama, imo.

DiabloD3•8mo ago
Man, best /s text I've seen on here in awhile. I hope other people appreciate it.
DiabloD3•8mo ago
I don't bother with Nvidia products anymore. In a lot of ways, they're too little, too late. Nvidia products generally perform worse per dollar and worse per watt.

In a single GPU situation, my 7900XTX has gotten me farther than a 4080 would have, and matches the performance I expect from a 4090 for $600 less, and also 50-100w less.

Now, if you're buying used hardware, then yeah, go buy used (not new) high-VRAM Nvidia models, the ones with 80+ GB. You can't buy the AMD equivalents used yet, as their owners are happily holding onto them; they perform so well that the need to upgrade isn't there yet.

mdp2021•8mo ago
> my 7900XTX has gotten me farther than a 4080 would have

But is the absence of CUDA a constraint? Do neural networks work "out of the box"? How much of a hassle (if at all) is it to make things work? Do you meet incompatible software?

DiabloD3•8mo ago
llama.cpp is the SOTA inference engine that everyone in the know uses, and has a Vulkan backend.

Most software in the world is Vulkan, not CUDA, and CUDA only works on a minority of hardware. Not only that, AMD has a compatibility layer for CUDA, called HIP, part of the ROCm suite of legacy compatibility APIs, that isn't the most optimal in the world but gets me most of the performance I would expect from a similar Nvidia product.

Most software in the world (not just machine learning related stuff) is written in an API that is cross-compatible (OpenGL, OpenCL, Vulkan, Direct family APIs). Nvidia continually sending a message of "use CUDA" really means "we suck at standards compliance, and we're not good at the APIs most software is written in"; since everyone has realized the emperor wears no clothes, they've been backing off on that, and are slowly improving their standards compliance for other APIs; eventually, you won't need the crutch of CUDA, and you shouldn't be writing software today in it.

Nvidia has a bad habit of just dropping things without warning when they're done with them, don't be an Nvidia victim. Even if you buy their hardware, buying new hardware is easy: rewriting away from CUDA isn't (although, certainly doable, especially with AMD's HIP to help you). Just don't write CUDA today, and you're golden.

ilaksh•8mo ago
How does this sort of thing perform with 70b models?
hnuser123456•8mo ago
273 GB/s / 70 GB ≈ 3.9 tokens/sec
mdp2021•8mo ago
Are you sure that kind of computation can be a general rule?

Did you mean that the maximum achievable rate is "bandwidth / size"?

hnuser123456•8mo ago
Yes. For most LLMs, the transformer architecture means the entire model (plus context) is read from VRAM for every generated token.
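
A minimal sketch of that back-of-envelope rule (the quantization sizes are illustrative assumptions, and it ignores KV-cache traffic and real-world bandwidth efficiency):

    # When decoding is memory-bandwidth-bound, every generated token re-reads
    # roughly the whole set of weights, so tokens/sec ~= bandwidth / model size.
    def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    print(tokens_per_sec(273, 70))  # ~3.9 tok/s: 70B model at ~8-bit (~70 GB)
    print(tokens_per_sec(273, 40))  # ~6.8 tok/s: same model at ~4.5-bit (~40 GB)
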
olddustytrail•8mo ago
That's an odd coincidence. I'd decided to get a new machine, but I suspected we'd start seeing new releases with tons of GPU-accessible RAM as people want to experiment with LLMs.

So I just got a cheap (~350 USD) mini PC to keep me going until the better stuff came out. Which was a 24GB, 6c/12t CPU from a company I'd not heard of called Bosgame (dunno why the article keeps calling them Bosman unless they have a different name in other countries. It's definitely https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto... )

So my good machine might end up from the same place as my cheap one!

specproc•8mo ago
I've completely given up on local LLMs for my use cases. The newer models available by API from larger providers are cheap enough and come with strong enough guarantees for my org for most use cases. Crucially, they are just better.

I get there are uses where local is required, and as much as the boy racer teen in me loves those specs, I just can't see myself going in on hardware like that for inference.