frontpage.

Cppsp v1.4.5 – custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•41s ago•1 comments

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
1•bookofjoe•3m ago•1 comments

At Age 25, Wikipedia Refuses to Evolve

https://spectrum.ieee.org/wikipedia-at-25
1•asdefghyk•6m ago•2 comments

Show HN: ReviewReact – AI review responses inside Google Maps ($19/mo)

https://reviewreact.com
1•sara_builds•6m ago•0 comments

Why AlphaTensor Failed at 3x3 Matrix Multiplication: The Anchor Barrier

https://zenodo.org/records/18514533
1•DarenWatson•7m ago•0 comments

Ask HN: How much of your token use is fixing the bugs Claude Code causes?

1•laurex•11m ago•0 comments

Show HN: Agents – Sync MCP Configs Across Claude, Cursor, Codex Automatically

https://github.com/amtiYo/agents
1•amtiyo•12m ago•0 comments

Hello

1•otrebladih•13m ago•0 comments

FSD helped save my father's life during a heart attack

https://twitter.com/JJackBrandt/status/2019852423980875794
2•blacktulip•16m ago•0 comments

Show HN: Writtte – Draft and publish articles without reformatting, anywhere

https://writtte.xyz
1•lasgawe•18m ago•0 comments

Portuguese icon (FROM A CAN) makes a simple meal (Canned Fish Files) [video]

https://www.youtube.com/watch?v=e9FUdOfp8ME
1•zeristor•19m ago•0 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
2•gnufx•22m ago•0 comments

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•25m ago•0 comments

.72% Variance Lance

1•mav5431•26m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•28m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•28m ago•1 comments

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•29m ago•1 comments

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•30m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•31m ago•0 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
2•byandrev•31m ago•1 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•32m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•32m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•33m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•35m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•35m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•37m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on Github today

https://github.com/KeygraphHQ/shannon
1•hendler•37m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•42m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•42m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•43m ago•0 comments

Arm's Cortex A725 Ft. Dell's Pro Max with GB10

https://chipsandcheese.com/p/arms-cortex-a725-ft-dells-pro-max
61•pixelpoet•1w ago

Comments

crest•1w ago
I would love to see a comparison between the A725 and X925 cores.
geerlingguy•1w ago
Not quite in the same depth, but there are some more general benchmarks across all cores and latencies here: https://github.com/geerlingguy/sbc-reviews/issues/92
arjie•1w ago
Wow, this repo and the ai-benchmarks repo are the ones I wanted: https://github.com/geerlingguy/ai-benchmarks/issues/34

Thank you for doing these. Earned a star and a watch from me on both! Minor sponsor donation as gratitude.

Would be sick to have an RSS feed for your data releases.

geerlingguy•1w ago
Will consider that at some point; a lot of the time is just spent getting the data, heh.
ksec•1w ago
Note to self: the Cortex-X925 was originally called the X5. The current-generation X930 is now called the C1-Ultra, used in the MediaTek 9500.
pinnochio•1w ago
Apologies for the tangent, but isn't this like saying "sliced tomato featuring BLT sandwich"?
trynumber9•1w ago
No. It's trying to analyze the CPU core, but it clarifies the device under test because that may have performance implications: there are cooling constraints and possibly manufacturer-configured power limits.
pinnochio•1w ago
I get what they're doing. I've never seen that phrasing before.
cmrdporcupine•1w ago
This is awesome. I'm going to have to spend some time digging through this.

I got one of these GB10s, but the ASUS variety. So far fairly happy with it. Most days I don't remember I'm on ARM.

It's pretty performant and snappy, about the same speed as my other mini PC, a Ryzen 9 7940HS Minisforum UM790 Pro, but with double the number of cores and many times the RAM.

storystarling•1w ago
Have you tried running any local LLMs via llama.cpp? I am curious whether all that RAM is effectively usable as unified memory for larger models, and whether the memory bandwidth is sufficient to get decent performance on something like a 70B model, or if it bottlenecks.
justaboutanyone•1w ago
You can run large-ish MoE models at good speeds, like gpt-oss-120b; it's snappy enough even with a big context.

But large and dense at the same time is a bit slow.

Running a local LLM will cost a load of money for something much slower than the API providers, though.
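
(Illustrative sketch, not something from the thread: roughly what loading a large GGUF with the llama-cpp-python bindings looks like on a unified-memory box like the GB10, assuming a CUDA-enabled build. The model filename, context size, and prompt are placeholders, not measured settings.)

    # Sketch: load a big GGUF and offload every layer; on a unified-memory
    # machine the GPU can address the same large RAM pool as the CPU.
    from llama_cpp import Llama

    llm = Llama(
        model_path="gpt-oss-120b-Q4_K_M.gguf",  # placeholder filename
        n_ctx=32768,        # context window; long contexts grow the KV cache
        n_gpu_layers=-1,    # -1 = offload all layers to the GPU
        flash_attn=True,    # usually speeds up prompt processing
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize this chapter: ..."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])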

storystarling•1w ago
Makes sense regarding the MoE performance. I am not sure the cost argument holds up for high-volume workloads, though. If you are running batch jobs 24/7, the hardware pays for itself in a few months compared to API opex. It really just comes down to utilization.
storystarling•1w ago
Do you have specific t/s numbers for those dense models? I'm curious just how severe the memory bandwidth bottleneck gets in practice.

I'm not sure I agree on the cost aspect, though. For high-volume production workloads, the API bills scale linearly and can get painful fast. If you can amortize the hardware over a year and keep the data local for privacy, the math often works out in favor of self-hosting.
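
(Back-of-envelope sketch of that utilization argument. The 1650 and 27 tok/s figures are the gpt-oss-120b throughputs quoted in the reply below; the hardware price, power cost, duty cycle, and API rates are made-up placeholders, so treat the break-even point as illustrative only.)

    # Toy amortization math: local box running 24/7 vs. paying per token.
    SECONDS_PER_MONTH = 3600 * 24 * 30

    prefill_tok_s, decode_tok_s = 1650, 27    # throughputs quoted in this thread
    prefill_share = 0.5                       # placeholder duty cycle (prefill vs. decode)
    hardware_cost = 4000.0                    # placeholder machine price, USD
    power_per_month = 20.0                    # placeholder electricity cost, USD
    api_price_in, api_price_out = 0.25, 2.00  # placeholder $/million tokens

    tokens_in = prefill_tok_s * prefill_share * SECONDS_PER_MONTH
    tokens_out = decode_tok_s * (1 - prefill_share) * SECONDS_PER_MONTH
    api_bill = tokens_in / 1e6 * api_price_in + tokens_out / 1e6 * api_price_out

    print(f"API-equivalent bill: ${api_bill:,.0f}/month")
    print(f"break-even: ~{hardware_cost / (api_bill - power_per_month):.1f} months")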

justaboutanyone•1w ago
For Qwen2.5-72B-Instruct-Q5_K_M at 32k context, I fed it a 26k-token file (a truncated fiction novel) and asked it to summarize; it processed the input at 224 tok/s and generated output at 3 tok/s. Not really good enough for interactive use without frustration, not just from watching it reply but also from the long wait for it to actually read the book.

On the same hardware, with gpt-oss-120b at 128k context, I fed it a longer version of the input (a whole novel, 97k tokens); it processed the input at 1650 tok/s and generated output at 27 tok/s. Just fast enough, IMO.
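
(Rough bandwidth arithmetic behind those two results: generating a token has to stream roughly all of the model's active weights from memory once, so decode speed is capped near memory bandwidth divided by active-weight bytes. The ~273 GB/s figure is the commonly quoted GB10/DGX Spark spec and the model sizes are approximations, so read this as an order-of-magnitude sketch rather than a measurement.)

    # Order-of-magnitude decode ceiling: bandwidth / bytes of weights per token.
    BANDWIDTH_GB_S = 273.0  # approximate GB10 LPDDR5X bandwidth (assumed)

    def decode_ceiling(active_params_billion, bits_per_weight):
        gb_per_token = active_params_billion * bits_per_weight / 8  # GB read per token
        return BANDWIDTH_GB_S / gb_per_token                        # tok/s upper bound

    # Dense 72B at Q5_K_M (~5.5 bits/weight): every parameter is read each token.
    print(f"Qwen2.5-72B ceiling: ~{decode_ceiling(72, 5.5):.1f} tok/s (observed 3)")

    # gpt-oss-120b MoE: ~5.1B active params/token at ~4.25-bit MXFP4 weights.
    print(f"gpt-oss-120b ceiling: ~{decode_ceiling(5.1, 4.25):.0f} tok/s (observed 27)")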

cmrdporcupine•1w ago
I bought it primarily so I could learn some of the toolchain for fine-tuning / training stuff, not so much for running inference, which it's only "ok" at.

If I was primarily interested in that, I would have probably bought one of the cheaper Strix Halo machines.

It's also just a decent non-Mac ARM64 workstation with large quantities of RAM, which in 2026 is a bit of a unicorn.