
Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•3m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•4m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•6m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•6m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•9m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•9m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•14m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•16m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•16m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•16m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•19m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•22m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•24m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•30m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•32m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•37m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•39m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•39m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•42m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•43m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•45m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•47m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•49m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•51m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•54m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•54m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•54m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•56m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•1h ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comments

DeepSeek-v3.1-Terminus

https://api-docs.deepseek.com/news/news250922
101•meetpateltech•4mo ago

Comments

sbinnee•4mo ago
> What’s improved? Language consistency: fewer CN/EN mix-ups & no more random chars.

It's good that they made this improvement. But are there any advantages at this point to using DeepSeek over Qwen?

IgorPartola•4mo ago
I wish there was some easy resource to keep up with the latest models. The best I have come up with so far is asking one model to research the others. Realistically I want to know latest versions, best use case, performance (in terms of speed) relative to some baseline, and hardware requirements to run it.
exe34•4mo ago
> asking one model to research the others.

that's basically choosing at random with extra steps!

throwup238•4mo ago
Research it, not spit out an answer based on weights. Just ask Gemini/Claude to do deep research on /r/LocalLLama and HN posts.
Jgoauh•4mo ago
have you tried https://artificialanalysis.ai/
JimDugan•4mo ago
Dumb collation of benchmarks that the big labs are essentially training on. Livebench.ai is the industry standard: non-contaminated, with new questions every few months.
IgorPartola•4mo ago
Thanks! Are the scores in some way linear here? As in, if model A is rated at 25 and model B at 50, does that mean I will have half the mistakes with model B? Get answers that are 2x more accurate? Or is it subjective?
esafak•4mo ago
I believe the score represents the fraction of correct answers, so yes.
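If the score really is the fraction of correct answers, the "A at 25, B at 50" question above works out like this (a quick sanity check, not tied to any particular benchmark; the numbers are the hypothetical ones from the question):

```python
# If a benchmark score is the fraction of correct answers, doubling the
# score doubles the correct answers, but it does NOT halve your mistakes
# to the same degree: the error rates are 75% vs 50%, a 1.5x ratio.

score_a, score_b = 0.25, 0.50

accuracy_ratio = score_b / score_a              # how many more correct answers B gives
error_ratio = (1 - score_a) / (1 - score_b)     # how many more mistakes A makes

print(accuracy_ratio)  # 2.0
print(error_ratio)     # 1.5
```

So "2x the score" means 2x the correct answers but only 1.5x fewer mistakes in this example; the two framings in the question are not the same thing.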
alexeiz•4mo ago
It says the best "coding index" is held by Grok 4 and Gemini 2.5 Pro. Give me a break. Nobody uses those models for serious coding. It's dominated by Sonnet 4/Opus 4.1 and GPT-5.
__mharrison__•4mo ago
I use Aider heavily and find their benchmark to be pretty good. It is updated relatively frequently (a month ago, which may be an eternity in AI time).

https://aider.chat/docs/leaderboards/

comrade1234•4mo ago
MIT license that lets you run it on your own hardware and make money off of it.
coder543•4mo ago
Qwen3 models (including their 235B and 480B models) use the Apache-2.0 license, so it’s not like that’s a big difference here.
coder543•4mo ago
They seem fairly competitive with each other. You would have to benchmark them for your specific use case.
twotwotwo•4mo ago
The fast Cerebras thing got me to try the Qwen3 models. I couldn't get them working all that well: they had trouble using the required output format and following instructions. On the other hand, benchmarks say they should be great, and it sounds like maybe some people use them OK via different tools.

I'm curious if my experience was unusual (it very much could be!) and I'd be interested to hear from anyone who's used both.

yu3zhou4•4mo ago
I see no article at the link, just a "news250922" header with some layout
meetpateltech•4mo ago
It’s up again, check it.

Twitter/X post link: https://twitter.com/deepseek_ai/status/1970117808035074215

Also Hugging Face model link: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus

bratao•4mo ago
The link is off. This link works https://api-docs.deepseek.com/updates#deepseek-v31-terminus
esafak•4mo ago
Notable performance improvement in agentic tool use: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus

The Deepseek provider may train on your prompts: https://openrouter.ai/deepseek/deepseek-v3.1-terminus

storus•4mo ago
I tried V3.1 but it was driving me crazy by ignoring parts of user input, which R1 never did. I had many such instances: e.g. when asking about running DeepSeek 671B, it instead picked DeepSeek 67B, reasoning that 671B is too large to exist so I must have made a mistake, etc. I concluded that despite being better in benchmarks than R1, it was essentially useless due to this characteristic, and I instead started using R1 at OpenRouter. Not sure why deepseek.com removed R1 and left only V3.1 without any ability to switch back; I guess it's cheaper to run.
Grimblewald•4mo ago
Matches my experience in general as well. I find benchmarks largely useless for comparing current models. Many, despite improved metrics, are strictly worse than their predecessors. What little gains they show in some areas, like agentic use here, are often offset by far broader and often catastrophic losses.
binary132•4mo ago
sure would be neat if these companies would release models that could run on consumer hardware
edude03•4mo ago
So there are two ways to look at this - both hinge on how you define "consumer":

1) We haven't managed to distill models enough to get good enough performance to fit in the typical gaming desktop (say, 7B-24B class models). Even then, most consumers don't have high-end desktops, so even a 3060-class GPU requirement would exclude a lot of people.

2) Nothing is stopping you/anyone from buying 24-ish 5090s (a consumer hardware product) to get the required ~600GB-1TB of VRAM to run unquantized DeepSeek except time/money/know-how. Sure, it's unreasonably expensive, but it's not like labs are conspiring to prevent people from running these models; it's just expensive for everyone, and the common person doesn't have the funding to get into it.
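The ~600GB-1TB figure can be sanity-checked with back-of-envelope math. A sketch, where the 671B parameter count is from the thread and the 20% overhead factor (for KV cache and activations) is my assumption:

```python
# Back-of-envelope VRAM estimate for serving an LLM: parameter count
# times bytes per parameter, plus a fudge factor for KV cache and
# activation memory. Weights dominate, so this is a rough lower bound.

def vram_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Weights-only estimate (GB) scaled by an overhead factor."""
    return params_billion * bytes_per_param * overhead

# DeepSeek-V3.1 has 671B total parameters.
print(f"fp16: {vram_gb(671, 2):.0f} GB")    # ~1610 GB
print(f"fp8:  {vram_gb(671, 1):.0f} GB")    # ~805 GB
print(f"q4:   {vram_gb(671, 0.5):.0f} GB")  # ~403 GB
```

The fp8 and fp16 cases bracket the ~600GB-1TB range quoted above; at 32GB of VRAM per 5090, the fp8 case alone needs about 25 cards before interconnect overhead.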

regularfry•4mo ago
> 1) We haven't managed to distill models enough to get good enough performance to fit in the typical gaming desktop (say, 7B-24b class models).

That really depends on what "good enough" means. Qwen3-30b runs absolutely fine at q4 on a 24GB card, although that's also stretching "typical gaming desktop". It's competent as a code completion or aider-type coding agent model in that scenario.

But really we need both. Yes it would be nice to have things targeted to our own particular niche, but there are only so many labs cranking these things out. Small models will only get better from here.

__mharrison__•4mo ago
I'm using Qwen3Next on my MBP. It uses around 42GB of memory and, according to Aider benchmarks, has similar perf to GPT-4.1

https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Inst...

binary132•4mo ago
Just waiting on llama.cpp support :)

I usually use GPT-oss-120B with CPU MoE offloading. It writes at about 10tps, which is useful enough for the limited things I use it for. But I’m curious how Q3 Next will work (or whether I’ll be able to offload and run it with GPU acceleration at all.)

(4090)

qwertytyyuu•4mo ago
Got saturated really quickly, huh - less than one year
twotwotwo•4mo ago
Interesting: I'd seen Chinese characters surprise-inserted when it was just repeating back input with one provider, but not others. (I'd also occasionally seen tokens surprise-translated to Chinese.)

There's a GitHub bug about it that leads to more discussion here: https://github.com/deepseek-ai/DeepSeek-V3/issues/849

Good to see a fix and that it goes with some benchmark gains!

nojs•4mo ago
The language mixup thing seems to be an issue across all LLMs, as soon as you put some Chinese in the prompt they will often randomly respond in Chinese.

Also, given a partly Chinese prompt, Qwen will sometimes run its whole thinking trace in Chinese, which anecdotally seems to perform slightly worse for the same prompt versus an English thinking trace.