frontpage.

Getting Started with Jax for ML

https://docs.jaxstack.ai/en/latest/getting_started.html
1•danboarder•45s ago•0 comments

Laravel Multidomain Package for Multi-Tenancy [video]

https://www.youtube.com/watch?v=6kMM072Wxq0
1•unripe_syntax•53s ago•0 comments

Figma S-1

https://www.sec.gov/Archives/edgar/data/1579878/000162828025033742/figma-sx1.htm
1•cik•1m ago•0 comments

Redue: It can be unscrambled to form 18 valid words

https://wordsdescrambler.com/unscramble/redue/
1•gogo61•2m ago•0 comments

AI Background Removal and Change – Aichangebackground

https://aichangebackground.com
1•kewuku•4m ago•0 comments

Ask HN: How do you use your work computer in times of CrowdStrike?

1•trail477•4m ago•0 comments

The Criminal Enterprise Run by Monkeys

https://www.wsj.com/lifestyle/monkeys-thieves-bali-temple-0b63a432
1•Ozarkian•8m ago•0 comments

Supersized stick insect discovered in high-altitude trees in Australia

https://www.theguardian.com/environment/2025/jul/31/big-stick-insect-acrophylla-alta-found-north-queensland-trees
1•cpach•11m ago•1 comment

I built my blog with C preprocessor macros

https://wheybags.com/blog/macroblog.html
1•r4um•13m ago•0 comments

Decoding Zuck's Superintelligence Memo

https://om.co/2025/07/30/decoding-zucks-superintelligence-memo/
3•tosh•17m ago•0 comments

Startup? More Like Standup

1•cesargstn•22m ago•0 comments

How to Bypass Yandex Smart Captcha Easily? Find Out Now

https://whoerip.com/blog/how-to-bypass-yandex-smart-captcha/
2•denis_kkk•25m ago•0 comments

Teach AI your name through someone it trusts

https://lauradecastro.substack.com/p/teach-ai-your-name-through-someone
1•larub_•26m ago•0 comments

Windows 7 God Mode

https://learn.microsoft.com/en-us/answers/questions/2447533/windows-7-god-mode
2•picture•31m ago•0 comments

EES digital border checks: start date officially confirmed

https://www.connexionfrance.com/news/ees-digital-border-checks-start-date-officially-confirmed/737190
2•taubek•32m ago•0 comments

Pi-hole – Compromised Donor Emails: A post-mortem

https://pi-hole.net/blog/2025/07/30/compromised-donor-emails-a-post-mortem/
3•Mossy9•34m ago•0 comments

First NHS AI-run physio clinic in England halves back-pain waiting list

https://www.theguardian.com/society/2025/jul/31/nhs-first-ai-run-physio-clinic-in-england-halves-back-pain-waiting-list
1•NomDePlum•36m ago•3 comments

Files Are Living Rent-Free in Someone's Cloud Forever (and That's Weird)

https://medium.com/@jenni_emeka/your-files-are-living-rent-free-in-someones-cloud-forever-and-that-s-weird-315899277e81
1•tonycletus•37m ago•0 comments

Delta's AI-based price-gouging

https://pluralistic.net/2025/07/30/efficiency-washing/#medallion-clubbed
2•ColinWright•37m ago•0 comments

Psychologists simulate ghosting–and reveal why it's so damaging

https://www.psypost.org/psychologists-simulate-ghosting-and-reveal-why-its-so-damaging/
3•lentoutcry•38m ago•0 comments

Build Your Own Minisforum N5 Inspired Mini NAS: A Comprehensive Guide

https://jackharvest.com/index.php/2025/07/27/build-your-own-minisforum-n5-inspired-mini-nas-a-comprehensive-guide/
1•Bogdanp•38m ago•0 comments

Meta brought AI to rural Colombia. Now students are failing exams

https://restofworld.org/2025/colombia-meta-ai-education/
1•imartin2k•45m ago•0 comments

Takotsubo Cardiomyopathy

https://en.wikipedia.org/wiki/Takotsubo_cardiomyopathy
1•thunderbong•48m ago•0 comments

Confirmed that OpenRouter's new stealth model originates from OpenAI

https://old.reddit.com/r/RooCode/comments/1mduo94/confirmed_that_openrouters_new_stealth_model/
1•handfuloflight•50m ago•0 comments

Formal Inertia

https://daedeluskite.com/2025/07/31/formal-inertia/
1•asplake•51m ago•0 comments

Vibe Coding but not what you think

https://amritpandey.io/vibe-coding-but-not-what-you-think/
2•hardasspunk•52m ago•0 comments

Customer guidance for SharePoint vulnerability CVE-2025-53770

https://msrc.microsoft.com/blog/2025/07/customer-guidance-for-sharepoint-vulnerability-cve-2025-53770/
2•taubek•52m ago•0 comments

Areweloongyet.com – Tracking Software Support for Loongson's LoongArch ISA

https://areweloongyet.com/
1•uneven9434•52m ago•0 comments

Has AI coding gone too far? I feel like I'm losing control of my own projects

3•Shaun0•54m ago•0 comments

New Hidden State of Matter Could Make Computers 1,000x Faster

https://www.popularmechanics.com/science/a65531679/hidden-metallic-state/
1•Bluestein•55m ago•0 comments

Qwen3 30B-A3B

https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507
80•tosh•21h ago

Comments

syntaxing•19h ago
It’s interesting how the Qwen team more or less proved that hybrid reasoning doesn’t work and makes things worse. The fact that this model is almost on par with the bigger model in non-thinking mode (the old one; they have since released a non-hybrid model) is crazy.
rdos•19h ago
Qwen3 32B is a hybrid reasoning model and is very good. You have to generate a lot of think tokens for any agentic activity, but you will probably be running the model locally, so it won't be a problem. If you need something quick and simple, /no_think is good enough in my experience. It might also be because it's not a MoE architecture.
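
A minimal sketch of that soft switch in practice, assuming the ollama Python client; the model tag is illustrative, not necessarily the exact local pull:

    import ollama

    # Appending /no_think to a user message asks the hybrid Qwen3 model to
    # skip the <think>...</think> block and answer directly.
    response = ollama.chat(
        model="qwen3:32b",  # illustrative tag for the hybrid 32B model
        messages=[{"role": "user", "content": "What is 17 * 23? /no_think"}],
    )
    print(response["message"]["content"])
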
simonw•19h ago
Qwen3 32B was a hybrid model that came out in April, but these new Qwen July models have all ditched the hybrid mechanism and are either thinking or non-thinking.
littlestymaar•18h ago
By Qwen3-32B you mean the first released version from late April? I don't think Qwen3-32B-2507 has been released yet.

I agree with GP that since Qwen is now releasing updated Qwen3 versions without hybrid reasoning, and seeing a significant performance boost in the process, it likely means that the hybrid reasoning experiment was a failure.

varispeed•19h ago
Isn't that because all "reasoning" approaches are very much fake? The model cannot internalise the concepts it has to reason about. For instance, if you ask it why water feels wet, it is unable to grasp the concept of feeling and the sensation of wetness, but it will for sure "decompress" learned knowledge of people talking about how it feels to touch water.
simonw•19h ago
Everything about LLMs is fake. The "reasoning" trick is still demonstrably useful - the benchmarks consistently show models using that trick performing better at harder code challenges, for example.
ffsm8•16h ago
I'd argue that what's generally considered "reasoning" isn't actually rooted in understanding either. It's just the process you apply to get to a conclusion.

Expressed more abstractly: it's about drawing logical connections between points and extrapolating from them.

To quote the definition: "the action of thinking about something in a logical, sensible way."

I believe it's rooted in mathematics, not physics. That's probably why there is such a focus on the process instead of the result.

tosh•19h ago
This is basically a GPT-4 level model that runs (quantized) on a laptop with 32GB of RAM.

Yes, it doesn't recall facts from its training material as well, but with tool use (e.g. Wikipedia lookup) that's not a problem, and it's even preferable to a larger model.
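
A rough sketch of that tool-use pattern (not a setup anyone in this thread describes; the model tag, the wikipedia PyPI package, and the tool-message shape are assumptions based on recent ollama clients):

    import ollama
    import wikipedia

    MODEL = "qwen3:30b-a3b-instruct-2507-q4_K_M"  # illustrative tag

    def wikipedia_lookup(title: str) -> str:
        """Return a short summary of the named Wikipedia article."""
        return wikipedia.summary(title, sentences=3)

    messages = [{"role": "user", "content": "When was the Eiffel Tower completed?"}]
    # Recent ollama clients accept plain Python functions as tools.
    response = ollama.chat(model=MODEL, messages=messages, tools=[wikipedia_lookup])

    if response.message.tool_calls:
        messages.append(response.message)
        for call in response.message.tool_calls:
            result = wikipedia_lookup(**call.function.arguments)
            messages.append({"role": "tool", "content": result,
                             "tool_name": "wikipedia_lookup"})
        # Second pass: the model writes its answer from the tool output.
        response = ollama.chat(model=MODEL, messages=messages)
    print(response.message.content)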

anyg•18h ago
>basically a GPT-4 level model

Can you share more insights on this? Going by @simonw's testing, the quantized model doesn't seem close to GPT-4 level.

simonw•17h ago
I think calling it "GPT-4 level" is justified if we are talking about original GPT-4 from March 2023.
andygeorge•15h ago
in my limited testing, qwen3:30b-a3b-instruct-2507-q4_K_M is fast but far less accurate/helpful than gemma3:27b-it-q4_K_M
simonw•19h ago
You can try it here: https://chat.qwen.ai/?model=Qwen3-30B-A3B-2507

I got a cute pelican out of it (with a smile!) https://simonwillison.net/2025/Jul/29/qwen3-30b-a3b-instruct...

I ran a version of it on my Mac using https://huggingface.co/lmstudio-community/Qwen3-30B-A3B-Inst... - it uses 30GB of RAM so probably needs 48GB for comfort.
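
For anyone wanting to reproduce this, a minimal sketch using the mlx-lm package; the full repo id is an assumption, since the URL above is truncated:

    from mlx_lm import load, generate

    # An 8-bit MLX build would roughly match the ~30GB RAM figure mentioned.
    model, tokenizer = load("lmstudio-community/Qwen3-30B-A3B-Instruct-2507-MLX-8bit")

    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}],
        add_generation_prompt=True,
        tokenize=False,
    )
    print(generate(model, tokenizer, prompt=prompt, max_tokens=512))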

juujian•18h ago
Do we know the knowledge cutoff date for Qwen?
jwr•18h ago
Can't wait for it to be available in Ollama so that I can run my spam-filtering benchmarks against it. qwen3:30b-a3b-q4_K_M was very good, and only bested by gemma3:27b-it-qat for spam filtering. But gemma3 is much slower. Looking forward to trying this!
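
A minimal sketch of what such a spam-filtering benchmark could look like against a local Ollama model (the prompt, examples, and scoring are illustrative, not the commenter's actual harness):

    import ollama

    MODEL = "qwen3:30b-a3b-q4_K_M"

    def classify(email_text: str) -> str:
        response = ollama.chat(
            model=MODEL,
            messages=[{"role": "user",
                       "content": "Reply with exactly SPAM or HAM.\n\n" + email_text}],
        )
        return response["message"]["content"].strip().upper()

    labelled = [
        ("You have WON a FREE cruise, click here now!!!", "SPAM"),
        ("Minutes from Tuesday's standup are attached.", "HAM"),
    ]
    correct = sum(classify(text) == label for text, label in labelled)
    print(f"accuracy: {correct}/{len(labelled)}")
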
jasonjmcghee•17h ago
The new models have been available for 18 hours.

https://ollama.com/library/qwen3:30b

pkroll•12h ago
As jasonjmcghee says, they're available... but if you go to ollama.com and sort models by "newest", you'll see Mistral at the top (specifically mistral-small3.2 at this writing), because the sort appears to go by the newest model family rather than the newest update. So you need to scroll down to "qwen3" to see that it's been updated.

Slightly frustrating. But good to know.

bertili•18h ago
This thing flies on a MacBook M4 Max 128GB at over 100 t/s for small contexts, and over 20 t/s for large contexts (MLX 4-bit quant).
nico•16h ago
Is it good at using tools?

It would be nice to have a fast local model that is good at using tools.

syntaxing•15h ago
All Qwen models are good at using tools, even the smaller 4B one. The 1.7B one gets confused easily.
nico•14h ago
Thank you

Have you tried using them with something like Claude Code or Aider?

syntaxing•8h ago
I’ve used it with Aider (the 32B and the previous 30B; I haven't tried this fully non-thinking one yet) and the 4B with Home Assistant. Both work great in terms of tool calling.
menaerus•32m ago
What types of tasks/tools are we talking about here? Asking questions about the content of (PDF) documents, or something else?
revskill•17h ago
It can solve a Rubik's Cube.
simonw•16h ago
... and they just released another model, this time https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507 - the reasoning equivalent of Qwen3-30B-A3B-Instruct-2507

My notes (pelican and space invaders included) here: https://simonwillison.net/2025/Jul/30/qwen3-30b-a3b-thinking...

This is the 5th model from Qwen in 9 days!

Qwen3-235B-A22B-Instruct-2507 - 21st July

Qwen3-Coder-480B-A35B-Instruct - 22nd July

Qwen3-235B-A22B-Thinking-2507 - 25th July

Qwen3-30B-A3B-Instruct-2507 - 29th July

Qwen3-30B-A3B-Thinking-2507 - today

anon373839•9h ago
This model is truly the best for local document processing. It’s super fast, very smart, has a low hallucination rate, and has great long-context performance (up to 256K tokens). The speed makes it a legitimate replacement for those closed, proprietary APIs that hoard your data.
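
One practical note if you try long-document work through Ollama: the default context window is only a few thousand tokens, so num_ctx has to be raised explicitly. A minimal sketch, with an illustrative model tag and file name:

    import ollama

    document = open("report.txt").read()  # pre-extracted text, e.g. from a PDF

    response = ollama.chat(
        model="qwen3:30b-a3b-instruct-2507-q4_K_M",
        messages=[{"role": "user",
                   "content": document + "\n\nSummarise the key findings above."}],
        options={"num_ctx": 131072},  # raised explicitly; below the 256K ceiling
    )
    print(response["message"]["content"])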