frontpage.
Codex for almost everything

https://openai.com/index/codex-for-almost-everything/
420•mikeevans•3h ago•217 comments

Claude Opus 4.7

https://www.anthropic.com/news/claude-opus-4-7
1140•meetpateltech•6h ago•850 comments

PCI Express over Fiber [video]

https://www.youtube.com/watch?v=XaDa9bBucEI
74•mmastrac•5d ago•19 comments

German Dog Commands

https://www.fluentu.com/blog/german/german-dog-commands/
19•rolph•1h ago•21 comments

Cloudflare's AI Platform: an inference layer designed for agents

https://blog.cloudflare.com/ai-platform/
192•nikitoci•7h ago•45 comments

TigerBeetle: A Trillion Transactions [video]

https://www.youtube.com/watch?v=y2_BqkKTbD8
32•adityaathalye•4d ago•12 comments

Qwen3.6-35B-A3B: Agentic coding power, now open to all

https://qwen.ai/blog?id=qwen3.6-35b-a3b
721•cmitsakis•6h ago•340 comments

Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7

https://simonwillison.net/2026/Apr/16/qwen-beats-opus/
99•simonw•2h ago•22 comments

Launch HN: Kampala (YC W26) – Reverse-Engineer Apps into APIs

https://www.zatanna.ai/kampala
49•alexblackwell_•5h ago•50 comments

Put your SSH keys in your TPM chip

https://raymii.org/s/tutorials/Put_your_SSH_keys_in_your_TPM_chip.html
67•type0•4d ago•69 comments

The future of everything is lies, I guess: Where do we go from here?

https://aphyr.com/posts/420-the-future-of-everything-is-lies-i-guess-where-do-we-go-from-here
390•aphyr•6h ago•400 comments

Circuit Transformations, Loop Fusion, and Inductive Proof

https://natetyoung.github.io/carry_save_fusion/
5•matt_d•3d ago•0 comments

Darkbloom – Private inference on idle Macs

https://darkbloom.dev
453•twapi•16h ago•220 comments

Artifacts: Versioned storage that speaks Git

https://blog.cloudflare.com/artifacts-git-for-agents-beta/
104•jgrahamc•7h ago•6 comments

Show HN: MacMind – A transformer neural network in HyperCard on a 1989 Macintosh

https://github.com/SeanFDZ/macmind
91•hammer32•7h ago•27 comments

IPv6 traffic crosses the 50% mark

https://www.google.com/intl/en/ipv6/statistics.html?yzh=28197
726•Aaronmacaron•1d ago•518 comments

We gave an AI a 3 year retail lease and asked it to make a profit

https://andonlabs.com/blog/andon-market-launch
160•lukaspetersson•5h ago•235 comments

Show HN: CodeBurn – Analyze Claude Code token usage by task

https://github.com/AgentSeal/codeburn
48•agentseal•2d ago•13 comments

The paper computer

https://jsomers.net/blog/the-paper-computer
256•jsomers•3d ago•81 comments

Six Characters

https://ajitem.com/blog/iron-core-part-2-six-characters/
63•Airplanepasta•3d ago•9 comments

FSF trying to contact Google about spammer sending 10k+ mails from Gmail account

https://daedal.io/@thomzane/116410863009847575
346•pabs3•16h ago•197 comments

Cloudflare Email Service

https://blog.cloudflare.com/email-for-agents/
348•jilles•7h ago•154 comments

Japan implements language proficiency requirements for certain visa applicants

https://www.japantimes.co.jp/news/2026/04/15/japan/society/jlpt-visa-requirement/
84•mikhael•3h ago•44 comments

Codex Hacked a Samsung TV

https://blog.calif.io/p/codex-hacked-a-samsung-tv
175•campuscodi•9h ago•104 comments

AI cybersecurity is not proof of work

https://antirez.com/news/163
169•surprisetalk•9h ago•74 comments

Modern Microprocessors – A 90-Minute Guide

https://www.lighterra.com/papers/modernmicroprocessors/
163•Flex247A•4d ago•19 comments

European civil servants are being forced off WhatsApp

https://www.politico.eu/article/european-civil-servants-new-messaging-services/
17•aa_is_op•47m ago•4 comments

ChatGPT for Excel

https://chatgpt.com/apps/spreadsheets/
311•armcat•23h ago•190 comments

PHP 8.6 Closure Optimizations

https://wiki.php.net/rfc/closure-optimizations
93•moebrowne•2d ago•21 comments

RamAIn (YC W26) Is Hiring

https://www.ycombinator.com/companies/ramain/jobs/bwtwd9W-founding-gtm-operations-lead
1•svee•13h ago

Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7

https://simonwillison.net/2026/Apr/16/qwen-beats-opus/
99•simonw•2h ago

Comments

ericpauley•1h ago
Going to have to disagree on the backup test. The Opus flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.

I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.

wongarsu•34m ago
Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version

But in terms of making something physically plausible, Opus certainly got a lot closer

kmacdough•20m ago
Given that adherence is the more significant practical barrier, it's probably the better signal. That is, if we decide to look for a signal here.

comandillos•1h ago
I've been using Qwen3.5-35B-A3B for a bit via OpenCode and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.

iib•1h ago
This is about the newly released Qwen3.6. Just wanted to make sure you caught that.

mentalgear•1h ago
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted to it if they wanted, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activities (a whale on a skateboard) than to always use the same one.
simonw•51m ago
That's why I did the flamingo on a unicycle.

For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.

prodigycorp•40m ago
To me the Opus flamingo is waaaay better than the Qwen one. Qwen has the better pelican, though.

dude250711•36m ago
Is a flamingo on a unicycle not merely a special case of a pelican on a bicycle?

furyofantares•20m ago
It is completely wild to me that you prefer Qwen's flamingo. I think it's really bad and Opus' is pretty good.

simonw•18m ago
The Opus one doesn't even have a bowtie.

akavel•15m ago
r/LocalLlama is now doing a horse in a racing car:

https://redd.it/1slz38i

jbellis•52m ago
For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size qwen 3.5. So it's at best very slightly improved, and not at all in the class of qwen 3.5 27b dense (26 solved), let alone opus (95/98 solved, for 4.6).

__natty__•17m ago
You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against tiny frontier models like Haiku, Flash, or GPT nano.

ericd•15m ago
Eh, it's important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.

javawizard•6m ago
Not when the article they're commenting on was doing literally exactly the same thing.
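For what it's worth, the solve counts jbellis quotes work out to the following rates (a minimal Python sketch; the model names and counts are taken from that comment, not independently verified):

```python
# Solve counts quoted upthread, out of 98 Power Ranking tasks (best-of-two).
results = {
    "qwen 3.6 35b a3b": 11,
    "qwen 3.5 35b a3b": 10,
    "qwen 3.5 27b dense": 26,
    "opus 4.6": 95,
}
TOTAL = 98

for model, solved in results.items():
    # .1% formats the ratio as a percentage with one decimal place.
    print(f"{model}: {solved}/{TOTAL} = {solved / TOTAL:.1%}")
# qwen 3.6 35b a3b: 11/98 = 11.2%
# qwen 3.5 35b a3b: 10/98 = 10.2%
# qwen 3.5 27b dense: 26/98 = 26.5%
# opus 4.6: 95/98 = 96.9%
```

So the 3.5 → 3.6 jump at this size is about one percentage point, while the dense 27b and opus sit in a different band entirely.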
19qUq•46m ago
How about switching to MechaStalin on a tricycle? It gets kind of boring.

mvanbaak•24m ago
Boring? The way all the models fail at a simple task never gets boring to me.

VHRanger•34m ago
That's not surprising; Opus and Sonnet have been regressing on many non-coding tasks since about the 4.1 release in our testing.

aliljet•34m ago
I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?

smashed•26m ago
OpenCode?

lofaszvanitt•18m ago
That Qwen flamingo on the unicycle is actually quite good. A work of art.