frontpage.

My AI skeptic friends are all nuts

https://fly.io/blog/youre-all-nuts/
822•tabletcorry•5h ago•1134 comments

Ask HN: Who is hiring? (June 2025)

273•whoishiring•11h ago•259 comments

Conformance checking at MongoDB: Testing that our code matches our TLA+ specs

https://www.mongodb.com/blog/post/engineering/conformance-checking-at-mongodb-testing-our-code-matches-our-tla-specs
51•todsacerdoti•4h ago•20 comments

Show HN: I build one absurd web project every month

https://absurd.website
139•absurdwebsite•6h ago•29 comments

Show HN: A toy version of Wireshark (student project)

https://github.com/lixiasky/vanta
191•lixiasky•11h ago•64 comments

Show HN: Kan.bn – An open-source alternative to Trello

https://github.com/kanbn/kan
353•henryball•17h ago•162 comments

Teaching Program Verification in Dafny at Amazon (2023)

https://dafny.org/blog/2023/12/15/teaching-program-verification-in-dafny-at-amazon/
21•Jtsummers•4h ago•4 comments

Ask HN: How do I learn practical electronic repair?

37•juanse•2d ago•32 comments

How to post when no one is reading

https://www.jeetmehta.com/posts/thrive-in-obscurity
511•j4mehta•22h ago•228 comments

Japanese Scientists Develop Artificial Blood Compatible with All Blood Types

https://www.tokyoweekender.com/entertainment/tech-trends/japanese-scientists-develop-artificial-blood/
104•Geekette•5h ago•25 comments

Show HN: Onlook – Open-source, visual-first Cursor for designers

https://github.com/onlook-dev/onlook
329•hoakiet98•4d ago•74 comments

CVE-2025-31200

https://blog.noahhw.dev/posts/cve-2025-31200/
93•todsacerdoti•7h ago•23 comments

ThorVG: Super Lightweight Vector Graphics Engine

https://www.thorvg.org/about
100•elcritch•16h ago•22 comments

Typing 118 WPM broke my brain in the right ways

http://balaji-amg.surge.sh/blog/typing-118-wpm-brain-rewiring
103•b0a04gl•7h ago•145 comments

Show HN: Penny-1.7B Irish Penny Journal style transfer

https://huggingface.co/dleemiller/Penny-1.7B
128•deepsquirrelnet•11h ago•71 comments

Arcol simplifies building design with browser-based modeling

https://www.arcol.io/
45•joeld42•10h ago•24 comments

Snowflake to buy Crunchy Data for $250M

https://www.wsj.com/articles/snowflake-to-buy-crunchy-data-for-250-million-233543ab
119•mfiguiere•6h ago•49 comments

Younger generations less likely to have dementia, study suggests

https://www.theguardian.com/society/2025/jun/02/younger-generations-less-likely-dementia-study
69•robaato•11h ago•59 comments

Ask HN: Who wants to be hired? (June 2025)

99•whoishiring•11h ago•248 comments

Ask HN: How do I learn robotics in 2025?

288•srijansriv•13h ago•82 comments

I made a chair

https://milofultz.com/2025-05-27-i-made-a-chair.html
328•surprisetalk•2d ago•125 comments

The Princeton INTERCAL Compiler's source code

https://esoteric.codes/blog/published-for-the-first-time-the-original-intercal72-compiler-code
131•surprisetalk•1d ago•36 comments

Mesh Edge Construction

https://maxliani.wordpress.com/2025/03/01/mesh-edge-construction/
38•atomlib•11h ago•1 comment

Piramidal (YC W24) Is Hiring a Senior Full Stack Engineer

https://www.ycombinator.com/companies/piramidal/jobs/1a1PgE9-senior-full-stack-engineer
1•dsacellarius•9h ago

A Hidden Weakness

https://serge-sans-paille.github.io/pythran-stories/a-hidden-weakness.html
29•serge-ss-paille•12h ago•1 comment

If you are useful, it doesn't mean you are valued

https://betterthanrandom.substack.com/p/if-you-are-useful-it-doesnt-mean
746•weltview•17h ago•333 comments

Intelligent Agent Technology: Open Sesame! (1993)

https://blog.gingerbeardman.com/2025/05/31/intelligent-agent-technology-open-sesame-1993/
40•msephton•2d ago•3 comments

Can I stop drone delivery companies flying over my property?

https://www.rte.ie/brainstorm/2025/0602/1481005-drone-delivery-companies-property-legal-rights-airspace/
88•austinallegro•7h ago•185 comments

TradeExpert, a trading framework that employs Mixture of Expert LLMs

https://arxiv.org/abs/2411.00782
106•wertyk•16h ago•99 comments

Reducing Cargo target directory size with -Zno-embed-metadata

https://kobzol.github.io/rust/rustc/2025/06/02/reduce-cargo-target-dir-size-with-z-no-embed-metadata.html
48•todsacerdoti•13h ago•13 comments

Beyond Attention: Toward Machines with Intrinsic Higher Mental States

https://arxiv.org/abs/2505.06257
66•holografix•1d ago

Comments

quinnjh•1d ago
This is, intuitively, a really exciting title. Looking forward to reading / seeing similar work.
bwest87•1d ago
I did a chat with Gemini about the paper, and the tl;dr is:
* They introduce a loop at the beginning between the Q, K, and V vectors (theoretically representing the "question", "clues", and "hypothesis" of thinking)
* This loop contains a nonlinearity (ReLU)
* The loop is used to "pre-select" relevant info
* They then feed that into a lightweight attention mechanism.

They claim order-of-magnitude faster learning and robustness across domains. There's enough detail to probably do your own PyTorch implementation, though they haven't released code. The paper has been accepted into AMLDS 2025, so it is peer reviewed.

At first blush, this sounds really exciting and if results hold up and are replicated, it could be huge.
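For a concrete (and purely speculative) picture, here is a rough sketch of what a ReLU-gated Q/K/V pre-selection loop feeding a lightweight attention step might look like; no code has been released, so every name, shape, and operation below is a guess based on the summary above, not the paper's implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PreselectAttention(nn.Module):
        # Hypothetical sketch: a small ReLU-gated loop over Q/K/V that
        # "pre-selects" information before one lightweight attention step.
        def __init__(self, dim, n_iters=2):
            super().__init__()
            self.q_proj = nn.Linear(dim, dim)
            self.k_proj = nn.Linear(dim, dim)
            self.v_proj = nn.Linear(dim, dim)
            self.n_iters = n_iters

        def forward(self, x):  # x: (batch, tokens, dim)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            for _ in range(self.n_iters):
                gate = F.relu(q * k)   # guessed "question x clues" agreement
                v = v * gate           # keep only what the gate lets through
                q = q + F.relu(v)      # "hypothesis" feeds back into the question
            # lightweight single-head attention over the pre-selected values
            attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
            return attn @ v

    out = PreselectAttention(16)(torch.randn(2, 5, 16))
    print(out.shape)  # torch.Size([2, 5, 16])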

saagarjha•1d ago
I don't want to dismiss this outright, but I'm skimming this paper and pretty skeptical of something that's from a single author, doesn't appear to be peer reviewed, spends most of its time talking about actual biology, comes up with a "ReLU6" (ReLU capped at a maximum value of 6), and then pushes detailed review to a future paper.
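(For reference, ReLU6 as usually defined just clips activations into the range [0, 6], e.g. in PyTorch:)

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-2.0, 3.0, 7.5])
    print(F.relu6(x))                          # tensor([0., 3., 6.])
    print(torch.clamp(x, min=0.0, max=6.0))    # equivalent: clamp into [0, 6]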
amelius•1d ago
He wrote this paper, "Cooperation is All You Need", with a group of people:

https://arxiv.org/pdf/2305.10449

And this paper in an IEEE journal:

https://arxiv.org/pdf/2211.01950

yorwba•1d ago
Figure 3B in "Cooperation is All You Need" shows the same score curves as the top left of Figure 6 in "Beyond Attention," so it must be basically the same implementation. Yet that earlier paper is only cited once, in the Acknowledgements section. As far as I can tell, the only mathematical change in this paper is capping the ReLU at 6. But it also adds a bunch of grandiose verbiage ("triadic modulation loops", "awake thought").

The author is clearly a crackpot. Maybe he wasn't a crackpot when he still managed to publish in peer-reviewed journals, but cognitive decline over time is not exactly unheard of.

frozenseven•1d ago
Cool insults. But perhaps you can explain why he's wrong?
anothermathbozo•1d ago
It's unwarranted and totally spiteful for you to make unqualified claims like "cognitive decline" from skimming two papers. This is shameful.
habinero•1d ago
I swear, most of the AI "papers" that get posted here are someone screwing around with ChatGPT on ketamine and deciding they're advancing humanity.
ivape•1d ago
You’ve just discovered the future of a jobless economy. Please write a blog post and I will surely upvote you.

Ketamine is all you need

geeunits•1d ago
Sat here vibe coding a pure assembly kernel for arm64, APL layer with conceptual memory layout. On my bed, eating a bag of chips, jobless since Jan. Everything but the Ket is mine.
ivape•1d ago
You serious?
geeunits•1d ago
yasqueen ← {'yes'≡⎕C ⍵}  ⍝ 1 if the case-folded argument is exactly 'yes'
TeMPOraL•1d ago
Who knows, but drop the word "vibe" and this is basically the startup culture 15 years ago, so ¯\_(ツ)_/¯.

Well, okay, for better historical accuracy, replace APL with API, and the kernel-for-arm64 thing with Ruby on Rails on a new MacBook, but the point still stands.

ldng•1d ago
Can the anthropomorphic scam continue unchecked? Apparently yes.
ImHereToVote•1d ago
If modeling cognitive processes is a scam, then neuroscience must be the longest-running con in history.
TeMPOraL•1d ago
Probably as long as non-anthropomorphic idiocy can.

No opinion on this submission, but a more general point. I'm not the one to jump into anthropomorphizing computers, but the last year or two of LLM and adjacent research has been a constant stream of papers and experiments that totally surprise everyone who refuses to even entertain comparisons between LLMs and people, while being entirely expected and completely unsurprising to those who do.

mirekrusin•1d ago
The results in this paper look way too good; I guess we'll have to wait for peer review and replications to see if they hold up.
RockyMcNuts•1d ago
When you stack transformers, don't you get meta-attention and higher mental states?
edflsafoiewq•1d ago
I don't understand the "Triadic Modulation Loop" block; does anyone else?

Also

> Competing interests: AA has a provisional patent application for the algorithm used in this paper.