frontpage.

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•2m ago•1 comment

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•5m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
3•chwtutha•5m ago•0 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
2•osnium123•6m ago•1 comment

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
1•jeremy_su•8m ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•10m ago•0 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•16m ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•18m ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•29m ago•1 comment

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
2•thread_id•30m ago•1 comment

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•31m ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•34m ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
1•paladin314159•34m ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•36m ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•36m ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•37m ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
2•medbar•38m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•39m ago•1 comment

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•39m ago•1 comment

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•39m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•42m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•45m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•47m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•51m ago•1 comment

Ask HN: The Coming Class War

2•fud101•51m ago•4 comments

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•53m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
2•petethomas•54m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•54m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•58m ago•1 comment

How much attention do you need, really? Experiments in O(1) latent reasoning

https://www.notion.so/Direct-Semantic-Reasoning-Unit-The-O-1-AI-Primitive-That-Reasons-In-Latent-Space-22fc65dfc8738069aa62e8b563b8e6b4?source=copy_link
2•orderone_ai•6mo ago

Comments

orderone_ai•6mo ago
Hello, fellow kids!

I want to share what I've been working on for the last few weeks: O(1) inference across whole tasks through direct vector transformation. A few facts upfront to give you an idea of how it goes:

1. Implemented as part of a PoC of what I call the Promptable General Classifier: a classifier that can be prompted for general tasks, including (some, limited) reasoning tasks, and whose vocabulary/classes are hot-swappable at inference time (see the sketch after this list). The 1.09B implementation:

    1. Runs 93x faster than Zephyr 7B (and this is being generous to Zephyr, as I had to add post-processing to extract labels from malformed LLM output, and I didn't count the time needed for that post-processing in Zephyr's benchmarks)

    2. Matches Zephyr 7B's batched accuracy across 13 tasks at 77.7% (the unbatched Zephyr run gets one more correct, so 80%; the DSRU is much more deterministic and gets no accuracy boost from running unbatched). Note that I did prompt engineering on 2-3 of these tasks to help the DSRU. It seemed to have no impact on Zephyr's performance, which I'm assuming is because Zephyr is a robust, professionally built LLM rather than a PoC of a new architecture made by a lone amateur researcher.

    3. ~19x lower latency than Zephyr 7B
2. Separately trained on entailment tasks, it scored 80% (~2.66x better than chance) on a 3-label text entailment task (entails, contradicts, neutral) and 50% on a 3-label multiple-choice entailment task ('1', '2', '3'); the white paper has notes on why the difference.

3. At 1.09B, the core model's inference time is around 1ms per batch, but this is purely in post-attention latent space. The model has generalization capabilities but lacks the full flexibility of an LLM. In exchange for giving that up, it gains extreme inference speed, determinism, and extremely straightforward training with smooth loss landscapes. I was a bit hesitant to put this out so early; I kept thinking about edge cases and ways I could add just a bit more rigor, but I decided the perfect was the enemy of the good and put together this white paper over a couple of weekends, with some midweek refinements.
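
For anyone who wants the shape of the thing in code: below is a minimal illustrative sketch of the direct-vector-transformation idea. To be clear, this is toy code written for this comment, not the actual DSRU architecture; every class name, dimension, and hyperparameter here is made up. A small trained network maps pre-computed task and input embeddings straight to a predicted label embedding in one fixed-size forward pass, and the class vocabulary is just an embedding matrix you can swap at inference time:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DirectVectorClassifier(nn.Module):
        """Toy stand-in, not the real DSRU: maps (task emb, input emb) to a
        predicted label embedding in one fixed-size forward pass. There is
        no token-by-token decoding, so cost doesn't grow with output length."""
        def __init__(self, emb_dim: int, hidden: int = 2048):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * emb_dim, hidden),
                nn.GELU(),
                nn.Linear(hidden, emb_dim),
            )

        def forward(self, task_emb, input_emb):
            return self.mlp(torch.cat([task_emb, input_emb], dim=-1))

    def classify(model, task_emb, input_emb, label_embs):
        # label_embs is the hot-swappable part: hand in a different
        # (num_labels, emb_dim) matrix and the class vocabulary changes,
        # with no retraining and no model reload.
        pred = model(task_emb, input_emb)  # (batch, emb_dim)
        sims = F.cosine_similarity(
            pred.unsqueeze(1), label_embs.unsqueeze(0), dim=-1
        )  # (batch, num_labels)
        return sims.argmax(dim=-1)

    # Random tensors stand in for a frozen encoder's output embeddings.
    emb_dim = 768
    model = DirectVectorClassifier(emb_dim)
    task = torch.randn(4, emb_dim)    # e.g. "label the entailment relation"
    texts = torch.randn(4, emb_dim)   # the inputs to classify
    labels = torch.randn(3, emb_dim)  # ["entails", "contradicts", "neutral"]
    print(classify(model, task, texts, labels))

In this toy version, training reduces to regressing predicted embeddings onto gold-label embeddings (cosine or MSE loss), which is exactly the kind of setup where you'd expect the smooth loss landscapes I mentioned above.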

I'll be releasing a full reference implementation of the training pipeline, one that can run on midrange consumer hardware with default settings, on GitHub in… I'm thinking 4 weeks, probably, depending on how busy I end up being; doing this with a day job has been... a lot, to say the least.

I'd release it now, but frankly, it's an embarrassing ball of mud that I hacked together haphazardly while chasing positive signal. Now that I've gotten this far, I can implement it more thoughtfully, and try a new model architecture that I think will work a lot better for a lot of comparative reasoning tasks.

It is patent pending, but I'm permitting personal experimentation and thesis work without restriction. This includes grad students using it for their degrees! You can share results and discuss your work, but distribution of trained models or derivatives is not permitted. For funded research, institutional use, or anything commercial, usage is not permitted for now.

I hope you all find it interesting!