
KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•4m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•6m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•6m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•7m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•9m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•9m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•10m ago•0 comments

Show HN: RMA Dashboard - fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•10m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•15m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•17m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•17m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•19m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•20m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•20m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•22m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•23m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•23m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•23m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•25m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•27m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•27m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•28m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•29m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•29m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•33m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•33m ago•2 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•33m ago•1 comment

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•33m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•36m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•36m ago•0 comments

How much attention do you need, really? Experiments in O(1) latent reasoning

https://www.notion.so/Direct-Semantic-Reasoning-Unit-The-O-1-AI-Primitive-That-Reasons-In-Latent-Space-22fc65dfc8738069aa62e8b563b8e6b4?source=copy_link
2•orderone_ai•6mo ago

Comments

orderone_ai•6mo ago
Hello, fellow kids!

I want to share what I've been working on for the last few weeks: O(1) inference across whole tasks through direct vector transformation. A few facts upfront to give you an idea of how it goes:

1. Implemented as part of a PoC of what I call the Promptable General Classifier: a classifier that can be prompted for general tasks, including (some, limited) reasoning tasks, and that has inference-time hot-swappable vocabulary/classes. The 1.09B implementation:

    1. Runs 93x faster than Zephyr 7B (and this is being generous to Zephyr, as I had to add post-processing to extract labels from malformed LLM output, and I didn't count the time needed for that post-processing in Zephyr's benchmarks)

    2. Matches Zephyr 7B's batched accuracy across 13 tasks at 77.7% (the unbatched run with Zephyr gets one more correct, so it's 80%; the DSRU is much more deterministic and receives no accuracy boost from running unbatched). Note that I did prompt engineering on 2-3 of these tasks to help the DSRU. The prompt engineering seemed to have no impact on Zephyr's performance, which I'm assuming is because it's a robust, professionally built LLM rather than a PoC of a new architecture made by a lone amateur researcher

    3. ~19x lower latency than Zephyr 7B
2. Separately trained on entailment tasks: it scored 80% (~2.66x better than chance) on a 3-label text entailment task (entails, contradicts, neutral) and 50% on a 3-label multiple-choice entailment task ('1', '2', '3') - there are notes in the white paper on why the difference

3. At 1.09B, the core model has an inference time of around 1 ms per batch, though this is purely in post-attention latent space. The model has generalization capabilities but lacks the full flexibility of an LLM; in exchange for giving that up, it gains extreme inference speed, determinism, and extremely straightforward training with smooth loss landscapes. I was a bit hesitant to put this out so early - I kept thinking about edge cases and ways I could add just a bit more rigor - but I decided the perfect was the enemy of the good and put together this white paper over a couple of weekends, with some midweek refinements.
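
To make the core idea concrete, here's a minimal sketch of the direct-vector-transformation loop. This is illustrative only: the class name, the embedding function, and the layer sizes are placeholders I'm using for exposition, not the actual DSRU implementation.

    # Minimal, illustrative sketch of single-pass "direct vector transformation"
    # classification. All names and sizes here are placeholders, not the
    # actual DSRU implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DSRUSketch(nn.Module):
        def __init__(self, embed_dim: int = 768, hidden_dim: int = 4096):
            super().__init__()
            # Fixed-depth MLP: one forward pass per example, so inference cost
            # does not grow with output length (no token-by-token decoding).
            self.net = nn.Sequential(
                nn.Linear(2 * embed_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, embed_dim),
            )

        def forward(self, task_vec: torch.Tensor, input_vec: torch.Tensor) -> torch.Tensor:
            # Map (task embedding, input embedding) -> predicted answer vector.
            return self.net(torch.cat([task_vec, input_vec], dim=-1))

    def classify(model, embed, task_prompt: str, text: str, labels: list[str]) -> str:
        # `embed` stands in for any frozen sentence-embedding function that
        # returns an (embed_dim,) tensor. Labels are embedded at call time, so
        # the class vocabulary is hot-swappable per request without retraining.
        task_vec, input_vec = embed(task_prompt), embed(text)
        label_vecs = torch.stack([embed(label) for label in labels])
        pred = model(task_vec, input_vec)
        # Decode by nearest label embedding (cosine similarity) instead of
        # autoregressive generation - this is what keeps inference O(1).
        sims = F.cosine_similarity(pred.unsqueeze(0), label_vecs, dim=-1)
        return labels[sims.argmax().item()]

For the entailment setup above, you'd call something like classify(model, embed, "Does the premise entail the hypothesis?", premise_and_hypothesis, ["entails", "contradicts", "neutral"]) - the label list can change on every call, which is the hot-swappable vocabulary part.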

I'll be releasing a full reference implementation of the training pipeline, one that can run on midrange consumer hardware with default settings, on GitHub in…I'm thinking 4 weeks, probably, depending on how busy I end up being. Doing this with a day job has been...a lot, to say the least.

I’d release it now, but frankly, it’s an embarrassing ball of mud that I hacked together haphazardly while chasing positive signal. Now that I’ve gotten this far, I can implement it more thoughtfully - and try a specific new model architecture that I think will work a lot better for a lot of comparative reasoning tasks.

It is patent pending, but I'm permitting personal experimentation and thesis work without restriction. This includes grad students using it for their degrees! You can share results and discuss your work, but distribution of trained models or derivatives is not permitted. For funded research, institutional use, or anything commercial, usage is not permitted for now.

I hope you all find it interesting!