frontpage.


Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•3m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•4m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
1•zdw•4m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
1•bookofjoe•4m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•5m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•6m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•7m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•7m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
1•funnycoding•7m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•8m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•8m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•9m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•9m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•10m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•14m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•14m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•15m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•15m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•17m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•17m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•18m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•18m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•19m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•19m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•19m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•20m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•22m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•22m ago•1 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•28m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•29m ago•0 comments

Continuous Autoregressive Language Models

https://arxiv.org/abs/2510.27688
115•Anon84•3mo ago

Comments

mentalgear•2mo ago
Very interesting. Also I find these training parameters quite elegant:

- Diversity: This term encourages the model to generate a diverse set of samples, preventing mode collapse.
- Fidelity: This term rewards the model for making predictions that are close to the ground truth.

I'm wondering if a continuous next-vector generative approach also increases the innate "reasoning" capabilities of the model, since it could potentially capture more of the semantics of the data than tokens alone.
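Those two terms read like a sample-based scoring objective: pull each sample toward the ground truth while pushing samples apart from each other. A rough numpy sketch of that idea, where the exact distance, weighting, and shapes are assumptions for illustration rather than the paper's definition:

```python
import numpy as np

def energy_loss(samples, target):
    """Sample-based loss with a fidelity term (mean distance of each
    sample to the ground-truth vector) and a diversity term (mean
    pairwise distance among samples, discouraging mode collapse).
    Shapes: samples (n, d), target (d,). Lower is better."""
    n = samples.shape[0]
    # Fidelity: average distance from each sample to the target.
    fidelity = np.mean(np.linalg.norm(samples - target, axis=1))
    # Diversity: mean distance over distinct ordered sample pairs.
    diffs = samples[:, None, :] - samples[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    diversity = pairwise.sum() / (n * (n - 1))
    # Subtracting diversity rewards spread; the 0.5 weight is a guess.
    return fidelity - 0.5 * diversity
```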

barrenko•2mo ago
And maybe even more amenable to some sorts of RL fine-tuning?
mike_hearn•2mo ago
They say this technique isn't yet compatible with RL because you can't adjust the logits. So no GRPO, I guess, which is going to be the biggest issue. An LLM with no RL applied isn't going to be that useful.
suddenlybananas•2mo ago
The technique of compressing tokens down reminds me a bit of byte latent transformers
killerstorm•2mo ago
Would be interesting to combine it with Reasoning in the Latent Space: feed the vector from the output layer of the transformer back to the input.

Obviously, you can't do it in pre-training. But you can add it later as an optional 'extra' vector, I think. E.g. `input_embedding + MLP(prev_output) * alpha`. Alpha is zero during pre-training.
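A minimal numpy sketch of that alpha-gated feedback path; the MLP shape, sizes, and initialisation are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # hypothetical model width

# Hypothetical two-layer MLP projecting the previous output vector.
W1 = rng.normal(size=(d_model, d_model)) * 0.02
W2 = rng.normal(size=(d_model, d_model)) * 0.02

def mlp(x):
    hidden = np.maximum(W1 @ x, 0.0)  # ReLU
    return W2 @ hidden

def next_input(input_embedding, prev_output, alpha):
    # With alpha = 0 this reduces to the plain input embedding, so
    # pre-training is untouched; alpha can be ramped up later when
    # the extra feedback path is trained in.
    return input_embedding + alpha * mlp(prev_output)
```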

vessenes•2mo ago
I like this plan, but don't you already have this from the input vector in the prompt, at least if the inference is "chunk-wise": generating a latent-space vector, decoding it, outputting it, then doing the next one?

What if you trained a separate thinking phase using the auto encoder, though? Might be more efficient, and then you've got it using neuralese internally.

Actually, reading the (summary) paper - they tried your idea and had trouble with it for a different reason:

   > Once the generative head predicts the next vector, a natural next step would be to feed it directly as input to the Transformer for predicting the one after it. However, we found that the model struggles to unpack the semantic information from such a compact representation. Instead, we ground the autoregressive process back in the more structured discrete space, where the predicted vector is passed through the autoencoder to reconstruct the K tokens.
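The decode-then-re-encode loop that passage describes can be sketched with toy stand-ins; every function body here is a placeholder invented to make the control flow runnable, not one of the paper's components:

```python
# Toy stand-ins (pure assumptions, just to make the loop executable):
def predict_next_vector(tokens):
    # Pretend "latent vector"; in the real model this would be the
    # generative head's continuous prediction.
    return len(tokens)

def autoencoder_decode(z, K):
    # Pretend decoder mapping one vector back to K discrete tokens.
    return [z + i for i in range(K)]

def generate(prompt_tokens, n_chunks, K=4):
    """Chunk-wise loop: predict a vector, decode it to K tokens, and
    condition the next step on those tokens (the discrete grounding
    the quoted passage describes) rather than on the raw vector."""
    tokens = list(prompt_tokens)
    for _ in range(n_chunks):
        z = predict_next_vector(tokens)
        tokens.extend(autoencoder_decode(z, K))
    return tokens
```
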
Gormanu•2mo ago
If this works, we’re looking at the next structural shift in LLMs — and all the “bigger model = better” business might finally face a serious challenger. But — and you knew there’d be a “but” — if the reconstruction fails in edge-cases, or the continuous space hides weird failure modes, then this could backfire and produce models that look efficient but feel brittle.

Still — props to the team for going after the real root of inefficiency, not just piling on more layers. If nothing else, this is one to watch if you care about scaling models smarter.

notrealyme123•2mo ago
Congratulations to the authors, but dammit, there goes a good idea ^^
vatsachak•2mo ago
K being fixed here seems like it will eventually be done away with

When I'm thinking about math proofs, sometimes I can have a single idea which can be unfolded into a hundred lines of proof

Maybe I'm getting the wrong analogy here, but if vectors = ideas then K should depend on the vector

mike_hearn•2mo ago
If they can reinvent RL so it works with this, then I guess the big labs will be all over it, as roughly halving inference costs would be huge (especially if Ed Zitron's leaked OpenAI inference costs are accurate). Potentially the difference between inference being profitable and loss-making. It's an elegant approach.

I also wonder how far they can push K if other aspects are tweaked. The approach of just doubling the parameter each time leaves a lot of space between the chosen value and the next value known not to work.