
EM-LLM: Human-Inspired Episodic Memory for Infinite Context LLMs

https://github.com/em-llm/EM-LLM-model
67•jbotz•4d ago•9 comments

Comments

MacsHeadroom•3d ago
So, infinite context length by making it compute-bound instead of memory-bound. Curious how much longer this takes to run, and when it makes sense to use this vs. RAG.
mountainriver•3d ago
TTT, cannon layers, and Titans seem like stronger approaches IMO.

Information needs to be compressed into latent space or it becomes computationally intractable.

searchguy•16h ago
Do you have references for the following?

> TTT, cannon layers, and Titans

vessenes•4h ago
Is Titans replicated? I feel like lucidrains couldn't replicate it.
logicchains•58m ago
I think something like Titans explains Gemini's excellent long-context performance. That would explain why the Titans team hasn't released the training code or hyperparameters they said in the paper they would release, and why, soon afterwards, it came out that DeepMind would hold off publishing new results for six months to avoid giving away competitive advantages.
p_v_doom•4h ago
Interesting. Before there even was attention, I was thinking that the episodic memory model offers something that could be very useful for neural nets, so it's cool to see people testing that.
killerstorm•3h ago
Note that this works within a single sequence of tokens. It might be consistent with the "episodic memory" metaphor if we consider a particular transformer run as its experience.

But this might be very different from what people expect from "memory", i.e. the ability to learn vast amounts of information and retrieve it as necessary.

This is more like a refinement of transformer attention: instead of running attention over all tokens (which is very expensive as it's quadratic), it selects a subset of token spans and runs fine-grained attention only on those. So it essentially breaks transformer attention into two parts - coarse-grained (k-NN over token spans) and fine-grained (normal).
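To make that two-stage split concrete, here is a rough numpy sketch; the mean-pooled span summaries, shapes, and constants are illustrative assumptions, not the EM-LLM code:

    import numpy as np

    def two_stage_attention(q, keys, values, span_len=64, top_k=4):
        # q: (d,) query; keys/values: (n, d) cached token keys/values.
        n, d = keys.shape
        spans = [(s, min(s + span_len, n)) for s in range(0, n, span_len)]
        # Coarse stage: represent each span by its mean-pooled key (an
        # assumed summary) and keep the top_k spans by dot product with q.
        summaries = np.stack([keys[a:b].mean(axis=0) for a, b in spans])
        top = np.argsort(summaries @ q)[-top_k:]
        # Fine stage: ordinary softmax attention, restricted to the
        # tokens inside the selected spans.
        idx = np.concatenate([np.arange(*spans[i]) for i in sorted(top)])
        scores = keys[idx] @ q / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ values[idx]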

It might be a great thing for long-context situations. But it doesn't make sense when you want millions of different facts to be considered: turning them into one long context is rather inefficient.

yorwba•2h ago
It would be inefficient if you had to do it from scratch for every query. But if you can do it once as a preprocessing step and reuse the prepared context across many queries, it might become more efficient than a shorter context that includes only some documents but has to be reprocessed each time because it's different for every query.
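A rough back-of-envelope of that trade-off, with made-up illustrative costs rather than measurements:

    # Total cost = one-time preprocessing + per-query cost * number of queries.
    def total_cost(prep_once, per_query, num_queries):
        return prep_once + per_query * num_queries

    # Reusable long context: big one-time cost, cheap queries afterwards.
    reuse = total_cost(prep_once=1000.0, per_query=1.0, num_queries=500)
    # Shorter context rebuilt per query: no upfront cost, pricier queries.
    rebuild = total_cost(prep_once=0.0, per_query=10.0, num_queries=500)
    print(reuse, rebuild)  # 1500.0 vs 5000.0: reuse wins at high query counts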
killerstorm•1h ago
Yes, I think it might be a good solution where you have a context of up to 10M tokens and make a lot of requests against that context. It might be relevant for agentic stuff, which tends to produce long chat logs, especially with some gadgets on top; e.g. some 'episodes' might be completely removed as obsolete.

But I don't think it's a good solution for bigger amounts of data, as in that case it's more beneficial if the data can be formed into independent memories.

Databricks and Neon

https://www.databricks.com/blog/databricks-neon
22•davidgomes•54m ago•7 comments

How to Build a Smartwatch: Picking a Chip

https://ericmigi.com/blog/how-to-build-a-smartwatch-picking-a-chip/
73•rcarmo•4h ago•28 comments

UK's Ancient Tree Inventory

https://ati.woodlandtrust.org.uk/
8•thinkingemote•52m ago•4 comments

Writing that changed how I think about programming languages

https://bernsteinbear.com/blog/pl-writing/
151•r4um•6h ago•7 comments

Ash Framework – Model your domain, derive the rest

https://ash-hq.org/
102•lawik•3d ago•32 comments

RPG in a Box

https://rpginabox.com/
176•skibz•4d ago•34 comments

Flattening Rust’s learning curve

https://corrode.dev/blog/flattening-rusts-learning-curve/
279•birdculture•12h ago•200 comments

Type-constrained code generation with language models

https://arxiv.org/abs/2504.09246
207•tough•12h ago•86 comments

Branch Privilege Injection: Exploiting branch predictor race conditions

https://comsec.ethz.ch/research/microarch/branch-privilege-injection/
383•alberto-m•18h ago•160 comments

The recently lost file upload feature in the Nextcloud app for Android

https://nextcloud.com/blog/nextcloud-android-file-upload-issue-google/
72•morsch•5h ago•14 comments

Bus stops here: Shanghai lets riders design their own routes

https://www.sixthtone.com/news/1017072
246•anigbrowl•6h ago•181 comments

Google is building its own DeX: First look at Android's Desktop Mode

https://www.androidauthority.com/android-desktop-mode-leak-3550321/
342•logic_node•20h ago•258 comments

Replicube: A puzzle game about writing code to create shapes

https://store.steampowered.com/app/3401490/Replicube/
73•poetril•9h ago•18 comments

$20K Bounty Offered for Optimizing Rust Code in Rav1d AV1 Decoder

https://www.memorysafety.org/blog/rav1d-perf-bounty/
11•todsacerdoti•2h ago•3 comments

Launch HN: Miyagi (YC W25) turns YouTube videos into online, interactive courses

183•bestwillcui•22h ago•100 comments

Mipmap selection in too much detail

https://pema.dev/2025/05/09/mipmaps-too-much-detail/
66•luu•3d ago•16 comments

Show HN: HelixDB – Open-source vector-graph database for AI applications (Rust)

https://github.com/HelixDB/helix-db/
178•GeorgeCurtis•17h ago•79 comments

Build real-time knowledge graph for documents with LLM

https://cocoindex.io/blogs/knowledge-graph-for-docs/
141•badmonster•15h ago•25 comments

Failed Soviet Venus lander Kosmos 482 crashes to Earth after 53 years in orbit

https://www.space.com/space-exploration/launches-spacecraft/failed-soviet-venus-lander-kosmos-482-crashes-to-earth-after-53-years-in-orbit
151•taubek•3d ago•112 comments

Multiple security issues in GNU Screen

https://www.openwall.com/lists/oss-security/2025/05/12/1
384•st_goliath•23h ago•231 comments

The Internet 1997 – 2021

https://www.opte.org/the-internet
45•smusamashah•1d ago•10 comments

Databricks to Buy Startup Neon for $1B

https://www.wsj.com/articles/databricks-to-buy-startup-neon-for-1-billion-fdded971
8•mj4e•47m ago•2 comments

PDF to Text, a challenging problem

https://www.marginalia.nu/log/a_119_pdf/
298•ingve•20h ago•166 comments

Airbnb is in midlife crisis mode

https://www.wired.com/story/airbnb-is-in-midlife-crisis-mode-reinvention-app-services/
165•thomasjudge•15h ago•322 comments

It Awaits Your Experiments

https://www.rifters.com/crawl/?p=11511
169•pavel_lishin•19h ago•60 comments

Fingers wrinkle the same way every time they’re in the water too long

https://www.binghamton.edu/news/story/5547/do-your-fingers-wrinkle-the-same-way-every-time-youre-in-the-water-too-long-new-research-says-yes
135•gnabgib•11h ago•54 comments

The world could run on older hardware if software optimization was a priority

https://twitter.com/ID_AA_Carmack/status/1922100771392520710
715•turrini•1d ago•658 comments

Using obscure graph theory to solve programming languages problems

https://reasonablypolymorphic.com/blog/solving-lcsa/
72•matt_d•14h ago•13 comments

A tool to verify estimates, II: a flexible proof assistant

https://terrytao.wordpress.com/2025/05/09/a-tool-to-verify-estimates-ii-a-flexible-proof-assistant/
55•jjgreen•4d ago•0 comments