https://arxiv.org/abs/2501.00663
https://arxiv.org/pdf/2504.13173
Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.
the current Meta outlook is embarrassing tbh, the fact that they have the largest trove of social media data on the planet and they can't even produce a decent model is quite a "scary" position
It's not impossible that they assess it as a local maximum / dead end and are evaluating/training something completely different - and if it works, it'll work big time.
Sinking a bazillion dollars into models alone doesn’t get you shit except a gold star for being the valley’s biggest smartypants, because in the product world, model improvements only significantly improve all-purpose chatbots. The whole veg-o-matic “step right up folks— it slices, it dices, it makes julienne fries!” approach to product design almost never yields something focused enough to be an automatic go-to for specific tasks, or simple/reliable enough to be a general-purpose tool for a whole category of tasks. Once the novelty wears off, people largely abandon it for more focused tools that more effectively solve specific problems (e.g. blender, vegetable peeler) or simpler everyday tools that you don’t have to think about as much even if they might not be the most efficient tool for half your tasks (e.g. paring knife). Professionals might have enough need and reason to go for a really great in-between tool (e.g. mandolin) but that’s a different market, and you only tend to get a limited set of prosumers outside of that. Companies more focused on specific products, like coding, will have way more longevity than companies that try to be everything to everyone.
Meta, Google, Microsoft, and even Apple have more pressure to make products that sanely fit into their existing product lines. While that seems like a handicap if you’re looking at it from the “AI company” perspective, I predict the restriction will enforce the discipline to create tools that solve specific problems for people rather than spending exorbitant sums making benchmark go up in pursuit of some nebulous information revolution.
Meta seems to have a much tougher job trying to make tools that people trust them to be good at. Most of the highest-visibility things like the AI Instagram accounts were disasters. Nobody thinks of Meta as a serious, general-purpose business ecosystem, and privacy-wise, I trust them even less than Google and Microsoft: there’s no way I’m trusting them with my work codebases. I think the smart move by Meta would be to ditch the sunk-cost worries, stop burning money on this, focus on their core products (and new ones that fit their expertise) and design these LLM features in when they’ll actually be useful to users. Microsoft and Google both have existing tools that they’ve already bolstered with these features, and have a lot of room within their areas of expertise to develop more.
Who knows— I’m no expert— but I think Meta would be smart to try to opt out as much as possible without making too many waves.
I know, I know, Elon is crazy etc., but the Grok example and the way it's integrated with the core product is actually the only approach I can even come up with tbh (other than the character.ai flavor).
The 2nd-tier winner is Amazon for the same reasons, being able to leverage AI with both Amazon Retail and AWS, where they can sell shovels. I’ve also found their internal Nova models to be pretty good for my projects.
Microsoft will be okay because of Azure and maybe Office if they get their AI story right.
I just don’t see any world where OpenAI comes out ahead from a business standpoint as long as they are sharecroppers on other people’s hardware. ChatGPT alone will never make it worth the trillion dollar capitalization long term unless it becomes a meme stock like Tesla
How noble of Meta, upholding the right moral ethic.
/s
b is mostly not true but c is especially not true. I doubt they do it because it wouldn't work; it's not high quality data.
But it would also obviously leak a lot of personal info, and that really puts you in danger. Meta and Google are able to serve you ads using your personal info /because they don't leak it/.
(Also data privacy laws forbid it anyway, because you can't use personal info for new uses not previously agreed to.)
AI is a bit different.
To wit, it's dangerous to assume the value of this idea based on the lack of public implementations.
If Google is not willing to scale it up, then why would anyone else?
You don't necessarily have to prove it out on large foundation models first. Can it beat out a 32b parameter model, for example?
While they do have lots of money and many people, they don't have infinite money and specifically only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large scale experiment is likely enough to yield a big enough advantage over what's already claiming those resources.
Here is a bit more information about this program: https://www.google.com/about/careers/applications/jobs/resul...
80% of the ecosystem is built on top of companies, groups, and individuals publishing their research openly; not sure why Google would get more credit for this than others...
Recently, my favorite from them was Lumine: https://arxiv.org/abs/2511.08892
Here's their official page: https://seed.bytedance.com/en/research
It's very likely no one is using this architecture at Google for any production workloads. There are a lot of student researchers doing fun proof-of-concept papers; they're allowed to publish because it's good PR and it's good for their careers.
Given the competitive nature of the AI race, it's hard to believe any of these companies are really trying to help the competition.
Most research coming out of big US labs is counter indicative of practical performance. If it worked (too) well in practice, it wouldn't have been published.
Some examples from DeepSeek:
If so, could there perhaps be a step where the LoRA is merged back into the main model?
That would be like sleeping :-)
LoRAs tend to be adapters bolted onto systems by people other than the system designers, and they are low-rank factorizations.
There is nothing low-rank or adapter-like here.
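To make the "low rank" part concrete (toy numbers, my own sketch, nothing from the paper): a LoRA adapter keeps the pretrained weight frozen and only learns a rank-r correction, whereas the Titans memory is a whole MLP whose own weights get rewritten at test time.

```python
import torch

d, r = 1024, 16
W = torch.randn(d, d)           # frozen pretrained weight
A = torch.randn(r, d) * 0.01    # LoRA adapter: two small trainable factors
B = torch.zeros(d, r)           # (B starts at zero so the adapter begins as a no-op)

x = torch.randn(d)
y = x @ W.T + (x @ A.T) @ B.T   # effective weight is W + B @ A, whose correction
                                # can never exceed rank r (16 here), no matter
                                # how much you train A and B
```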
On the one hand, learning on the job could allow better training of what not to be influenced by; on the other hand, an injected prompt could have an even deeper effect on the model long term.
In the previous sections, we first discussed the Continuum Memory System (CMS), which allows for more persistent storage of memories and defines memory as a spectrum of blocks with different frequencies of update. Due to the larger capacity and the constraints on scaling the parameters, CMS often requires a simple learning rule but higher capacity to store more persistent knowledge. On the other hand, in the previous section, we discussed the design of self-modifying Titans, where the model can generate its own keys and learning updates to better adapt to the context. Contrary to CMS, the self-modifying Titans has a small capacity but uses a complex and expressive learning rule. Accordingly, these two systems seem to be complementary, and their combination can enhance the model's expressiveness from different aspects.
To this end, we present the Hope architecture: a neural learning module that incorporates self-modifying Titans followed by a Continuum Memory System.
https://research.google/blog/introducing-nested-learning-a-n...
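If it helps, here's how I picture the CMS part in rough PyTorch-ish pseudocode: a chain of MLP blocks where each block is only written to at its own frequency, fast to slow. All names and schedules here are my own guesses, not from the paper, and the real update rules are more involved; as I read it, the self-modifying Titans then sits in front of this as the fast, expressive end of the spectrum.

```python
import torch
import torch.nn as nn

class CMS(nn.Module):
    def __init__(self, dim, update_every=(1, 4, 16, 64)):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
            for _ in update_every
        )
        self.update_every = update_every

    def forward(self, y):
        # Read path: just run the chunk through the whole chain of memory blocks.
        for block in self.blocks:
            y = block(y)
        return y

    def write(self, step, y, target, lr=1e-2):
        # Write path: block i only takes a gradient step every update_every[i]
        # steps, so the first block tracks the immediate context while the last
        # one only absorbs regularities that persist across many chunks.
        for block, every in zip(self.blocks, self.update_every):
            if step % every == 0:
                loss = (block(y) - target).pow(2).mean()
                grads = torch.autograd.grad(loss, list(block.parameters()))
                with torch.no_grad():
                    for p, g in zip(block.parameters(), grads):
                        p.add_(g, alpha=-lr)

# Toy usage: stream chunks, write each into the spectrum at its own pace.
cms = CMS(dim=64)
for step, chunk in enumerate(torch.randn(100, 64)):
    cms.write(step, chunk, target=chunk)   # self-supervised "remember this" target
```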
That doesn't work for HOPE - a short summary can't explain what it actually does besides "self-modifying" and "continuum memory".
So it seems to be an innovation of Transformers calibre, really big (if true). It's definitely not "transformer but with such-and-such modification".
Gemini came up with the following visual metaphor for the difference:
> Transformer is a series of frozen glass panes (the weights) and a scratchpad (the attention) where it writes notes about the current text.
> The HOPE architecture involves no scratchpad. Instead, the glass panes themselves are made of smart liquid. As the data flows through, the first pane reshapes itself instantly. The second pane reshapes itself slowly. And the mechanism deciding how to reshape them is itself a tiny, intelligent machine, not just a basic math rule.
This comment was illuminating -- and IMHO an excellent example of why it's important to avoid rigid rules against posting any AI-generated content in HN comments. You gained insights by asking Gemini, and shared them, noting the source. Thank you!
So one can break a model by consistently feeding it random, highly improbable junk? Everything would be registered as a surprise and get stored, impacting future interactions.
AI needs an internal emotional state because that's what drives attention and memory. AI needs to want something.
I can see a product where you purchase a model that has basic training, and then, using the features outlined in the paper, it learns on the fly from your usage.
I can also see there being a secondary market for specially trained models, long-term memory filled with some specific skill, done in some specific way. To make a silly example, imagine buying a licence to Torvalds's OS coding assistant, ready to insult your PRs before you even commit them! (And possibly help you write code in Torvalds's style too.)
This would of course require Linus to use the model enough for it to learn; I won't comment on the likelihood of that happening: it's just a silly example after all.
There are probably lots of small signals of "the user is happy with the output", plus the longer the history, the more it will converge on the middle of what you want. Including when the user says "don't do [x]", which overrides past stuff.
Practically, for use with a codebase development effort, if the model remembers the original design decisions and the discussions about costs and benefits, and can recall all of that much later in the process, it's going to start getting really good at thinking about what the next step is, or even at making decisions about when a major refactor is needed, etc.
Small typo: the text “Virtually all successful existing sequence models rely on mean squared error…” appears twice within the same paragraph. Happens to the best of us.
While I have no "AI" title and don't work in the AI industry, I've spent many years thinking about AI concepts, even long before the whole NN/LLM hype started.
Maybe because of that I was always really annoyed that LLMs are called AI, because in my years of thinking about how an actual "human-like" thinking AI might work, the things an LLM does were far below my minimum definition.
But when I stumbled across the Titans paper - while it still is not an "AI" as I would call it - from my POV it's a massive step in the right direction.
Sometimes I consider writing all my ideas/thoughts about AI down in my blog, but then I think nobody would care anyway since I'm not a known figure shrug - so other than to say "look, I wrote it years ago!" there's no actual point in doing so, I guess.
However - I'm looking forward to seeing Titans in action, and I guess it will impress us all.
"The Transformer architecture revolutionized sequence modeling with its introduction of attention"
Attention was developed before transformers.
I've always wanted to read how something like Cursor manages memory. It seems to have developed a long history of all of my prompts and understands both the codebase and what I'm building slightly more over time, causing fewer errors.
P.S. This quote from the paper sounds just like LLM output:
> "This memory module provides significantly higher expressive power, allowing the model to summarize large volumes of information without losing important context. The model isn't simply taking notes; it's understanding and synthesizing the entire story. Crucially, Titans doesn’t just passively store data. It actively learns how to recognize and retain important relationships and conceptual themes that connect tokens across the entire input."
... anyone here familiar with the RPG Eclipse Phase?
In the Greek myths, the titans committed incest and birthed the Olympians; the youngest of the titans castrated his dad and took all power for himself, and then Zeus and the Olympians waged a decade-long war against him, which they won.
(In Eclipse Phase, TITAN - the Total Information Tactical Awareness Network - mulched humanity when it went rogue.)
So if we are viewing this through the needle-in-a-haystack lens: the needle was very surprising to the base model, so going forward, when it sees anything of the same nature, the memory module will not just give you hay but the needle, because it made a special note of it when it went through the haystack 1 million tokens ago, because the needle was surprising.
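If I'm reading the Titans paper right, "surprising" is meant literally: the write strength is driven by the gradient of an associative-memory loss on the current token, roughly like this (my notation, possibly off in the details):

```latex
\ell(M_{t-1}; x_t) = \lVert M_{t-1}(k_t) - v_t \rVert_2^2,
  \qquad k_t = x_t W_K, \; v_t = x_t W_V
% "surprise" accumulated with momentum \eta_t and step size \theta_t:
S_t = \eta_t\, S_{t-1} - \theta_t\, \nabla \ell(M_{t-1}; x_t)
% memory update with a forgetting gate \alpha_t:
M_t = (1 - \alpha_t)\, M_{t-1} + S_t
```

A boring token produces a tiny gradient and barely moves M; the needle produces a huge one and gets written in hard.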
The Transformer's normal attention mechanism is already secretly trying to be a long-term memory system. Every time it writes a new KV pair into the cache, it’s desperately trying to “remember” that token forever.
But it’s doing it in the dumbest possible way: by hoarding an ever-growing pile of raw vectors, then frantically dot-product searching through the pile every single step. It’s like a hoarder who never throws anything away and has to rummage through mountains of junk to find the one receipt they need. Of course it chokes at long contexts.
Titans/MIRAS looks at that mess and says: “Why store memory in a growing garbage pile of vectors? Store it in the weights of a deep neural network instead — and let that network keep training itself in real time, but only on the stuff that actually surprises it.” That’s literally it.
Using the Tim Cook Martian example: The model is cruising through boring financial numbers → attention is doing its normal thing, KV cache is growing, but nothing is really sticking.
Suddenly: “Tim Cook is a Martian.”
Normal attention would just add one more KV pair to the pile and pray it doesn’t get drowned out later.
Titans instead goes: “Holy shit, reconstruction error off the charts → this does NOT fit my current memory at all → massive gradient → actually rewrite huge chunks of the memory MLP’s weights right now so this fact is burned in forever.”
From that moment on, the memory MLP has physically changed its internal wiring. Any future query that even vaguely smells like “Tim Cook” or “Martian” will make the activations explode through the newly rewired paths and spit out a vector screaming “MARTIAN” at the frozen attention layers.
The frozen attention (which is still doing its normal job on the short window) suddenly sees this one extra “virtual token” in its context that is confidently yelling the surprising fact → it attends hard to it → the model answers as if the Martian revelation happened one token ago, even if it was 2 million tokens back.
It looks exactly like a super-attention mechanism that only “primes” or “locks in” the surprising needles and deliberately forgets or ignores the hay. And it is also a way to fine-tune on the fly, permanently, for the current context.
I think…
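Very roughly, in PyTorch-ish pseudocode, the loop I'm imagining looks like this. All names and hyper-parameters are mine, not from the paper, and the real thing is parallelized and gated much more cleverly:

```python
import torch
import torch.nn as nn

class MemoryMLP(nn.Module):
    """Long-term memory: a small MLP whose *weights* are the memory."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))

    def forward(self, k):
        return self.net(k)

def write_step(memory, k_t, v_t, momentum_state, lr=1e-2, momentum=0.9, forget=0.01):
    """One online write: large recall error ("surprise") -> large weight rewrite."""
    loss = (memory(k_t) - v_t).pow(2).sum()                 # how badly memory recalls this token
    grads = torch.autograd.grad(loss, list(memory.parameters()))
    with torch.no_grad():
        for p, g in zip(memory.parameters(), grads):
            s = momentum * momentum_state.get(p, torch.zeros_like(p)) - lr * g
            momentum_state[p] = s                           # "past surprise" carries over
            p.mul_(1 - forget).add_(s)                      # decay old memories, burn in the new one
    return loss.item()                                      # the surprise magnitude

# Toy usage: boring tokens barely move the weights; a "Tim Cook is a Martian"
# token produces a huge loss, hence a big rewrite that persists.
dim, hidden = 64, 256
memory, state = MemoryMLP(dim, hidden), {}
W_K = torch.randn(dim, dim) / dim ** 0.5
W_V = torch.randn(dim, dim) / dim ** 0.5
for x_t in torch.randn(32, dim):                            # stand-in token stream
    surprise = write_step(memory, x_t @ W_K, x_t @ W_V, state)
# At read time, memory(query) is handed to the (frozen) attention as extra
# context - the "virtual token" that still remembers the Martian thing.
```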