
Google demonstrates 'verifiable quantum advantage' with their Willow processor

https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/
106•AbhishekParmar•1h ago•57 comments

Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

https://www.botanica.software/blog/cryptographic-issues-in-cloudflares-circl-fourq-implementation
80•botanica_labs•2h ago•19 comments

Linux Capabilities Revisited

https://dfir.ch/posts/linux_capabilities/
76•Harvesterify•2h ago•12 comments

Designing software for things that rot

https://drobinin.com/posts/designing-software-for-things-that-rot/
73•valzevul•18h ago•8 comments

MinIO stops distributing free Docker images

https://github.com/minio/minio/issues/21647#issuecomment-3418675115
446•LexSiga•10h ago•268 comments

AI assistants misrepresent news content 45% of the time

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
201•sohkamyung•2h ago•154 comments

The security paradox of local LLMs

https://quesma.com/blog/local-llms-security-paradox/
50•jakozaur•3h ago•36 comments

SourceFS: A 2h+ Android build becomes a 15m task with a virtual filesystem

https://www.source.dev/journal/sourcefs
48•cdesai•3h ago•16 comments

Die shots of as many CPUs and other interesting chips as possible

https://commons.wikimedia.org/wiki/User:Birdman86
132•uticus•4d ago•26 comments

Internet's biggest annoyance: Cookie laws should target browsers, not websites

https://nednex.com/en/the-internets-biggest-annoyance-why-cookie-laws-should-target-browsers-not-...
337•SweetSoftPillow•4h ago•396 comments

French ex-president Sarkozy begins jail sentence

https://www.bbc.com/news/articles/cvgkm2j0xelo
265•begueradj•10h ago•345 comments

Go subtleties

https://harrisoncramer.me/15-go-sublteties-you-may-not-already-know/
150•darccio•1w ago•104 comments

Tesla Recalls Almost 13,000 EVs over Risk of Battery Power Loss

https://www.bloomberg.com/news/articles/2025-10-22/tesla-recalls-almost-13-000-evs-over-risk-of-b...
136•zerosizedweasle•4h ago•115 comments

Infracost (YC W21) Hiring First Dev Advocate to Shift FinOps Left

https://www.ycombinator.com/companies/infracost/jobs/NzwUQ7c-senior-developer-advocate
1•akh•4h ago

Patina: a Rust implementation of UEFI firmware

https://github.com/OpenDevicePartnership/patina
66•hasheddan•1w ago•12 comments

Farming Hard Drives (2012)

https://www.backblaze.com/blog/backblaze_drive_farming/
12•floriangosse•6d ago•3 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
121•zdw•12h ago•51 comments

Show HN: Cadence – A Guitar Theory App

https://cadenceguitar.com/
135•apizon•1w ago•29 comments

The Dragon Hatchling: The missing link between the transformer and brain models

https://arxiv.org/abs/2509.26507
111•thatxliner•3h ago•65 comments

Greg Newby, CEO of Project Gutenberg Literary Archive Foundation, has died

https://www.pgdp.net/wiki/In_Memoriam/gbnewby
354•ron_k•7h ago•59 comments

Cigarette-smuggling balloons force closure of Lithuanian airport

https://www.theguardian.com/world/2025/oct/22/cigarette-smuggling-balloons-force-closure-vilnius-...
49•n1b0m•3h ago•17 comments

Sequoia COO quit over Shaun Maguire's comments about Mamdani

https://www.ft.com/content/8e6de299-3eb6-4ba9-8037-266c55c02170
15•amrrs•51m ago•12 comments

Knocker, a knock based access control system for your homelab

https://github.com/FarisZR/knocker
49•xlmnxp•7h ago•76 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
446•tamnd•1d ago•275 comments

Ghostly swamp will-o'-the-wisps may be explained by science

https://www.snexplores.org/article/swamp-gas-methane-will-o-wisp-chemistry
23•WaitWaitWha•1w ago•10 comments

Distributed Ray-Tracing

https://www.4rknova.com//blog/2019/02/24/distributed-raytracing
21•ibobev•5d ago•7 comments

Starcloud

https://blogs.nvidia.com/blog/starcloud/
129•jonbaer•5h ago•172 comments

Power over Ethernet (PoE) basics and beyond

https://www.edn.com/poe-basics-and-beyond-what-every-engineer-should-know/
217•voxadam•6d ago•170 comments

rlsw – Raylib software OpenGL renderer in less than 5k LOC

https://github.com/raysan5/raylib/blob/master/src/external/rlsw.h
228•fschuett•19h ago•87 comments

Ask HN: Our AWS account got compromised after their outage

364•kinj28•1d ago•87 comments

The Dragon Hatchling: The missing link between the transformer and brain models

https://arxiv.org/abs/2509.26507
111•thatxliner•3h ago

Comments

bob1029•2h ago
The nature of the abstract is making me hesitate to go any further on this one. It doesn't even seem to fit within arxiv's web layout.
CaptainOfCoit•2h ago
Judging science based on the layout of a webpage feels less than ideal :/ The PDF seems to render just fine.
bob1029•1h ago
This doesn't change the fact that the PDF contains a ~440 word abstract. It comes off as a defensive marketing pitch when it's this long.
batuhandumani•1h ago
You're truly judging the book by its cover, but I have to give credit where it's due: the abstract is very long.
oofbey•1h ago
It’s a clear signal the paper is gonna be hard to read. It takes a ton of work to compress complex ideas down to 8 pages for a conference paper. But that work makes it easier to understand. This paper did not do that work. In fact it seems they did the opposite: they wrote a LONG paper as if length shows how much originality they have.
moffkalast•2h ago
Another day, another neuromorphic AI group still trying to make aeroplanes with flapping wings. This time it'll surely work, they've attached a crank to the jet engine to drive their flapping apparatus.
bobbyprograms•2h ago
Birds safely do VTOL. Humans still haven’t figured that out yet (helicopters are so dangerous).

I think it is necessary that we try all approaches, because LLMs as we know them take too much energy.

toxik•2h ago
Uh, this is a strange thing to ask, but have you seen birds fly? It is most certainly not vertical take-off (or landing).
deepanwadhwa•2h ago
It doesn't sound strange at all. Good question.
falcor84•1h ago
Just regarding VTOL, we absolutely have figured out how to do it with quadcopter drones safely handling weights comparable to the largest birds, and it seems to me that scaling these up to carry humans will be relatively straightforward [0].

[0] https://en.wikipedia.org/wiki/Passenger_drone

fxwin•2h ago
I haven't read through the entire thing yet, but the long abstract, combined with the way the acronym BDH is introduced (what does the B stand for?) and the very "flowery" name (neither "dragon" nor "hatchling" appears again past page 2), is rather off-putting.

- It seems strange to make use of the term "scale-free" and then defer a definition until halfway through the paper (in fact, the term is mentioned 3 times after, and 14 times before, said definition)

- This might just be CS people doing CS things, but the notation in the paper is awful: Claims/Observations end with a QED-symbol (for example on pages 29 and 30) but without a proof

- They make strong claims about performance and scaling ("It exhibits Transformer-like scaling laws") but the only (I think?) benchmark is a translation task comparison with <1B models, which is ~2 orders of magnitude smaller than SOTA

mwigdahl•2h ago
The B stands for "Baby". Baby Dragon Hatchling is their model name.
fxwin•2h ago
Seems like this should be in the paper! Thanks though
halfdeadcat•1h ago
It's a 'dragon hatchling' because it is 'scale-free'.
fxwin•43m ago
Hah, that's pretty clever if it's true :D
PaulRobinson•2h ago
[Not a specialist, just a keen armchair fan of this sort of work]

> In addition to being a graph model, BDH admits a GPU-friendly formulation.

I remember about two years ago people spotting that if you pushed a lot of weights through a sigmoid and reduced the floats down to -1, 0 or 1, you barely lost any performance from a lot of LLM models, but suddenly opened up the ability to use multi-core CPUs, which are obviously a lot cheaper and more power efficient. And yet, nothing seems to have moved forward there.
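
For concreteness, here's a minimal sketch of that kind of ternary ("-1, 0, +1") weight quantization, assuming the scheme being recalled is something like BitNet b1.58's absmean rounding (the function names are mine):

    import numpy as np

    def ternarize(w, eps=1e-8):
        """Quantize a float weight matrix to {-1, 0, +1} plus a single scale."""
        scale = np.abs(w).mean() + eps               # per-tensor absmean scale
        w_q = np.clip(np.round(w / scale), -1, 1)    # round, then clamp to {-1, 0, +1}
        return w_q.astype(np.int8), scale

    def ternary_matmul(x, w_q, scale):
        """Matmul with ternary weights: only adds/subtracts, plus one rescale."""
        return (x @ w_q) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)
    x = rng.normal(size=(8, 64)).astype(np.float32)
    w_q, s = ternarize(w)
    print(np.abs(ternary_matmul(x, w_q, s) - x @ w).mean())  # rough quantization error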

I'd love to see new approaches that explicitly don't "admit a GPU-friendly formulation", but still move the SOTA forward. Has anyone seen anything even getting close, anywhere?

> It exhibits Transformer-like scaling laws: empirically BDH rivals GPT2 performance on language and translation tasks, at the same number of parameters (10M to 1B), for the same training data.

That is disappointing. It needs to do better, in some dimension, to get investment, and I do think alternative approaches are needed now.

From the paper though there are some encouraging side benefits to this approach:

> [...] a desirable form of locality: important data is located just next to the sites at which it is being processed. This minimizes communication, and eliminates the most painful of all bottlenecks for reasoning models during inference: memory-to-core bandwidth.

> Faster model iteration. During training and inference alike, BDH-GPU provides insight into parameter and state spaces of the model which allows for easy and direct evaluation of model health and performance [...]

> Direct explainability of model state. Elements of state of BDH-GPU are directly localized at neuron pairs, allowing for a micro-interpretation of the hidden state of the model. [...]

> New opportunities for ‘model surgery’. The BDH-GPU architecture is, in principle, amenable to direct composability of model weights in a way resemblant of composability of programs [...]

These, to my pretty "lay" eyes, look like attractive features to have. The question I have is whether the existing transformer-based approach is now "too big to fail" in the eyes of people who make the investment calls, and whether this will get the work it needs to take it from GPT2 performance to GPT5+.

sdenton4•2h ago
/I'd love to see new approaches that explicitly don't "admit a GPU-friendly formulation", but still move the SOTA forward. Has anyone seen anything even getting close, anywhere?/

The speedup from using a GPU over a CPU is around 100x, as a rule of thumb. And there's been an immense amount of work maximizing throughput when training on a pile of GPUs together... And a SOTA model will still take a long time to train. So even if you do have a non-GPU algo which is better, it'll take you a very, very long time to train it - by which point the best GPU algos will have also improved substantially.
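
To put rough numbers on it: a run that takes, say, 30 days on a GPU cluster becomes roughly 8 years at a 100x slowdown, so a CPU-friendly method would have to be dramatically more compute-efficient just to break even.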

gcr•2h ago
a cursory sniff test of the abstract reveals greater-than-trace presence of bullshit

I would trust this paper far more if it didn’t trip my “crank” radar so aggressively

Red flags from me:

- “Biologically-inspired,” claiming that this method works just like the brain and therefore should inherit the brain’s capabilities

- Calling their method “B. Dragon Hatchling” without explaining why, or what the B stands for, and not mentioning this again past page 2

- Saying all activations are “sparse and positive”? I get why being sparse could be desirable, but why is all positive desirable?

These are stylistic critiques and not substantive. All of these things could be “stressed grad student under intense pressure to get a tech report out the door” syndrome. But a simpler explanation is that this paper just lacks insight

empiko•2h ago
What about the fact that it has 45 pages with exactly one comparison with transformer architecture (Figure 7)? This is just a fluff piece for a company trying to raise funding.
daemonologist•2h ago
I suppose all positive saves you a bit per weight, sort of, and potentially some circuitry to deal with negative numbers.
AmazingTurtle•2h ago
Your quotes (“ and ”) and apostrophes (’) make me think this was written by AI, as a sane human would just use " or '
fxwin•2h ago
Could just be non-English autocorrect or keyboard layout
gcr•1h ago
pardon, i use those for literals/proper nouns. it's a "typing quirk," LLMs wouldn't output with that style
cootsnuck•51m ago
Some people draft stuff in different places, different devices, or whatever and then copy paste, just a heads up.
terminalshort•1h ago
> claiming that this method works just like the brain and therefore should inherit the brain’s capabilities

Your words, not the author's. They did not make this claim.

ZeroCool2u•2h ago
This is one of the first papers in the neuromorphic vein that I think may hold up. It would be amazing if it did, due to the following properties:

- Linear (transformer) complexity at training time

- Linear scaling with number of tokens

- Online learning (!!!)

The main point that made me cautiously optimistic:

- Empirical results on par with GPT-2

I think this is one of those ideas that needs to be tested with scaled up experiments sooner rather than later, but someone with budget needs to commit. Would love to see HuggingFace do a collab and throw a bit of $$$ at it with a hardware sponsor like Nvidia.

deviation•2h ago
I guarantee if there's even a 0.1% chance of this architecture eventually outperforming traditional ones, then Zuckerberg et al are already eating the cost and have teams spinning up experiments doing just that.
ZeroCool2u•2h ago
Absolutely agreed, but we may not even hear about it, as Meta has made it clear they're not necessarily committed to an open-source-first policy at this point.
nickpsecurity•56m ago
That's not true. The AI industry appears to play a game of follow-the-leader, copying other companies and major researchers. There are all kinds of good ideas we never see applied by big companies. So, it's not safe to assume they tried them all and they didn't work.

In fact, we've sometimes seen new companies show up with models based on research big companies didn't use; the new models turn out to be useful or better in some way, and people use them or big companies acquire them. I'd say that's proof big companies miss a lot of good ideas internally.

kouteiheika•2h ago
> Performance of BDH-GPU and GPTXL versus model size on a translation task. [...] On the other hand, GPTXL [...] required Dropout [...] The model architecture follows GPT2

I love it when a new architecture comes out, but come on, it's 2025, can we please stop comparing fancy new architectures to the antiquated GPT2? This makes the comparison practically, well, useless. Please pick something more modern! Even the at-this-point ubiquitous Llama would be a lot better. I don't want to have to spend days of my time doing my own benchmarks to see how it actually compares to a modern transformer (and realistically, I've been burned so many times now that I've just stopped bothering).

Modern LLMs are very similar to GPT2, but those architectural tweaks do matter and can make a big difference. For example, take a look at the NanoGPT speedrun[1] and see how much training speedup they got just by tweaking the architecture.

Honestly, everyone who publishes a paper in this space should read [2]. The post talks about optimizers, but it's just as relevant to new architectures. Here's the relevant quote:

> With billions of dollars being spent on neural network training by an industry hungry for ways to reduce that cost, we can infer that the fault lies with the research community rather than the potential adopters. That is, something is going wrong with the research. Upon close inspection of individual papers, one finds that the most common culprit is bad baselines [...]

> I would like to note that the publication of new methods which claim huge improvements but fail to replicate / live up to the hype is not a victimless crime, because it wastes the time, money, and morale of a large number of individual researchers and small labs who run and are disappointed by failed attempts to replicate and build on such methods every day.

Sure, a brand new architecture is most likely not going to compare favorably to a state-of-art transformer. That is fine! But at least it will make the comparison actually useful.

[1] -- https://github.com/KellerJordan/modded-nanogpt

[2] -- https://kellerjordan.github.io/posts/muon/#discussion-solvin...

jacobgorm•1h ago
How would you actually get funding for that, if not by demonstrating that it works on smaller models first?
lackoftactics•2h ago
The authors seem to have good credentials and I found the repo with code for this paper.

https://github.com/pathwaycom/bdh

There isn't a ton of code, and there are a lot of comments in my native language, so at least that is novel to me.

wigster•2h ago
but what about those tiny brain tubes discovered last week? are they in the model? ;-)
CaptainOfCoit•2h ago
> It exhibits Transformer-like scaling laws: we find empirically that BDH rivals GPT2-architecture Transformer performance on language and translation tasks, at the same number of parameters (10M to 1B), for the same training data.

I'm assuming they're using "rivals GPT2-architecture" instead of "surpasses" or "exceeds" because they got close, but didn't manage to create something better. Is that a fair assessment?

ACCount37•1h ago
Pretty much.

Everyone and their dog says "transformer LLMs are flawed", but words are cheap - and in practice, no one seems to have come up with something that's radically better.

Sidegrades yes, domain specific improvements yes, better performance across the board? Haha no. For how simple autoregressive transformers seem, they sure set a high bar.

alyxya•2h ago
I tried understanding the gist of the paper, and I’m not really convinced there’s anything meaningful here. It just looks like a variation of the transformer architecture inspired by biology, with no real innovation or demonstrated results.

> BDH is designed for interpretability. Activation vectors of BDH are sparse and positive.

This looks like the main tradeoff of this idea. Sparse and positive activations make me think the architecture has lower capacity than standard transformers. While having an architecture be more easily interpretable is a good thing, it seems to come at a significant cost to performance and capacity, since transformers use superposition to represent features in activations spanning a larger space. Also, I suspect sparse autoencoders already make transformers just as interpretable as BDH.
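
For anyone wondering what "sparse and positive" means in practice, here's a toy illustration of mine (not from the paper), using ReLU for positivity and a top-k mask for sparsity:

    import numpy as np

    def sparse_positive(h, k=4):
        """Return activations that are >= 0 with at most k nonzero entries."""
        h = np.maximum(h, 0.0)          # positivity via ReLU
        out = h.copy()
        out[np.argsort(h)[:-k]] = 0.0   # zero everything outside the k largest
        return out

    rng = np.random.default_rng(0)
    print(sparse_positive(rng.normal(size=16), k=4))

The concern above is that forcing activations into this non-negative, mostly-zero regime leaves less room for superposition than a dense, signed activation vector would.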

jimbo808•2h ago
There isn't. The title is totally clickbait.

Anything "brain-like" that fits into one single paper is bullshit.

ljlolel•1h ago
A real scientist wouldn’t use an imprecise term like “brain-like”
astroflection•1h ago
The actual paper's title: "The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain"

Don't berate the authors for the HN submitter's carelessness.

jimbo808•1h ago
The actual title is just as clickbaity
nickpsecurity•58m ago
That last line isn't true. To be brain-like, it only needs to imitate one thing in the brain. That thing is usually tested in isolation against observed results in human brains. Then, people will combine multiple brain-inspired components in various ways.

That's standard in computational neuroscience. Our standard should simply be whether they are imitating an actual structure or technique in the brain. They usually mention which one. If they don't, it's probably a nonsense comparison to get more views or funding.

jimbo808•38m ago
I am genuinely baffled by this reply. Every single sentence you've typed is complete and utter nonsense. I'm going to bookmark this as a great example of the Dunning-Kruger effect in the wild.

Just to illustrate the absurdity of your point: I could claim, using your standard, that a fresh pile of cow dung is brain-like because it imitates the warmth and moistness of a brain.

busssard•1h ago
This is like the beginning or the end of the Crypto Bubble: publish a whitepaper for the next model architecture and hope that uninformed people with money blow up your company's... I mean blow up the economy... I mean blow, ahh whatever, you know
raincole•1h ago
> the end of the Crypto Bubble

BTC literally hit an all-time high this month, FYI.

cootsnuck•57m ago
What's your point?

House prices are at all-time highs too. That doesn't mean the housing bubble never happened.

AnimalMuppet•52m ago
Means it's not the end of the crypto bubble.

Unless you're going to claim that previous large drops in crypto were perhaps bubbles, but this time it's real...

jimbo808•22m ago
Prices have risen by orders of magnitude, untethered to any measurable fundamentals, then crashed, multiple times. I'm not sure what other definition of bubble you're operating with...
jimbo808•21m ago
Remember the dotcom bubble? There are still websites, by the way. Doesn't mean it wasn't a bubble.
oofbey•1h ago
Attention mechanisms are wonderfully interpretable as is. You can literally see which tokens each token is attending to. People don’t bother much these days. But that’s not a strong selling point.
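
For example, a quick sketch of pulling those attention maps out of an off-the-shelf GPT2 via the Hugging Face transformers library (averaging over heads in the last layer is just one arbitrary way to look at it):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    inputs = tok("The dragon hatchling spread its tiny wings", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)

    # out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
    attn = out.attentions[-1][0].mean(dim=0)   # last layer, averaged over heads
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    for i, tkn in enumerate(tokens):
        j = int(attn[i].argmax())
        print(f"{tkn!r} attends most strongly to {tokens[j]!r}")
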
abdibrokhim•2h ago
attention is all you need btw
recitedropper•2h ago
Repo seems legit, and some of the ideas are pretty novel. As always though, we'll have to see how it scales. A lot of interesting architectures have failed the GPT3+ scale test.

As a sidenote--does anyone really think human-like intelligence on silica is a good idea? Assuming it comes with consciousness, which I think is fair to presume, brain-like AI seems to me like a technology that shouldn't be made.

This isn't a doomer position (that human-like AI would bring about the apocalypse). It is one of empathy: at this point in time, our species isn't mature enough to have the ability to spin up conscious beings so readily. I mean look how we treat each other--we can't even treat beings we know to be conscious with kindness and compassion. Mix our immaturity with a newfound ability to create digital life and it'll be the greatest ethical disaster of all time.

It feels like researchers in the space think there is glory to be found in figuring out human-like intelligence on silicon. That glory has even attracted big names outside the space (see John Carmack), under the presumption that the technology is a huge lever for good and likely to bring eternal fame.

I honestly think it is a safer bet that, given how we aren't ready for such technology, the person / team who would go on to actually crack brain-like AI would be remembered closer to Hitler than to Einstein.

lr4444lr•2h ago
> I mean look how we treat each other--we can't even treat beings we know to be conscious with kindness and compassion. Mix our immaturity with a newfound ability to create digital life and it'll be the greatest ethical disaster of all time.

Or maybe if we had artificial life to abuse, it would be a sufficient outlet for our destructive and selfish impulses so that we would do less of it to genuine life. Maybe it's just an extension of sport contests that scratch that tribal itch to compete and win. There are no easy answers to these questions.

recitedropper•1h ago
In this thought experiment, I am considering artificial life genuine. I would agree that there could be productive outlets for our selfish impulses if there was something that mimicked their targets without consciousness to experience the externalities of such impulses.

That said, I think probably the best path would just be to build and foster technologies that help our species mature, so if one day we do get the ability to spin-up conscious beings artificially, it can be done in a manner that adds more beauty rather than despair to our universe.

ACCount37•1h ago
We have no clue what "consciousness" even is, let alone what the prerequisites are. Our best guesses are just that. Guesses. Guesswork based on information so sparse that astronomers in ancient Greece might have had a better time guessing what the stars truly are.

For all we know, an ICE in a 2001 Toyota truck is conscious too - just completely inhuman in its consciousness.

Nonetheless, here we are - building humanlike intelligence. Because it's useful. Having machines that think like humans do is very useful. LLMs are a breakthrough in that already - they implement a lot of humanlike thinking on a completely inhuman substrate.

recitedropper•1h ago
For the record, I'm agnostic about whether or not consciousness is possible on silica. I think it is pretty safe to say, though, that it is likely an emergent property of specifically configured complex systems, and humanlike intelligence on silica is certainly something that might qualify.

I don't think appealing to whether or not inanimate objects may be conscious is sufficient to discount that we are toying with a different beast in machine learning. And if we were to discover that inanimate objects are in fact conscious, that would be an even greater reason to reconfigure our society and world around compassion.

I agree that LLMs are a great breakthrough, and I think there are many reasons to doubt consciousness there. But I would suggest we rest on our laurels for a bit, and see what we can get out of LLMs, rather than push to create something that is closer to mimicking humans because it might be more useful. From the evil perspective of pure utility, slaves are quite useful as well.

raducu•1h ago
> human-like intelligence on silica is a good idea.

The famous Chinese Room Translator -- silica is irrelevant; you could probably implement an LLM-like algorithm with pen and paper. Do you still think the paper could suffer or be "conscious"?

recitedropper•1h ago
I am sympathetic to arguments against consciousness being computational. It's definitely strange to imagine an algorithm played out on trillions of abacuses being conscious.

That said, I don't think it is a sufficient appeal to entirely discount the possibility that the right process implemented on silicon could in fact be conscious in the same way we are. I'm open to whether or not it is possible--I don't have a vested interest in the space--but silica seems to be a medium that could possibly hold the level of complexity needed for something like consciousness to emerge.

So this is to say that I agree with you that consciousness likely requires substrate-specific embodiment, but I'm open to silica being a possible substrate. I certainly don't think it can be discounted at this point in time, and I'd suggest that we don't risk a digital holocaust on the bet that it can't.

varjag•46m ago
Suffering isn't necessary outside evolutionary pressures. But if a bouillon of animal proteins could be conscious why not.
cootsnuck•45m ago
Yea, actual "human-like" consciousness would be an ethical nightmare. Any sane company should not be legitimately pursuing this.

My most generous interpretation of Anthropic's flirting with it is they too think it would be a nightmare and are hyper-vigilant. (My more realistic interpretation is that it's just some mix of a Frankenstein complex and hype-boosting.)

recitedropper•19m ago
I hope your generous interpretation is right... I can't really tell what's going on with Anthropic's theater either. They definitely seem like they are vigilant of bad outcomes, going as far as to publish their own economic index trying to monitor how AI is affecting labor markets.

That said, the cynic in me thinks they give lip service to these things while pushing fully ahead into the unknown on the presumption of glory and the possibility of abundance. A bunch of the leadership are EAs who subscribe to a kind of superintelligence eschatology that goes as far as taking a shot at their own immortality. Given that, I think they act on the assumption that AGI is a necessity, and they'd rather take the risks on everyone's behalf than just not create the technology in the first place.

Their recent flirting with money from the Gulf states is a pretty concerning signal that they are more concerned with their own goals than with ethics.

vatsachakrvthy•1h ago
Great work by the authors!

But... as Karpathy stated on the Dwarkesh podcast, why do we need brain-inspired anything? As Chollet says, a transformer is basically a tool for differentiable program synthesis. Maybe the animal brain is one way to get intelligence, but it might actually be easier to achieve in silico, given the faster computation and higher reproducibility of calculations.

jacobgorm•1h ago
I read through the first 20+ pages 1.5 weeks ago and found it quite inspiring. I tried submitting it here, but it did not catch on at the time. I watched the podcast interview with the founder, who seems very smart, but that made me realize that not everything described in the paper has been released as open source, which was a bit disappointing.
neom•46m ago
This the podcast you watched? https://www.youtube.com/watch?v=mfV44-mtg7c
pyeri•30m ago
I've just stepped into LLMs, PyTorch, transformers, etc. on my learning path, so I don't know much about advanced AI concepts yet. But I somehow feel that scale alone isn't going to solve the AGI problem; there is something fundamental about the nature of intelligence itself that we don't know yet, and cracking that will lead to the unleashing of true AGI.
badmonster•19m ago
How does BDH handle long-range dependencies compared to Transformers, given its locally interacting neuron particles? Does the scale-free topology implicitly support efficient global information propagation?