frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•53s ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•1m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•6m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•9m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•11m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•12m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•13m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•14m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•14m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•15m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•15m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•19m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•22m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•22m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•28m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•28m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•29m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•32m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•35m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•35m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•35m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•35m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•37m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•39m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•41m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•43m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•44m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•44m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•47m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•50m ago•1 comment

Graph Continuous Thought Machines

1•Sai-dewa•6mo ago
We propose a method by which the connections between a graph continuous thought machine's dispositional nodes may be designed to be faithful to a human brain. A graph continuous thought machine (GCTM) replaces the synapse- and neuron-level models with a graph CNN. In some sense, the nodes of the graph at any one time represent the instantiation of the nodes of the dispositional neural model it is part of, instantiating only those nodes that are currently firing. The GCNN then outputs the next graph as the system searches graph space for solutions, guided by learnt property vectors. The outputs from its neural synchronization matrix then modulate the attention given to inputs as well as to the nodes of the dispositional network. In this way it designs the dispositional neural model's connections (the disposition for particular graphs to follow others).

We then employ neural training modules: spiking neural networks whose nodes are mapped to keys on a musical keyboard. When exposed to the state of a teacher system, the nodes are trained to harmonize musically; when exposed to the state of the untrained agent, they are dissonant. The agent then tries to maximise consonance in the spiking network by using it as a reward signal. By this method the agent is trained to perform like the teacher system. We also introduce text-conditioned neural training modules, which condition the input on text. We show a method to modulate not just the behavior of the system, but the connectivity of the dispositional network of a GCTM. https://www.researchgate.net/publication/392733228_Text_Conditioned_Self_Architecture_Search_for_Building_Brain_Like_Connectivity_by_Describing_It
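The consonance-as-reward idea can be sketched in a few lines. Everything here is an illustrative assumption, not from the paper: the node-to-pitch mapping, the interval table, and the function name are all hypothetical, and "consonance" is reduced to a lookup over pairwise semitone intervals.

```python
import numpy as np

# Hypothetical mapping: each node in the training module corresponds to a
# key on a musical keyboard (a MIDI pitch). Nodes that fire together sound
# their notes together; the reward favours consonant intervals.
CONSONANT_INTERVALS = {0, 3, 4, 5, 7, 8, 9}  # unison, thirds, fourth,
                                             # fifth, sixths (in semitones, mod 12)

def consonance_reward(spikes: np.ndarray, pitches: np.ndarray) -> float:
    """Score the currently firing nodes by pairwise interval consonance.

    spikes  : boolean vector, True where a node fired this step
    pitches : MIDI pitch assigned to each node
    """
    sounding = pitches[spikes]
    if len(sounding) < 2:
        return 0.0  # a single note is neither consonant nor dissonant
    score, pairs = 0.0, 0
    for i in range(len(sounding)):
        for j in range(i + 1, len(sounding)):
            interval = abs(int(sounding[i]) - int(sounding[j])) % 12
            score += 1.0 if interval in CONSONANT_INTERVALS else -1.0
            pairs += 1
    return score / pairs

pitches = np.array([60, 64, 67, 61])          # C4, E4, G4, C#4
spikes = np.array([True, True, True, False])  # C major triad firing
print(consonance_reward(spikes, pitches))     # all pairs consonant -> 1.0
```

An RL agent would then receive this score as its reward each step, so that matching the teacher's state (which drives the module toward harmony) is what gets reinforced.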

Comments

Sai-dewa•6mo ago
I have a paper on graph continuous thought machines, which replace the synapse model and the neuron model with a graph convolutional network.

The GCNN outputs the next graph in the thought process, guided by learnt property vectors.
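One thought step could look like the following minimal sketch, assuming a plain normalised-adjacency graph convolution and a top-k selection against a learnt property vector; the layer shape, scoring rule, and all names are my assumptions, not the paper's:

```python
import numpy as np

# Sketch of one "thought step": a graph convolution over the currently
# instantiated nodes, then scoring each node's hidden state against a learnt
# property vector to decide which nodes fire in the next graph.

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Single graph-convolution layer: normalised neighbourhood mixing + ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # row-normalise by degree
    return np.maximum((adj_hat / deg) @ feats @ weights, 0.0)

def next_graph(adj, feats, weights, property_vec, k=2):
    """Return indices of the nodes instantiated at the next step."""
    hidden = gcn_layer(adj, feats, weights)
    scores = hidden @ property_vec            # affinity with the learnt property
    return np.argsort(scores)[-k:]            # top-k scoring nodes fire next

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
feats = rng.normal(size=(4, 8))
weights = rng.normal(size=(8, 8))
prop = rng.normal(size=8)
print(next_graph(adj, feats, weights, prop))
```

Iterating `next_graph` would trace out a path through graph space, which is the "search guided by learnt property vectors" described above.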

What's interesting is that the synchronization matrix regulates the attention given to the nodes as well as the input.

So these nodes may be seen as neurons in their own right, and consecutive graphs have connections between them that send virtual signals and cause them to spike.
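A toy version of the synchronization-matrix-to-attention step: here I assume (hypothetically) that "synchronization" is the correlation of node activation traces, and that attention is a softmax over how synchronized each node is with the rest.

```python
import numpy as np

# Illustrative sketch: build a synchronization matrix from node activation
# histories, then turn it into an attention distribution over the nodes.

def attention_from_sync(history: np.ndarray) -> np.ndarray:
    """history: (timesteps, nodes) activation traces.

    Nodes that fire in step with many others accumulate a higher
    synchronization drive and therefore receive more attention.
    """
    sync = np.corrcoef(history.T)      # (nodes, nodes) synchronization matrix
    drive = sync.sum(axis=1)           # overall synchrony of each node
    exp = np.exp(drive - drive.max())  # softmax -> attention distribution
    return exp / exp.sum()

history = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [1, 0, 1],
                    [0, 1, 0]], dtype=float)
attn = attention_from_sync(history)
print(attn, attn.sum())
```

The same weights could gate the input features as well as the dispositional nodes, which is the dual role described in the comment.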

The nodes and potential nodes exist in a dispositional neural network, and only the nodes that are currently activated are instantiated in the GCNN.
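Mechanically, instantiating only the active nodes amounts to extracting a subgraph of the full dispositional network. A minimal sketch, with all names hypothetical:

```python
import numpy as np

# The dispositional network holds all potential nodes; only the currently
# firing subset is instantiated as the GCNN's working graph each step.

def instantiate(disp_adj: np.ndarray, active: np.ndarray):
    """Extract the subgraph over the currently active nodes.

    disp_adj : (N, N) adjacency of the full dispositional network
    active   : indices of nodes firing at this step
    """
    sub = disp_adj[np.ix_(active, active)]  # rows/cols of active nodes only
    return active, sub  # returned ids map subgraph rows back to dispositional ids

disp = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], float)
ids, sub = instantiate(disp, np.array([0, 2, 3]))
print(sub)
```

Keeping the id mapping around is what lets attention computed on the small working graph be written back to the right dispositional nodes.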

So as the outputs from the synchronization matrix modulate attention, a subset of the attended dispositional neurons will represent memory.

Other parts of the dispositional network, and parts of the input, represent keys that index the next presentation of memory.

In fact, only the prefrontal cortex dispositional nodes will contribute to the synchronization matrix.

So the PFC performs read and write operations to memory this way.

Sai-dewa•6mo ago
So the actual connections between dispositional neurons change as the property vectors are learnt.