
New Prompt Engineering Metaheuristic – (NoA) Network of Agents

https://github.com/andres-ulloa-de-la-torre/NoA
4•scraper01•5mo ago

Comments

scraper01•5mo ago
I've been looking into the idea of "deep thinking" in AI, but it seems reserved for big models with huge compute budgets. I wanted to see if a different approach was possible: trading instantaneous computation for a slower burn.

To explore this, I've been building an open-source research project called Network of Agents (NoA). The goal is to turn a modest laptop (I'm developing on a 32GB RAM machine) into a "solution mining" rig. You can set up a hard problem and, using a local LLM (via Ollama and a quantized Qwen model such as Qwen 30B-A3B), let a society of agents work on it for hours or days, iteratively refining their collective answer.

The Core Idea: Backpropagation with Natural Language

The system is built with LangGraph and is inspired by neural networks. It runs in epochs, with each epoch consisting of a "Forward Pass" and a "Reflection Pass".

1. The Forward Pass (Inference):

• Instead of numerical weights, the network's "weights" are the natural-language system prompts of its agents.

• The process starts by procedurally generating a multi-layered network of agents. The first layer gets cognitive diversity from MBTI archetypes and "seed verbs" related to the user's problem.

• Subsequent "hidden" layers are built by having an agent-analyst chain create a "hard request" designed to challenge the previous layer, then spawning a new agent specialized for that challenge.

• Information flows through the network layer by layer, with the combined JSON outputs of one layer broadcast as input to all agents in the next.

2. The Reflection Pass (Learning):

This is where I've tried to simulate backpropagation.

• Critique as the "loss function": After the final layer's outputs are synthesized into a single solution, a critique_agent assesses it against the original problem and generates a constructive critique.

• Propagating the "gradient": This critique is the error signal, and it's propagated backward through the network. An agent in layer N-1 receives a targeted critique based on its contribution to the final answer generated by layer N.

• The "optimizer" meta-prompt: At each step of the backward pass, an update_agent_prompts_node uses the incoming critique as the main input to a meta-prompt. This meta-prompt's job is to completely rewrite and evolve the receiving agent's system prompt (its skills, attributes, and even its career) to better address the critique.
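The two passes can be sketched, very roughly, like this. This is an illustrative sketch, not the actual NoA code: llm() is a stub standing in for a local model call, and the agent records, prompt templates, and function names here are hypothetical, chosen only to show the control flow of a single epoch.

```python
import json

def llm(system_prompt: str, user_input: str) -> str:
    """Stub for a local LLM call (e.g. via Ollama); returns a canned response."""
    return f"[{system_prompt[:20]}...] response to: {user_input[:30]}"

def forward_pass(layers, problem):
    """Run each layer in order; broadcast a layer's combined JSON output
    to every agent in the next layer."""
    layer_inputs = problem
    outputs_per_layer = []
    for layer in layers:
        outputs = {agent["name"]: llm(agent["prompt"], layer_inputs)
                   for agent in layer}
        outputs_per_layer.append(outputs)
        layer_inputs = json.dumps(outputs)  # combined JSON fed downstream
    return outputs_per_layer

def reflection_pass(layers, critique):
    """Propagate the critique backward, rewriting each agent's system
    prompt via a meta-prompt (the 'optimizer' step)."""
    signal = critique
    for layer in reversed(layers):
        for agent in layer:
            # Meta-prompt: evolve the agent's instructions to address the critique.
            agent["prompt"] = llm(
                "Rewrite this agent's system prompt to address the critique.",
                f"critique: {signal}\ncurrent prompt: {agent['prompt']}",
            )
        # A real implementation would derive a targeted critique per agent;
        # here we just thread a summary signal upstream.
        signal = f"upstream critique derived from: {signal[:40]}"

# A tiny two-layer network: one archetype-seeded agent, one "hard request" specialist.
layers = [
    [{"name": "a0", "prompt": "You are an INTJ strategist; seed verb: optimize."}],
    [{"name": "b0", "prompt": "You specialize in the hard request from layer 1."}],
]
outs = forward_pass(layers, "Design a low-power sensor network.")
reflection_pass(layers, "The answer ignores power budgets.")
```

One epoch is just forward_pass followed by reflection_pass; repeating the pair for hours or days is what turns the laptop into a "solution mining" rig.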

The entire network learns and adapts its own instructions, not through a central controller, but through a distributed process of peer-to-peer challenge.

The Long-Term Vision: A New Kind of Training Data

This is the part that I find most exciting. Every run of this system produces a complete, structured trace of a multi-agent collaborative process: the initial agent personas, the layer-by-layer reasoning (CoT traces), the critiques, and the evolution of each agent's prompts across epochs. This is a new kind of dataset, one that captures the dynamics of reasoning, not just static information. My long-term, ambitious goal is to use this data to train a "World Language Model": a model trained not just on text, but on the fundamental patterns of collaboration, error correction, and social intelligence.

This is an early-stage research project. The code is available for anyone to run, and the immediate roadmap includes dynamic memory for small models, P2P networking for distributed mining, and better visualization.

I'd love to get this community's feedback. What do you think of this approach? Is the analogy to backpropagation sound? How would you improve the meta-prompts that drive the evolution? Thanks for reading.
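For concreteness, one epoch's trace might be captured as a record along these lines. The field names are hypothetical, sketched only from the description above, not NoA's actual schema:

```python
import json

# Hypothetical shape of one epoch's trace record: personas, per-layer
# reasoning, the critique, and each agent's prompt evolution.
trace_record = {
    "epoch": 0,
    "personas": [
        {"agent": "a0", "archetype": "INTJ", "seed_verb": "optimize"},
    ],
    "reasoning": [
        {"layer": 0, "agent": "a0", "cot": "First, bound the power budget."},
    ],
    "critique": "The answer ignores power budgets.",
    "prompt_updates": [
        {"agent": "a0",
         "before": "You are an INTJ strategist; seed verb: optimize.",
         "after": "You are a power-aware strategist."},
    ],
}

# Serialized per-epoch records like this would form the training corpus.
serialized = json.dumps(trace_record, indent=2)
```

A corpus of such records is what would let a downstream model learn from the dynamics of the collaboration rather than only from the final answers.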