Show HN: OpenEvolve – open-source implementation of DeepMind's AlphaEvolve

8•codelion•9mo ago
I've built an open-source implementation of Google DeepMind's AlphaEvolve system called OpenEvolve. It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms through iterative evolution.

Try it out: https://github.com/codelion/openevolve

What is this?

OpenEvolve evolves entire codebases (not just single functions) by leveraging an ensemble of LLMs combined with automated evaluation. It follows the evolutionary approach described in the AlphaEvolve paper but is fully open source and configurable.

I built this because I wanted to experiment with evolutionary code generation and see whether I could replicate DeepMind's results. The original system improved scheduling in Google's data centers and discovered new mathematical algorithms, but DeepMind never released an implementation.

How it works:

The system has four main components that work together in an evolutionary loop:

1. Program Database: Stores programs and their metrics in a MAP-Elites inspired structure

2. Prompt Sampler: Creates context-rich prompts with past solutions

3. LLM Ensemble: Generates code modifications using multiple models

4. Evaluator Pool: Tests programs and provides feedback metrics
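The interplay of those four components can be sketched as a single loop. This is a minimal illustration of the architecture described above, not OpenEvolve's actual API: the names `evolve`, `insert`, and the metric keys are assumptions for the sketch.

```python
import random

# Minimal sketch of the evolutionary loop: database -> prompt ->
# LLM -> evaluator -> back into the database. Illustrative only.

def evolve(initial_program, evaluate, llm_generate, iterations=50):
    # Program database: MAP-Elites style, keyed by a feature
    # descriptor, keeping the best-scoring program per cell.
    database = {}

    def insert(program, metrics):
        cell = metrics["descriptor"]
        if cell not in database or metrics["score"] > database[cell][1]["score"]:
            database[cell] = (program, metrics)

    insert(initial_program, evaluate(initial_program))

    for _ in range(iterations):
        # Prompt sampler: build a context-rich prompt from a past solution.
        parent, metrics = random.choice(list(database.values()))
        prompt = f"Improve this program (score={metrics['score']}):\n{parent}"
        # LLM ensemble: generate a modified candidate.
        child = llm_generate(prompt)
        # Evaluator pool: score the candidate and feed metrics back.
        insert(child, evaluate(child))

    return max(database.values(), key=lambda pm: pm[1]["score"])
```

Keeping one winner per descriptor cell (rather than one global best) is what preserves diverse stepping stones for later generations.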

What you can do with it:

- Run existing examples to see evolution in action

- Define your own problems with custom evaluation functions

- Configure LLM backends (works with any OpenAI-compatible API)

- Use multiple LLMs in ensemble for better results

- Optimize algorithms with multiple objectives
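To define your own problem you supply an evaluator. A sketch of what one might look like, assuming the evaluator receives a path to the candidate program and returns a dict of metric floats (check the repo's examples for the exact signature and metric names OpenEvolve expects; `search` is a hypothetical entry point):

```python
import importlib.util

def evaluate(program_path):
    # Load the candidate program as a module from its file path.
    spec = importlib.util.spec_from_file_location("candidate", program_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    # Score it on the problem; here lower f(x) is better, so negate
    # to turn it into a maximization metric.
    result = module.search()  # assumed entry point in the candidate
    return {"value_score": -result, "combined_score": -result}
```

Any deterministic scoring logic works here, which is what lets the same loop optimize for multiple objectives at once.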

Two examples I've replicated from the AlphaEvolve paper:

- Circle Packing: Evolved from simple geometric patterns to sophisticated mathematical optimization, reaching 99.97% of DeepMind's reported results (2.634 vs 2.635 sum of radii for n=26).

- Function Minimization: Transformed a random search into a complete simulated annealing algorithm with cooling schedules and adaptive step sizes.
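For context, the kind of simulated annealing the evolved program converged on looks roughly like this. This is an illustration of the technique (cooling schedule plus adaptive step size), not the evolved code itself:

```python
import math, random

def anneal(f, x0, steps=5000, t0=1.0, cooling=0.995):
    # Simulated annealing on a 1-D objective f, starting from x0.
    x, best = x0, x0
    temp, step = t0, 1.0
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / temp).
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            x = cand
            if f(x) < f(best):
                best = x
        temp *= cooling                   # cooling schedule
        step = max(0.01, step * 0.999)    # adaptive (shrinking) step size
    return best
```

The interesting part is that the system discovered this structure on its own, starting from plain random search.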

Technical insights:

- Low-latency LLMs are critical for rapid generation cycles

- Best results using Gemini-Flash-2.0-lite + Gemini-Flash-2.0 as the ensemble

- For the circle packing problem, Gemini-Flash-2.0 + Claude-Sonnet-3.7 performed best

- Cerebras AI's API provided the fastest inference speeds

- Two-phase approach (exploration then exploitation) worked best for complex problems

Getting started (takes < 2 minutes):

# Clone and install
git clone https://github.com/codelion/openevolve.git
cd openevolve
pip install -e .

# Run the function minimization example
python openevolve-run.py examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --config examples/function_minimization/config.yaml \
  --iterations 50

All you need is Python 3.9+ and an API key for an LLM service. Configuration is done through simple YAML files.
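A config file might look something like the sketch below. Every field name here is an assumption for illustration; see the example configs in the repo for the actual schema:

```yaml
# Illustrative config sketch, not the actual OpenEvolve schema.
max_iterations: 50
llm:
  api_base: "https://openrouter.ai/api/v1"  # any OpenAI-compatible endpoint
  models:                                   # ensemble with sampling weights
    - name: "gemini-flash-2.0-lite"
      weight: 0.8
    - name: "gemini-flash-2.0"
      weight: 0.2
```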

I'll be around to answer questions and discuss!

Comments

codelion•9mo ago
I actually managed to replicate the new SOTA for circle packing in unit squares as found in the AlphaEvolve paper: 2.635 for 26 circles in a unit square. It took about 800 iterations to find the best program, which itself uses an optimisation phase; running it led to the optimal packing in one of its runs.
helsinki•9mo ago
How many tokens did it take to generate the 800 versions of the code?
codelion•9mo ago
Checked my OpenRouter stats: it took ~3M tokens, but that included quite a few runs of various experiments.