Show HN: OpenEvolve – open-source implementation of DeepMind's AlphaEvolve

8•codelion•11mo ago
I've built an open-source implementation of Google DeepMind's AlphaEvolve system called OpenEvolve. It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms through iterative evolution.

Try it out: https://github.com/codelion/openevolve

What is this?

OpenEvolve evolves entire codebases (not just single functions) by leveraging an ensemble of LLMs combined with automated evaluation. It follows the evolutionary approach described in the AlphaEvolve paper but is fully open source and configurable.

I built this because I wanted to experiment with evolutionary code generation and see if I could replicate DeepMind's results. The original system successfully improved Google's data centers and found new mathematical algorithms, but no implementation was released.

How it works:

The system has four main components that work together in an evolutionary loop:

1. Program Database: Stores programs and their metrics in a MAP-Elites inspired structure

2. Prompt Sampler: Creates context-rich prompts with past solutions

3. LLM Ensemble: Generates code modifications using multiple models

4. Evaluator Pool: Tests programs and provides feedback metrics
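To make the loop concrete, here is a toy sketch of how these four pieces fit together. All names and the binning scheme are illustrative stand-ins, not OpenEvolve's actual API; a real MAP-Elites archive would bin on behavioral features rather than fitness.

```python
import random

def evolve(initial_program, evaluate, mutate, iterations=100, bins=10):
    """Toy MAP-Elites-style evolutionary loop (illustrative only)."""
    # Program database: maps a bin to the best (program, score) seen there
    database = {}

    def add(program):
        score = evaluate(program)                 # Evaluator: score the candidate
        cell = min(int(score * bins), bins - 1)   # Bin by fitness (a stand-in feature)
        if cell not in database or score > database[cell][1]:
            database[cell] = (program, score)     # Keep the per-cell elite
        return score

    add(initial_program)
    for _ in range(iterations):
        # Prompt sampler stand-in: pick a past solution to build context from
        parent, _ = random.choice(list(database.values()))
        child = mutate(parent)                    # LLM ensemble stand-in
        add(child)
    return max(database.values(), key=lambda ps: ps[1])
```

In the real system, `mutate` is an LLM call that rewrites code given a context-rich prompt, and `evaluate` runs the program against the user's metrics.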

What you can do with it:

- Run existing examples to see evolution in action

- Define your own problems with custom evaluation functions

- Configure LLM backends (works with any OpenAI-compatible API)

- Use multiple LLMs in ensemble for better results

- Optimize algorithms with multiple objectives

Two examples I've replicated from the AlphaEvolve paper:

- Circle Packing: Evolved from simple geometric patterns to sophisticated mathematical optimization, reaching 99.97% of DeepMind's reported results (2.634 vs 2.635 sum of radii for n=26).

- Function Minimization: Transformed a random search into a complete simulated annealing algorithm with cooling schedules and adaptive step sizes.
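For intuition, what the system converged on in the second example looks roughly like textbook simulated annealing — a generic sketch of that algorithm (not the evolved code itself) with a cooling schedule and an adaptive step size:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000):
    """Minimize f starting from x0 with a geometric cooling schedule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    step = 1.0
    for _ in range(steps):
        cand = x + random.uniform(-step, step)    # propose a nearby candidate
        fc = f(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                              # cool the temperature
        step = max(0.01, step * cooling)          # shrink the step size over time
    return best, fbest
```

The interesting part is that evolution discovered this structure (acceptance criterion, cooling, adaptive steps) starting from plain random search.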

Technical insights:

- Low-latency LLMs are critical for rapid generation cycles

- Best results using Gemini-Flash-2.0-lite + Gemini-Flash-2.0 as the ensemble

- For the circle packing problem, Gemini-Flash-2.0 + Claude-Sonnet-3.7 performed best

- Cerebras AI's API provided the fastest inference speeds

- Two-phase approach (exploration then exploitation) worked best for complex problems

Getting started (takes < 2 minutes)

# Clone and install
git clone https://github.com/codelion/openevolve.git
cd openevolve
pip install -e .

# Run the function minimization example
python openevolve-run.py examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --config examples/function_minimization/config.yaml \
  --iterations 50

All you need is Python 3.9+ and an API key for an LLM service. Configuration is done through simple YAML files.
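As a sketch of what such a YAML file might contain — every key name below is purely illustrative; consult the example configs in the repo for the real schema:

```yaml
# config.yaml -- illustrative sketch only, not the actual OpenEvolve schema
max_iterations: 50
llm:
  # Ensemble of models, served by any OpenAI-compatible API
  models:
    - "gemini-2.0-flash-lite"
    - "gemini-2.0-flash"
  api_base: "https://openrouter.ai/api/v1"
evaluator:
  timeout_seconds: 60
```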

I'll be around to answer questions and discuss!

Comments

codelion•11mo ago
I actually managed to replicate the new SOTA for circle packing in unit squares found in the AlphaEvolve paper - 2.635 for 26 circles in a unit square. It took about 800 iterations to find the best program, which itself uses an optimisation phase, and running it led to the optimal packing in one of its runs.
helsinki•11mo ago
How many tokens did it take to generate the 800 versions of the code?
codelion•11mo ago
Checked my openrouter stats, it took ~3M tokens but that involved quite a few runs of various experiments.