Show HN: OpenEvolve – open-source implementation of DeepMind's AlphaEvolve

8•codelion•11mo ago
I've built an open-source implementation of Google DeepMind's AlphaEvolve system called OpenEvolve. It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms through iterative evolution.

Try it out: https://github.com/codelion/openevolve

What is this?

OpenEvolve evolves entire codebases (not just single functions) by leveraging an ensemble of LLMs combined with automated evaluation. It follows the evolutionary approach described in the AlphaEvolve paper but is fully open source and configurable.

I built this because I wanted to experiment with evolutionary code generation and see if I could replicate DeepMind's results. The original system successfully improved Google's data centers and found new mathematical algorithms, but no implementation was released.

How it works:

The system has four main components that work together in an evolutionary loop:

1. Program Database: Stores programs and their metrics in a MAP-Elites inspired structure

2. Prompt Sampler: Creates context-rich prompts with past solutions

3. LLM Ensemble: Generates code modifications using multiple models

4. Evaluator Pool: Tests programs and provides feedback metrics
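
A minimal sketch of how these four components could fit together in one evolutionary loop. This is an illustration of the MAP-Elites-style pattern described above, not OpenEvolve's actual API; names like `ProgramDatabase`, `evolve`, `mutate`, and `describe` are placeholders:

```python
import random

class ProgramDatabase:
    """MAP-Elites-inspired store: keep the best program per feature cell."""
    def __init__(self):
        self.cells = {}  # feature descriptor -> (score, program)

    def add(self, program, score, descriptor):
        best = self.cells.get(descriptor)
        if best is None or score > best[0]:
            self.cells[descriptor] = (score, program)

    def sample(self):
        return random.choice(list(self.cells.values()))[1]

def evolve(initial_program, evaluate, mutate, describe, iterations=50):
    db = ProgramDatabase()
    db.add(initial_program, evaluate(initial_program), describe(initial_program))
    for _ in range(iterations):
        parent = db.sample()                   # Prompt Sampler: pick past solution as context
        child = mutate(parent)                 # LLM Ensemble: propose a modification
        score = evaluate(child)                # Evaluator Pool: measure it
        db.add(child, score, describe(child))  # keep it only if it improves its cell
    return max(db.cells.values())[1]           # best program across all cells
```

The MAP-Elites twist is that the database is keyed by a behavioral descriptor rather than score alone, so diverse approaches survive even when they are not yet the global best.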

What you can do with it:

- Run existing examples to see evolution in action

- Define your own problems with custom evaluation functions

- Configure LLM backends (works with any OpenAI-compatible API)

- Use multiple LLMs in ensemble for better results

- Optimize algorithms with multiple objectives
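
For the "define your own problems" point, a custom evaluation function might take this shape: load the candidate program, run it against fixed test cases, and return metrics. This is a hypothetical sketch of the pattern, with an assumed entry point named `solve`; check the repository's examples for the exact interface OpenEvolve expects:

```python
# evaluator.py -- hypothetical shape of a custom evaluation function.
import importlib.util

def evaluate(program_path):
    """Load the candidate program from disk and return a dict of metrics.

    Higher scores should mean better programs.
    """
    spec = importlib.util.spec_from_file_location("candidate", program_path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
        # Run the evolved entry point (assumed name: solve) on a fixed input.
        result = module.solve([5, 3, 8, 1])
        correct = result == [1, 3, 5, 8]
        return {"score": 1.0 if correct else 0.0}
    except Exception:
        return {"score": 0.0}  # broken candidates get the worst score
```

Catching all exceptions matters here: evolution routinely produces candidates that crash, and those must score poorly rather than halt the run.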

Two examples I've replicated from the AlphaEvolve paper:

- Circle Packing: Evolved from simple geometric patterns to sophisticated mathematical optimization, reaching 99.97% of DeepMind's reported results (2.634 vs 2.635 sum of radii for n=26).

- Function Minimization: Transformed a random search into a complete simulated annealing algorithm with cooling schedules and adaptive step sizes.
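
The kind of program the second example converged on, simulated annealing with a cooling schedule and an adaptive step size, can be sketched generically like this (an illustration of the technique, not the evolved code itself):

```python
import math
import random

def simulated_annealing(f, x0, iters=5000, t0=1.0, cooling=0.995):
    """Minimize f starting from x0 with a geometric cooling schedule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(iters):
        step = temp  # adaptive step size: moves shrink as the system cools
        candidate = x + random.uniform(-step, step)
        fc = f(candidate)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta / temp).
        if fc < fx or random.random() < math.exp((fx - fc) / max(temp, 1e-12)):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling  # cooling schedule
    return best, fbest
```

The contrast with the starting point is the whole story: random search samples blindly, while this accepts occasional uphill moves early (high temperature) and becomes greedy late, which is what the cooling schedule buys.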

Technical insights:

- Low latency LLMs are critical for rapid generation cycles

- Best results using Gemini-Flash-2.0-lite + Gemini-Flash-2.0 as the ensemble

- For the circle packing problem, Gemini-Flash-2.0 + Claude-Sonnet-3.7 performed best

- Cerebras AI's API provided the fastest inference speeds

- Two-phase approach (exploration then exploitation) worked best for complex problems
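
The two-phase idea in the last point can be sketched as a simple parent-selection schedule that samples broadly early and greedily later (illustrative only; OpenEvolve's actual configuration may express this differently):

```python
import random

def pick_parent(population, iteration, total_iters, split=0.5):
    """Explore early, exploit late.

    `population` is a list of (score, program) pairs. During the first
    `split` fraction of the run, any elite may seed the next mutation;
    afterwards, only the current best is refined.
    """
    if iteration < split * total_iters:
        return random.choice(population)[1]  # exploration: uniform over elites
    return max(population)[1]                # exploitation: refine the best
```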

Getting started (takes < 2 minutes):

# Clone and install
git clone https://github.com/codelion/openevolve.git
cd openevolve
pip install -e .

# Run the function minimization example
python openevolve-run.py examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --config examples/function_minimization/config.yaml \
  --iterations 50

All you need is Python 3.9+ and an API key for an LLM service. Configuration is done through simple YAML files.

I'll be around to answer questions and discuss!

Comments

codelion•11mo ago
I actually managed to replicate the new SOTA for circle packing in unit squares from the AlphaEvolve paper: 2.635 for 26 circles in a unit square. It took about 800 iterations to find the best program, which itself uses an optimisation phase, and running it led to the optimal packing in one of its runs.
helsinki•11mo ago
How many tokens did it take to generate the 800 versions of the code?
codelion•11mo ago
Checked my OpenRouter stats: it took ~3M tokens, but that included quite a few runs of various experiments.