
Show HN: OpenEvolve – open-source implementation of DeepMind's AlphaEvolve

8•codelion•8mo ago
I've built an open-source implementation of Google DeepMind's AlphaEvolve system called OpenEvolve. It's an evolutionary coding agent that uses LLMs to discover and optimize algorithms through iterative evolution.

Try it out: https://github.com/codelion/openevolve

What is this?

OpenEvolve evolves entire codebases (not just single functions) by leveraging an ensemble of LLMs combined with automated evaluation. It follows the evolutionary approach described in the AlphaEvolve paper but is fully open source and configurable.

I built this because I wanted to experiment with evolutionary code generation and see if I could replicate DeepMind's results. The original system successfully improved Google's data centers and found new mathematical algorithms, but no implementation was released.

How it works:

The system has four main components that work together in an evolutionary loop:

1. Program Database: Stores programs and their metrics in a MAP-Elites inspired structure

2. Prompt Sampler: Creates context-rich prompts with past solutions

3. LLM Ensemble: Generates code modifications using multiple models

4. Evaluator Pool: Tests programs and provides feedback metrics
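The four components above can be sketched as a toy loop (names and interfaces here are illustrative, not OpenEvolve's actual API):

```python
def evolve(initial_program, evaluate, llm_generate, iterations=50):
    """Toy evolutionary loop: a database of (program, score) pairs,
    a prompt built from past solutions, an LLM proposing a mutation,
    and an evaluator scoring it. Illustrative only."""
    database = [(initial_program, evaluate(initial_program))]
    for _ in range(iterations):
        # Prompt sampler: pick a strong parent from the archive
        parent, parent_score = max(database, key=lambda p: p[1])
        prompt = f"Improve this program (score {parent_score}):\n{parent}"
        # LLM ensemble: generate a candidate modification
        child = llm_generate(prompt)
        # Evaluator pool: score the candidate and store it
        database.append((child, evaluate(child)))
    return max(database, key=lambda p: p[1])
```

The real system replaces the single `max` parent pick with MAP-Elites-style sampling across a feature grid, which preserves diverse solutions instead of collapsing onto one lineage.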

What you can do with it:

- Run existing examples to see evolution in action

- Define your own problems with custom evaluation functions

- Configure LLM backends (works with any OpenAI-compatible API)

- Use multiple LLMs in ensemble for better results

- Optimize algorithms with multiple objectives

Two examples I've replicated from the AlphaEvolve paper:

- Circle Packing: Evolved from simple geometric patterns to sophisticated mathematical optimization, reaching 99.97% of DeepMind's reported results (2.634 vs 2.635 sum of radii for n=26).

- Function Minimization: Transformed a random search into a complete simulated annealing algorithm with cooling schedules and adaptive step sizes.
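For reference, the structure the function-minimization run converged on looks roughly like generic simulated annealing (this is a textbook sketch, not the evolved program itself):

```python
import math
import random

def simulated_annealing(f, x0, iterations=10_000, t0=1.0, cooling=0.999):
    """Generic simulated annealing: random steps, worse moves accepted
    with probability exp(-delta/T), geometric cooling schedule."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    temp = t0
    for _ in range(iterations):
        step = random.gauss(0.0, max(temp, 1e-3))  # step size shrinks with temp
        candidate = x + step
        fc = f(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / max(temp, 1e-12)):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling  # cooling schedule
    return best_x, best_fx
```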

Technical insights:

- Low latency LLMs are critical for rapid generation cycles

- Best results using Gemini-Flash-2.0-lite + Gemini-Flash-2.0 as the ensemble

- For the circle packing problem, Gemini-Flash-2.0 + Claude-Sonnet-3.7 performed best

- Cerebras AI's API provided the fastest inference speeds

- Two-phase approach (exploration then exploitation) worked best for complex problems
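The two-phase observation could be realized with a schedule like this (an illustrative heuristic; the thresholds and setting names are hypothetical, not OpenEvolve's actual mechanism):

```python
def phase_settings(iteration, total, split=0.5):
    """Illustrative two-phase schedule: early iterations explore
    (high sampling temperature, random parents), later iterations
    exploit (low temperature, best parents). Values are hypothetical."""
    if iteration < split * total:
        return {"temperature": 1.0, "parent_selection": "uniform_random"}
    return {"temperature": 0.3, "parent_selection": "top_k"}
```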

Getting started (takes < 2 minutes)

# Clone and install
git clone https://github.com/codelion/openevolve.git
cd openevolve
pip install -e .

# Run the function minimization example
python openevolve-run.py \
  examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --config examples/function_minimization/config.yaml \
  --iterations 50

All you need is Python 3.9+ and an API key for an LLM service. Configuration is done through simple YAML files.
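A config could look roughly like this (field names here are illustrative; check examples/function_minimization/config.yaml in the repo for the real schema):

```yaml
# Hypothetical config sketch -- field names are illustrative.
max_iterations: 50
llm:
  api_base: https://api.openai.com/v1   # any OpenAI-compatible endpoint
  models:
    - name: gemini-flash-2.0-lite
      weight: 0.8
    - name: gemini-flash-2.0
      weight: 0.2
```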

I'll be around to answer questions and discuss!

Comments

codelion•8mo ago
I actually managed to replicate the new SOTA for circle packing in unit squares from the AlphaEvolve paper - 2.635 for 26 circles in a unit square. It took about 800 iterations to find the best program, which itself uses an optimisation phase; running it led to the optimal packing in one of its runs.
helsinki•8mo ago
How many tokens did it take to generate the 800 versions of the code?
codelion•8mo ago
I checked my OpenRouter stats: it took ~3M tokens, but that included quite a few runs of various experiments.