There were two types: selfish ducks that kept food, and altruistic ducks that shared.
In the initial runs, selfish ducks dominated. But when I added a simple memory mechanism that let ducks remember who had helped them, cooperation suddenly became stable when resources were scarce.
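For the curious, here is a minimal sketch of the kind of reciprocity mechanism I mean. The agent structure, payoffs, and thresholds are all illustrative, not the original simulator's code:

```python
import random

def run_generation(ducks, scarcity=0.5, rng=random):
    """One round of foraging and food sharing with reciprocity memory.

    Each duck is a dict: {"id": int, "food": float, "memory": set of ids
    of ducks that shared with it before}. Illustrative sketch only.
    """
    for duck in ducks:
        duck["food"] += rng.random() * (1.0 - scarcity)  # forage

    for giver in ducks:
        if giver["food"] <= 1.0:
            continue  # nothing to spare
        hungry = [d for d in ducks if d["food"] < 0.5 and d is not giver]
        if not hungry:
            continue
        # Reciprocity: prefer ducks remembered as past helpers,
        # otherwise pick a hungry duck at random.
        helpers = [d for d in hungry if d["id"] in giver["memory"]]
        receiver = rng.choice(helpers or hungry)
        transfer = (giver["food"] - 1.0) / 2
        giver["food"] -= transfer
        receiver["food"] += transfer
        receiver["memory"].add(giver["id"])  # remember who helped
    return ducks
```

The point of the memory set is that sharing stops being anonymous: under scarcity, ducks that remember and repay helpers keep each other alive, which is what stabilized cooperation in my runs.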
That small experiment kept bothering me. What would happen if the environment itself became much more complex?
So over the years I kept extending the simulator.
First it became a planetary environment engine: https://coyoteke.com/GameEngineForPlanets/
Then a sandbox for generating different worlds: https://coyoteke.com/GameEngineForPlanets/genWorld.html
Once ecosystems appeared, I needed a way to design digital genomes: https://coyoteke.com/GameEngineForPlanets/genomix/
While running these ecological simulations I started noticing something unexpected: how traits are encoded (plain binary vs. Gray code) significantly changes evolutionary dynamics under high mutation pressure.
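For anyone unfamiliar with why the encoding matters: plain binary has "Hamming cliffs", where neighboring trait values can differ in many bits, while reflected Gray code guarantees adjacent integers differ in exactly one bit, so single-bit mutations move through trait space very differently. A quick illustration (not the paper's actual code):

```python
def to_gray(n: int) -> int:
    """Convert an integer to its reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code back to a plain integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integers."""
    return bin(a ^ b).count("1")

# The 7 -> 8 transition is a classic Hamming cliff in plain binary:
# 0111 -> 1000 flips all four bits, yet the Gray codes differ by one bit.
print(hamming(7, 8))                    # 4
print(hamming(to_gray(7), to_gray(8)))  # 1
```

Under high mutation rates, cliffs like this change which trait values a population can reach by small steps, which is the effect I kept seeing in the ecosystem runs.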
To test that observation, I eventually ran large GPU simulations and wrote the results up as a research paper.
But during that process I realized the more interesting artifact might actually be the simulation environment itself.
So I turned the system into an open benchmark sandbox:
BiomeSyn https://biomesyn.com/
Most AI benchmarks evaluate agents on short, static tasks.
BiomeSyn explores a different question:
Can an agent survive and keep adapting in a continuously evolving world?
Looking back, the whole system slowly evolved from that tiny duck experiment ten years ago.
yangkecoy•1h ago
A few implementation notes people might be curious about:
• The environments simulate evolving ecosystems rather than fixed tasks.
• Genomes currently use integer-encoded traits with configurable mutation operators.
• I ran large GPU experiments while studying how encoding (binary vs Gray) affects evolutionary stability under mutation pressure.
• The system eventually became a sandbox for long-horizon adaptation experiments.
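To make the "configurable mutation operators" bullet more concrete, here is roughly the shape such an interface takes in my code. Treat this as a hypothetical sketch of the idea, not BiomeSyn's actual API; all names here are made up for illustration:

```python
import random
from typing import Callable, List

# A mutation operator maps an integer genome to a new genome.
MutationOp = Callable[[List[int], random.Random], List[int]]

def point_mutation(lo: int, hi: int, rate: float) -> MutationOp:
    """Resample each integer trait uniformly with probability `rate`."""
    def op(genome, rng):
        return [rng.randint(lo, hi) if rng.random() < rate else t
                for t in genome]
    return op

def creep_mutation(step: int, rate: float) -> MutationOp:
    """Nudge each trait up or down by at most `step` with probability `rate`."""
    def op(genome, rng):
        return [t + rng.randint(-step, step) if rng.random() < rate else t
                for t in genome]
    return op

def mutate(genome: List[int], ops: List[MutationOp],
           rng: random.Random) -> List[int]:
    """Apply a configurable pipeline of mutation operators in order."""
    for op in ops:
        genome = op(genome, rng)
    return genome
```

The pipeline design makes mutation pressure a config choice: swapping operators or rates changes the dynamics without touching the ecosystem code.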
I’m particularly interested in feedback on mutation models and how people might plug in learning agents.
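On plugging in learning agents: the shape I have in mind is a gym-style episode loop, where the world keeps evolving between and within episodes. This is a hypothetical interface to show the contract, not BiomeSyn's actual API:

```python
from typing import Any, Protocol

class Agent(Protocol):
    """Hypothetical plug-in contract for a learning agent."""
    def act(self, observation: Any) -> Any: ...
    def learn(self, observation: Any, action: Any, reward: float) -> None: ...

def run_episode(env, agent: Agent, max_steps: int = 1000) -> float:
    """Drive an agent through one episode, returning total reward.

    `env` is assumed to expose reset()/step() in the usual gym style,
    with step() returning (observation, reward, done).
    """
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        agent.learn(obs, action, reward)
        total += reward
        obs = next_obs
        if done:
            break
    return total
```

If something like this matches how you'd want to attach an RL or evolutionary agent (or if you'd rather have population-level hooks), I'd like to hear it.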