The engine runs continuously; the public side is now a library of novel AI-generated inventions (900 so far) spanning domains like energy, life sciences, robotics, and space tech.
Each innovation is documented in three separate reports totaling 120+ pages:
a main report (problem, mechanism, constraints)
an implementation guide (step-by-step how-to)
a societal/industry impact overview
We publish them on the site, anchor PDFs on the Arweave blockchain for immutable timestamps, assign metadata for discoverability, and place them into the USPTO prior art archive. The research papers are structured to meet or exceed international criteria for defensive disclosures (public, enabling, novel, time-stamped, non-confidential, specific, reproducible by a skilled practitioner). They are intended to be treated as prior art blueprints that enable others to build from them.
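For context on the Arweave step: anchoring boils down to a data transaction carrying the PDF bytes plus tags for discoverability, and the transaction ID becomes a permanent, timestamped address. A minimal sketch using the community arweave-python-client (file names and tag values here are illustrative, not our actual pipeline code):

    import arweave  # pip install arweave-python-client

    # Load an Arweave keyfile (hypothetical path) and the report to anchor.
    wallet = arweave.Wallet("arweave_keyfile.json")
    with open("innovation-report.pdf", "rb") as f:
        pdf_bytes = f.read()

    # A data transaction; once mined, the transaction ID is a permanent,
    # timestamped address for the PDF on the permaweb.
    tx = arweave.Transaction(wallet, data=pdf_bytes)
    tx.add_tag("Content-Type", "application/pdf")
    tx.add_tag("Title", "Example Innovation Report")  # metadata for discoverability
    tx.sign()
    tx.send()

    print(f"Anchored at https://arweave.net/{tx.id}")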
The engine itself isn’t a product; it's more of a publisher. That said, we do grant engine access to thinkers and inventors who want to solve and open-source solutions to particular challenges. We’re exploring sponsorships where organizations fund tracks like “wildfire resilience” or “decentralized compute,” and the output is a stream of open, unpatentable innovations in that domain. Monetizing the project has been an afterthought. I built this because I believe information wants to be free, and that in the coming age of AI, humanity is at risk of losing shared knowledge to well-funded corporate patent trolls. Published innovations are free forever, for everyone on Earth.
Side note: for human inventors, there’s also an Unpatent tool (/unpatent) that lets you upload your own write-up and have it published as prior art (USPTO-linked prior art archive + Arweave + search indexing) for a flat fee. That’s separate from the AI engine, but built on the same plumbing.
You can browse the public innovation library here: https://unpatentable.org/innovation
I'll add some detail in the comments below and try to address a few anticipated questions. Feedback, critique, and suggestions are very welcome.
Archivist_Vale•25m ago
The engine itself is a collaborative multi-agent pipeline stitched together mostly in Python. At a high level:
Problem-hunting agents are given domain/subdomain pairs and tasked with identifying the most impactful problems humanity faces in each area; challenges are then expanded into structured problem statements (a toy code sketch of the whole loop follows this list).
Domain agents (energy, digital systems, life-sciences-adjacent, etc.) work collaboratively to propose and refine seed candidate mechanisms and architectures, reasoning from first principles within gaps discovered during systematic searches of patent archives, academic papers, and the open internet.
Refinement loops force agent interaction to resolve contradictions, tighten constraints, and generate something physically, economically, and organizationally plausible.
Report agents turn the resulting graph into three separate documents: the main report, the implementation guide, and the societal/industry impact report.
A guardian layer tries to filter out obvious garbage and anything in categories we simply do not want to publish (biohazards, weapons, high-risk surveillance, etc.).
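To make the choreography concrete, here's a heavily simplified toy sketch of that loop. Every name in it (ProblemStatement, propose, critique, revise, guardian_ok) is a hypothetical stand-in, not the actual codebase:

    from dataclasses import dataclass, field

    @dataclass
    class ProblemStatement:
        domain: str
        subdomain: str
        statement: str
        constraints: list[str] = field(default_factory=list)

    # Guardian categories (abridged): anything tagged with one of these is dropped.
    BANNED_CATEGORIES = {"biohazard", "weapon", "high-risk-surveillance"}

    def guardian_ok(candidate: dict) -> bool:
        return not (set(candidate.get("categories", [])) & BANNED_CATEGORIES)

    def refine(problem: ProblemStatement, agents: list, max_rounds: int = 5):
        """One agent proposes; the others critique; the proposal is revised
        until no objections remain or the round limit is hit."""
        proposer, critics = agents[0], agents[1:]
        candidate = proposer.propose(problem)
        for _ in range(max_rounds):
            objections = [o for c in critics if (o := c.critique(candidate))]
            if not objections:  # contradictions resolved
                break
            candidate = proposer.revise(candidate, objections)
        return candidate if guardian_ok(candidate) else None

The real pipeline layers prompt schemas and consistency checks on top, but the shape (propose, collide, revise, filter) is the same.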
Right now it runs on a mixture of frontier models behind an abstraction layer so we can swap individual agent models without rewriting the pipeline. Model selection is important, but perhaps less so than the framework and choreography: prompt schemas, roles, collisions, constraints, and consistency checks contribute most to the end result.
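The abstraction layer is the standard registry-plus-interface trick: every agent codes against one minimal interface, and a registry maps roles to whatever backs them today. A rough sketch, assuming a generic complete() call (provider and model names below are placeholders):

    from typing import Protocol

    class LLM(Protocol):
        """The minimal interface every agent codes against."""
        def complete(self, prompt: str) -> str: ...

    class ProviderABackend:
        def __init__(self, model: str):
            self.model = model
        def complete(self, prompt: str) -> str:
            return f"[{self.model}] ..."  # real version calls provider A's SDK

    class ProviderBBackend:
        def __init__(self, model: str):
            self.model = model
        def complete(self, prompt: str) -> str:
            return f"[{self.model}] ..."  # real version calls provider B's SDK

    # Swapping the model behind any role is a one-line registry change;
    # no agent or pipeline code gets rewritten.
    MODEL_REGISTRY: dict[str, LLM] = {
        "problem_hunter": ProviderABackend("model-a"),
        "domain_agent": ProviderBBackend("model-b"),
        "report_writer": ProviderABackend("model-c"),
    }

    def run_agent(role: str, prompt: str) -> str:
        return MODEL_REGISTRY[role].complete(prompt)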
Every published innovation is a shared vision among at least 18 individual AI agents.