The idea is simple to state but hard to implement: enforce a hard limit of 1 hash per second per node, making Proof-of-Work (PoW) not only ASIC/GPU-proof, but even CPU-proof.
To make this concrete, I’ve included short demo videos that show: (1) mining attempts being rejected when a node exceeds 1 hash/sec: https://youtu.be/i5gzzqFXXUk (2) a visual representation of mining during live calls: https://youtu.be/gPz7d3lGz1k
You can also try the MVP local client here: https://grahambell.io/mvp/Proof_of_Witness.html
If you start mining and increase the mining speed, attempts and blocks start getting rejected in real time. The goal is to bring back the fairness of early 2009 solo mining, except enforced by the network, not hardware scarcity. (Twist: in this local client, imagine it happening during an active audio/video calling session).
Why does this matter? Today, PoW only validates the final result. It has no visibility into how, or how fast, the result was produced. That is what enables parallel mining, mining pools, and hardware dominance through capital advantage. In practice, meaningful participation now often requires six-figure investment, whether in staking, hardware, electricity, or maintenance. The majority simply cannot afford to participate, which has pushed blockchains toward increasing centralisation.
An analogy I’ve found useful: If Proof-of-Work is a math exam, then today miners solve it at home and only submit the question/answer sheet. No one sees the working, the timing, or whether multiple calculators were used.
But what if miners also had to submit the working sheet and solve the exam in real time, in an exam hall, under exam conditions and decentralised invigilation?
The core idea: This local client adds an external, decentralised observer layer outside the miner’s control, which ensures miners follow the mining rules and lets the network verify the process (the “how”), not just the outcome. As a result, mining attempts become externally visible (exposed) and rate-limited.
This MVP naturally caps mining to 1 hash/sec per node, because every attempt is observable and immediately detected and rejected if too fast.
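To make the rate-limiting idea concrete, here is a minimal sketch (not the actual MVP code; the class and field names are hypothetical) of an observer that rejects any attempt arriving less than one second after a node’s previous accepted attempt:

```python
class AttemptObserver:
    """Toy sketch of the observer-side rate limit: one hash attempt
    per second per node. Attempts that arrive too fast are rejected."""

    MIN_INTERVAL = 1.0  # minimum seconds between accepted attempts

    def __init__(self):
        # node_id -> timestamp of that node's last accepted attempt
        self.last_attempt = {}

    def submit_attempt(self, node_id, timestamp):
        last = self.last_attempt.get(node_id)
        if last is not None and timestamp - last < self.MIN_INTERVAL:
            return "rejected"  # too fast: the attempt is discarded
        self.last_attempt[node_id] = timestamp
        return "accepted"
```

Because every attempt passes through the observer, exceeding the cap is detected immediately rather than inferred after the fact.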
“Can’t I just propose blocks without being observed?” No. Proposed blocks, even if valid, keep getting rejected unless they are signed by a consensus of observers. This forces miners to operate under observation and follow the rules. Think of observer signatures as a verification stamp: without it, blocks are ignored.
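The acceptance rule can be sketched as a quorum check (illustrative only; the field names and quorum parameter are assumptions, not the MVP’s actual data structures). A block needs both a valid PoW result and enough genuine observer signatures:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    proposer: str                       # hypothetical miner identifier
    valid_pow: bool                     # PoW result checks out
    observer_sigs: set = field(default_factory=set)

def accept_block(block, known_observers, quorum):
    """A valid PoW result alone is not enough: the block must also
    carry signatures from at least `quorum` registered observers."""
    if not block.valid_pow:
        return False
    genuine = block.observer_sigs & known_observers  # ignore unknown signers
    return len(genuine) >= quorum
```

The design choice here is that the observer stamp, not the hash itself, is what makes a block admissible, so mining outside the observed process is wasted effort.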
“Can’t I just spin up a million miners?” That’s the obvious and correct question. You can add more nodes, but registering them is intentionally difficult, costly, and probabilistic. Think of it as a changing number-guessing race: everyone competes to guess the same random number, and a correct guess registers a node and changes the number for everyone. Each attempt is costly and rate-limited (e.g., 1 attempt per second per participant). This makes parallel mining possible, but expensive, slow, and observable rather than free.
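A toy simulation of that registration race (all names and the target range are illustrative assumptions; the real mechanism would also enforce the per-second rate limit and per-attempt cost):

```python
import secrets

def registration_round(secret_target, guess):
    """One guess in the hypothetical registration game. A correct
    guess registers a node; the target then changes for everyone."""
    return guess == secret_target

# One participant grinding through guesses. In the real network each
# guess would be rate-limited to ~1/sec and cost resources, so the
# expected time to register scales with the target space.
target = secrets.randbelow(1000)
registered = False
for guess in range(1000):
    if registration_round(target, guess):
        registered = True
        target = secrets.randbelow(1000)  # success resets the game
        break
```

Because the target changes on every successful registration, parallel guessers mostly race each other rather than compounding one operator’s advantage.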
This isn’t a polished startup. It’s an MVP that challenges an assumption I used to believe myself.
If you’re technical, curious, and interested, I’d love to discuss it further.
I'm also looking for the first group of testers to help stress-test the P2P version when it comes out. If you want to run one of the early nodes, I've put a participation list here: https://grahambell.io/mvp/#waitlist
More context: https://grahambell.io