Modern CMOS sensors produce grainy, unstable pixel values even when pointed at a static scene: you can film the same white wall at 30 fps and never capture two identical frames. That felt like a decent physical entropy source, so I built a TRNG around it and called it Aegis Optikon(.com).
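A quick way to see this for yourself (a minimal sketch, assuming OpenCV and a webcam at device index 0):

    # Grab two consecutive frames of a static scene and count differing pixels.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    ok1, frame1 = cap.read()
    ok2, frame2 = cap.read()
    cap.release()
    assert ok1 and ok2, "camera capture failed"

    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    changed = np.count_nonzero(gray1 != gray2)
    print(f"{changed}/{gray1.size} pixels differ between consecutive frames")

Raw change counts overstate usable entropy, though: sensor noise is biased and spatially correlated, which is why the pipeline whitens and compresses before anything gets served.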
The pipeline right now looks like this (a code sketch of the core steps follows the list):
1. Capture frames from a camera (client devices)
2. Extract “noisy” pixel data
3. Whiten and compress it
4. Send entropy packets to the server
5. Mix multiple streams together
6. Extract with BLAKE3
7. Attach timestamps and hashes per packet for basic verifiability
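To make those steps concrete, here’s a stripped-down sketch of the core transforms in Python. It’s illustrative, not the production code: the function names, the 64:1 compression ratio, and the packet fields are all assumptions, and it uses numpy plus the blake3 PyPI package.

    import time
    import numpy as np
    from blake3 import blake3  # pip install blake3

    def extract_raw(frame: np.ndarray) -> bytes:
        # Step 2: keep only each pixel's least-significant bit,
        # where most of the frame-to-frame noise shows up.
        lsbs = (frame.ravel() & 1).astype(np.uint8)
        return np.packbits(lsbs).tobytes()

    def mix_and_extract(streams: list[bytes], out_len: int) -> bytes:
        # Steps 3, 5, 6: whiten/compress by feeding far more raw input
        # than requested output into a single BLAKE3 state, mixing all
        # client streams together and using the XOF for the output.
        total = sum(len(s) for s in streams)
        assert total >= 64 * out_len, "not enough raw material for this request"
        h = blake3()
        for s in streams:
            h.update(s)
        return h.digest(length=out_len)

    def make_packet(entropy: bytes) -> dict:
        # Step 7: timestamp and digest per packet for basic verifiability.
        return {
            "ts_ns": time.time_ns(),
            "entropy": entropy.hex(),
            "digest": blake3(entropy).hexdigest(),
        }

The 64:1 ratio is an arbitrary placeholder: how much raw material each output byte needs depends on the measured min-entropy of the source, which is exactly the capacity question below.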
I’m not a seasoned backend/security engineer; I leaned heavily on AI tools (Copilot/DeepSeek) to get the code into shape, then iterated until it worked end‑to‑end. It’s running on a Contabo VPS, with a simple HTML/session‑based frontend and username+email login.

What I’d really like from HN:
Feedback on the entropy model (image sensor noise as a source)
Thoughts on the extraction/mixing pipeline
How to think about capacity: how much entropy can I safely serve, and how does this scale? (Rough sketch of my current arithmetic after this list.)
Security concerns with the current architecture (e.g., inlined HTML/CSS/JS, API design, threat model)
Any obvious “don’t do this” mistakes I’m making
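On the capacity question specifically, my current back-of-envelope looks like this. Every constant is a placeholder assumption; the per-pixel min-entropy in particular has to be measured (e.g. with the NIST SP 800-90B estimators), not guessed:

    # All constants are placeholder assumptions, not measurements.
    H_MIN_PER_PIXEL = 0.05   # bits of min-entropy per pixel LSB (must be measured)
    PIXELS = 640 * 480       # frame resolution
    FPS = 30
    SAFETY = 0.5             # discount for estimation error and correlated noise

    raw_min_entropy_bps = H_MIN_PER_PIXEL * PIXELS * FPS
    servable_bytes_per_sec = raw_min_entropy_bps * SAFETY / 8
    print(f"~{servable_bytes_per_sec / 1000:.0f} kB/s servable per camera")

Under those made-up numbers that comes to roughly 29 kB/s per camera, scaling roughly linearly with the number of honest cameras; the mixing step has to ensure a dishonest stream can’t drag down the floor set by the honest ones.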
Right now only the free tier is active: 8/16/32/64‑byte requests, up to 1 MB/month per user. Paid tiers are placeholders until I’m confident in the design and implementation.

I’d really appreciate brutally honest feedback, especially from people who’ve worked on RNGs, cryptography, or high‑assurance systems. If you see something fundamentally flawed, I’d rather hear it now.