*Demo:* https://mnur.me/picunic/
*GitHub:* https://github.com/mohammed-nurulhoque/picunic
*How it works:*
- Splits the image into 8×16 pixel chunks (matching the terminal cell aspect ratio)
- Runs each chunk through a CNN encoder to get a 64-dim embedding
- Finds the Unicode character with the most similar embedding (cosine similarity)
- The CNN was trained on ~2000 Unicode characters rendered in DejaVu Sans Mono
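The matching step above can be sketched in a few lines: given a table of per-character embeddings, pick the character whose embedding has the highest cosine similarity to the chunk's embedding. This is a minimal illustration, not picunic's actual code; the names (`cosine`, `best_char`) and the toy 3-dim vectors standing in for the real 64-dim embeddings are mine.

```rust
// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

// Pick the character whose embedding is most similar to the chunk's.
fn best_char(chunk: &[f32], table: &[(char, Vec<f32>)]) -> char {
    table
        .iter()
        .max_by(|(_, a), (_, b)| {
            cosine(chunk, a).partial_cmp(&cosine(chunk, b)).unwrap()
        })
        .map(|(c, _)| *c)
        .unwrap()
}

fn main() {
    // Toy 3-dim embeddings standing in for the real 64-dim ones.
    let table = vec![
        ('#', vec![1.0, 1.0, 1.0]),
        ('.', vec![0.1, 0.0, 0.1]),
        (' ', vec![0.05, 0.0, 0.0]),
    ];
    let chunk = vec![0.9, 1.0, 0.8]; // a dense chunk
    println!("{}", best_char(&chunk, &table));
}
```

In practice a linear scan over ~2000 candidates per chunk is cheap enough that no approximate nearest-neighbour index is needed.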
Everything runs client-side via WebAssembly - no server needed. Features include adjustable width, dithering for photos, and ASCII-only mode.
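For the "dithering for photos" option, a standard approach is error-diffusion dithering; the sketch below is a textbook Floyd–Steinberg pass over a grayscale buffer, offered as an assumption about the general technique rather than picunic's implementation.

```rust
// Floyd-Steinberg dithering on a grayscale image in [0, 1].
// Each pixel is quantized to 0 or 1 and the quantization error is
// diffused to not-yet-processed neighbours.
fn dither(pixels: &mut [f32], w: usize, h: usize) {
    for y in 0..h {
        for x in 0..w {
            let i = y * w + x;
            let old = pixels[i];
            let new = if old < 0.5 { 0.0 } else { 1.0 };
            let err = old - new;
            pixels[i] = new;
            // Standard Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w {
                pixels[i + 1] += err * 7.0 / 16.0;
            }
            if y + 1 < h {
                if x > 0 {
                    pixels[i + w - 1] += err * 3.0 / 16.0;
                }
                pixels[i + w] += err * 5.0 / 16.0;
                if x + 1 < w {
                    pixels[i + w + 1] += err * 1.0 / 16.0;
                }
            }
        }
    }
}

fn main() {
    // A flat mid-gray 4x4 image: dithering produces a mix of 0s and 1s
    // whose average stays close to the original 0.5.
    let mut img = vec![0.5f32; 16];
    dither(&mut img, 4, 4);
    println!("{:?}", img);
}
```

Diffusing the error rather than thresholding each pixel independently is what preserves smooth gradients in photos once they are reduced to a small character palette.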
Built with Rust (compiled to WASM), ONNX Runtime Web, and vanilla JavaScript. The original terminal version is also available in the repo.
Currently it works best on images with clear dark/light contrast. Would love feedback on improving the quality!