The Problem: Traditional compression algorithms operate on 1D streams. They are blind to the spatial, temporal, or hierarchical architecture of modern data (3D models, nested JSON, multi-modal blobs). When you force these structures into a 1D window, you leave significant entropy on the table.
The Solution: PXG v3.0 maps binary data to a 13505x13505 grayscale grid. By treating data as a coordinate system rather than a line, our engine utilizes adaptive tiling to isolate redundant blocks across the entire file, not just a sliding window.
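To make the coordinate-system idea concrete, here is a minimal sketch of placing a linear byte stream onto a 13505-wide grid in row-major order. This is purely illustrative — PXG's actual layout and tiling order are not published:

```python
# Illustrative only: PXG's real placement scheme isn't published.
# Row-major mapping of a linear byte offset onto the 2D grid, so that
# structure repeating at large stream distances can align spatially.
GRID_SIDE = 13505  # grid width/height used by PXG v3.0

def offset_to_cell(offset: int) -> tuple[int, int]:
    """Map a linear byte offset to a (row, col) grid cell."""
    return divmod(offset, GRID_SIDE)

def cell_to_offset(row: int, col: int) -> int:
    """Inverse mapping: grid cell back to the linear byte offset."""
    return row * GRID_SIDE + col
```

The round trip is exact, which is what lets a 2D engine address the original bytes without any auxiliary position table.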
Key Benchmarks:
Efficiency: Consistent 60-88% reduction on JSON, PDF, and FBX/3D vertex data.
Searchability: This is an in-place succinct structure. The data is the index. You can query strings, hex patterns, or metadata directly from the compressed binary.
Latency: Seek-latency is clocked at ~0.04ms. Search speed for a complex pose library (225 chunks) is ~6.00ms.
Verification: 100% Byte-Perfect Lossless reconstruction.
The Ecosystem: PXG isn't just a utility; it's designed as the foundation for the AXIS ecosystem—enabling zero-load-time gaming and high-density "Heavy Data" pipelines.
The web demo is live at the URL above. No signups, no trackers. Upload a file (Max 50MB for the demo), download the .pxg, and use the Debug Index to search the compressed results in real-time.
I’ll be here to answer technical questions regarding the tiling logic.
truth_behold•1d ago
The core of PXG v3.0 treats the bitstream as a 2D topology. Traditional LZ-based methods rely on a sliding dictionary window (e.g., 128MB in Zstd), which works for local redundancy but fails to capture global spatial patterns in complex files like 3D vertex clouds or deep JSON hierarchies.
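The window-vs-global distinction can be illustrated with naive content-addressed block deduplication: hash fixed-size blocks over the whole file and match duplicates at any distance. This is a toy model of "global redundancy capture," not PXG's actual tiling:

```python
import hashlib
from collections import defaultdict

def global_duplicate_blocks(data: bytes, block_size: int = 4096):
    """Toy global dedup: find identical fixed-size blocks anywhere in
    the input, with no window limit. Returns {digest: [offsets]} for
    every block content that occurs more than once."""
    seen = defaultdict(list)
    for off in range(0, len(data) - block_size + 1, block_size):
        digest = hashlib.sha256(data[off:off + block_size]).hexdigest()
        seen[digest].append(off)
    return {h: offs for h, offs in seen.items() if len(offs) > 1}

# Two identical 4 KB blocks separated by a 1 MB gap are still matched,
# because matching is content-addressed rather than window-limited.
data = b"A" * 4096 + b"\x00" * (1 << 20) + b"A" * 4096
```

A sliding-window matcher with a window smaller than the gap would never see the second copy; a global (or 2D-tiled) view finds it regardless of distance.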
On Searchability: We achieve the ~0.04ms seek time by using an in-memory succinct data structure. Essentially, during the "Decomposition" phase, we build a bit-vector index that remains compressed. This allows us to perform Rank/Select operations directly on the tiles without decompressing the surrounding blocks.
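Rank and Select are the standard primitives of succinct data structures: rank counts set bits up to a position, select finds the position of the k-th set bit. A minimal, unoptimized sketch of those operations (not PXG's actual index, which would use machine-word popcounts and o(n) sampling) looks like:

```python
class BitVector:
    """Minimal rank/select bit-vector with a sampled rank table.
    Production succinct structures use o(n) extra space and word-level
    popcount; this sketch samples cumulative rank every 64 bits."""

    def __init__(self, bits: list[int]):
        self.bits = bits
        self.samples = [0]  # cumulative popcount at each 64-bit boundary
        total = 0
        for i, b in enumerate(bits, 1):
            total += b
            if i % 64 == 0:
                self.samples.append(total)

    def rank1(self, i: int) -> int:
        """Number of 1-bits in bits[0:i]."""
        block = i // 64
        return self.samples[block] + sum(self.bits[block * 64:i])

    def select1(self, k: int) -> int:
        """Position of the k-th 1-bit (1-indexed); -1 if absent."""
        count = 0
        for pos, b in enumerate(self.bits):
            count += b
            if count == k:
                return pos
        return -1
```

With such an index over tile boundaries, answering "which tile holds byte i" is a rank query and "where does tile k start" is a select query, neither of which requires decompressing neighboring blocks.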
The 13505 Grid: The grid resolution was chosen to optimize cache-locality on modern x86/ARM architectures. Each 4KB tile is designed to fit within the L1 cache during quantization, minimizing the I/O bottleneck that usually kills high-ratio compressors.
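For intuition on the 4 KB figure: a 4 KB tile corresponds to a 64x64 block of 1-byte grayscale cells (64 * 64 = 4096 bytes), which fits well inside a typical 32 KB L1 data cache. The geometry below is my assumption for illustration, not a confirmed detail of PXG:

```python
# Assumed tile geometry for illustration: 64x64 one-byte cells = 4 KB.
TILE_SIDE = 64
GRID_SIDE = 13505

def tile_bounds(tile_row: int, tile_col: int):
    """Cell bounds (row0, col0, row1, col1) of one tile, half-open.
    Edge tiles are clipped because 13505 is not a multiple of 64."""
    r0, c0 = tile_row * TILE_SIDE, tile_col * TILE_SIDE
    r1 = min(r0 + TILE_SIDE, GRID_SIDE)
    c1 = min(c0 + TILE_SIDE, GRID_SIDE)
    return r0, c0, r1, c1

TILES_PER_SIDE = -(-GRID_SIDE // TILE_SIDE)  # ceiling division
```

Since 13505 = 211 * 64 + 1, the grid yields 212 tile rows and columns, with the last row/column clipped to a single cell.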
I’ve kept the demo strictly browser-side for the initial pass—I want people to see the byte-perfect reconstruction for themselves. I’m especially curious to see how the engine handles edge cases in your specific datasets.
billconan•1d ago
truth_behold•1d ago