I’m one of the maintainers of Bridge Anonymization. We built this because the existing solutions for translating sensitive user content are insufficient for many of our privacy-conscious clients (governments, banks, healthcare providers, etc.).
We couldn't send PII to third-party APIs, but standard redaction destroyed the translation quality. If you scrub "John" to "[PERSON]", the translation engine loses gender context (often defaulting to masculine), which breaks grammatical agreement in languages like French or German.
So we built a reversible, local-first pipeline for Node.js/Bun. Here is how we implemented the tricky parts:
0. The Mapping
We use XML-like tags with IDs that uniquely identify each piece of PII, e.g. `<PII type="PERSON" id="1"/>`. Translation models and the systems around them have worked with XML data structures since the dawn of Computer-Aided Translation tools, so this improves compatibility with existing workflows and systems. A `PIIMap` is stored locally for rehydration after translation (AES-256-GCM-encrypted by default).
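To make the idea concrete, here is a minimal sketch of the mapping step in TypeScript. The names (`mask`, `encryptMap`, `decryptMap`, `PiiMap`) are illustrative, not our shipped API; the encryption uses Node's built-in `node:crypto` AES-256-GCM as described above:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

type PiiEntry = { type: string; value: string };
type PiiMap = Map<string, PiiEntry>;
type Detection = { start: number; end: number; type: string };

// Replace detected PII spans with XML-like placeholder tags and record the originals.
function mask(text: string, detections: Detection[]): { masked: string; map: PiiMap } {
  const map: PiiMap = new Map();
  let out = "";
  let cursor = 0;
  let id = 0;
  for (const d of detections) {
    id += 1;
    map.set(String(id), { type: d.type, value: text.slice(d.start, d.end) });
    out += text.slice(cursor, d.start) + `<PII type="${d.type}" id="${id}"/>`;
    cursor = d.end;
  }
  return { masked: out + text.slice(cursor), map };
}

// Encrypt the serialized map with AES-256-GCM before it ever touches disk.
function encryptMap(map: PiiMap, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(JSON.stringify([...map]), "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// Decrypt and rebuild the map for rehydration after translation.
function decryptMap(enc: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): PiiMap {
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAuthTag(enc.tag); // GCM authenticates: tampered ciphertext throws here
  const json = Buffer.concat([decipher.update(enc.data), decipher.final()]).toString("utf8");
  return new Map(JSON.parse(json));
}
```

Only the masked text ever leaves the machine; the key and the encrypted map stay local.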
1. Hybrid Detection Engine
Obviously neither Regex nor NER was enough on its own.
- Structured PII: We use strict Regex with validation checksums for things like IBANs (Mod-97) and Credit Cards (Luhn).
- Soft PII: For names and locations, we run a quantized `xlm-roberta` model via `onnxruntime-node` directly in the process. This lets us avoid a Python sidecar while keeping the package "lightweight" (still ~280 MB for the quantized model, but acceptable for desktop environments).
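The two checksums mentioned above are standard algorithms; a hedged sketch of both (function names are ours, not the library's actual API):

```typescript
// Luhn: double every second digit from the right, subtract 9 from doubles > 9;
// the number is valid when the digit sum is divisible by 10.
function luhnValid(digits: string): boolean {
  if (!/^\d{12,19}$/.test(digits)) return false;
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

// IBAN Mod-97: move the first four chars to the end, map letters A–Z to 10–35,
// and the resulting integer must be ≡ 1 (mod 97). BigInt avoids overflow.
function ibanValid(iban: string): boolean {
  const s = iban.replace(/\s+/g, "").toUpperCase();
  if (!/^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$/.test(s)) return false;
  const rearranged = s.slice(4) + s.slice(0, 4);
  const numeric = rearranged.replace(/[A-Z]/g, (c) => String(c.charCodeAt(0) - 55));
  return BigInt(numeric) % 97n === 1n;
}
```

Running the checksum after the regex match is what keeps the false-positive rate down: a 16-digit string is only flagged as a card number if Luhn actually passes.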
2. The "Hallucination" Guard (Fuzzy Rehydration)
LLMs often "mangle" the XML placeholders during translation (e.g., turning `<PII id="1"/>` into `< PII id = « 1 » >`). We implemented a Fuzzy Tag Matcher that uses tolerant regex patterns to detect these artifacts. It identifies the tag even if attributes are reordered or quotes are changed, ensuring we can always map the token back to the original encrypted value.
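A minimal sketch of the fuzzy-matching idea (the pattern below is our illustration, not the shipped implementation): a tolerant regex that recovers the numeric `id` even when spacing, quoting style, or attribute order has been mangled.

```typescript
// Accepts extra whitespace, ASCII/curly quotes or guillemets, an optional
// self-closing slash, and attributes in any order — as long as id=N survives.
const FUZZY_PII_TAG =
  /<\s*PII\b[^<>]*?\bid\s*=\s*["'«“”»]?\s*(\d+)\s*["'«“”»]?[^<>]*?\/?\s*>/giu;

// Swap every recognizable placeholder back for its original value.
function rehydrate(translated: string, lookup: (id: string) => string): string {
  return translated.replace(FUZZY_PII_TAG, (_match, id) => lookup(id));
}
```

The key property is that only the `id` digits must survive translation intact; everything else about the tag is treated as noise.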
3. Semantic Masking
We are currently working on "Semantic Masking": adding context attributes to the PII tag (like `<PII type="PERSON" gender="female" id="1"/>`) so the translation engine keeps grammatical context such as gender. For now, we rely on a lightweight lookup-table approach to avoid the overhead of a second ML model or the hassle of fine-tuning. So far this works nicely for most use cases.
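The lookup-table approach can be as simple as the following sketch (table contents and names are hypothetical, for illustration only):

```typescript
// Tiny name→gender table; a real one would be much larger and locale-aware.
const GENDER_HINTS: Record<string, "male" | "female"> = {
  john: "male",
  marie: "female",
  anna: "female",
};

// Emit the placeholder tag, attaching a gender attribute when the table knows the name.
function piiTag(type: string, value: string, id: number): string {
  const gender = type === "PERSON" ? GENDER_HINTS[value.toLowerCase()] : undefined;
  const genderAttr = gender ? ` gender="${gender}"` : "";
  return `<PII type="${type}"${genderAttr} id="${id}"/>`;
}
```

When the name is not in the table, the attribute is simply omitted and the engine falls back to its default agreement, which is no worse than plain redaction.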
The code is MIT licensed. I’d love to hear how others are handling the "context loss" problem in privacy-preserving NLP pipelines! I think this could quite easily be generalized to other LLM applications as well.