If you’ve ever dealt with geocoding at scale, you’ve probably hit two recurring problems:
Garbage in = garbage out. Addresses are often messy (“2nd floor”, “/”, abbreviations, multiple addresses in one line…). Most geocoders will fail or return incorrect matches if the input isn’t perfectly normalized.
A result isn’t always a correct result. Many providers return something even when it’s wrong (e.g. shifting a house number or confusing similar street names), and assessing whether a geocoded result is actually right is surprisingly hard to automate.
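To make both problems concrete, here are a couple of made-up examples (the addresses and coordinates below are purely illustrative, not real data):

```python
# Illustrative only: invented addresses showing the two failure modes above.

# 1. Messy input: most geocoders choke on unnormalized strings like these.
messy_inputs = [
    "12 rue de la Paix / 2nd floor, Paris",          # floor info mixed into the street line
    "1600 Amphitheatre Pkwy, Mtn View CA 94043",     # heavy abbreviations
    "HQ: 5th Ave & 42nd St; warehouse: 10 Main St",  # two addresses in one field
]

# 2. Plausible-but-wrong output: a hit that "looks" fine but is off.
query  = "10 Boulevard Saint-Michel, Paris"
result = {"formatted": "10 Boulevard Saint-Martin, Paris", "lat": 48.869, "lon": 2.355}
# Same city, similar street name, valid coordinates -- and still the wrong street.
```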
Coordable tries to address both issues with AI and analytics:
Uses an LLM-based cleaner to normalize messy addresses (multi-country support).
Automatically evaluates geocoding accuracy by comparing the input address with the returned result the way a human reviewer would.
Lets you benchmark multiple providers (Google, HERE, Mapbox, Census, BAN, etc.) side by side (a rough sketch of this evaluate-and-compare loop follows below).
Includes a dashboard to visualize results and quality metrics, plus data exports.
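To give an idea of what “comparing like a human” and side-by-side benchmarking mean in practice, here is a minimal sketch in Python. It is my own illustration under assumed names: the `Provider` interface, the `plausibility_score` heuristic, and its weights are hypothetical, not Coordable’s implementation or any provider’s API.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Callable


@dataclass
class GeocodeResult:
    provider: str
    formatted: str   # the address string the provider returned
    lat: float
    lon: float


# A provider here is just "address string in, GeocodeResult out".
# Real implementations would wrap Google, HERE, Mapbox, Census, BAN, etc.
Provider = Callable[[str], GeocodeResult]


def plausibility_score(query: str, result: GeocodeResult) -> float:
    """Rough proxy for a human check: does the returned address
    resemble what was asked for, including the house number?"""
    similarity = SequenceMatcher(None, query.lower(), result.formatted.lower()).ratio()
    query_numbers = {tok for tok in query.split() if tok.isdigit()}
    result_numbers = {tok for tok in result.formatted.split() if tok.isdigit()}
    number_ok = 1.0 if query_numbers <= result_numbers else 0.0
    return 0.7 * similarity + 0.3 * number_ok


def benchmark(address: str, providers: list[Provider]) -> list[tuple[str, float]]:
    """Geocode one (already normalized) address with every provider
    and rank them by how plausible their answer looks."""
    scored = []
    for geocode in providers:
        result = geocode(address)
        scored.append((result.provider, plausibility_score(address, result)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In practice the comparison has to be fuzzier than this (unit/floor noise, abbreviations, locale-specific formats), which is exactly the part that is hard to automate.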
It’s not a new geocoder — it wraps existing APIs and focuses on data quality, comparison, and automation.
It’s currently in beta with free credits. If you work with geocoding or address data, I’d love to hear how you handle these challenges and what kind of analytics would be most useful to you.