It's a lightweight Python library for inline data validation: think quick checks on dicts, lists, function args, or API payloads, without defining full schema classes as you would in Pydantic.
Why revive it? I kept running into cases where Pydantic (or Marshmallow) felt like overkill for scripts, CLIs, simple backends, or one-off data cleaning. I wanted something expressive but minimal: inline rules, decorators, no boilerplate models, built-in checks (email/url/phone/regex/range/length/unique/nullable/transforms/conditionals/nested), and clean error output.
Core ways to use it:
1. Standalone on data:

```python
from validatedata import validate_data

data = {"username": "alice", "email": "alice@example.com", "age": 25}
rules = {
    "keys": {
        "username": {"type": "str", "range": (3, 32)},
        "email": {"type": "email"},
        "age": {"type": "int", "range": (18, "any")},
    }
}

result = validate_data(data, rules)
if result.ok:
    print("Valid!")
else:
    print(result.errors)  # e.g. ["age: must be at least 18"]
```
Features include:
- Shorthand rules like `'email'`, `'int:18:to:99'`, `'phone'`
- Conditionals (`depends_on`), transforms (strip/upper), mutation for cleaned data
- Nested fields/items, strict/no-coercion mode
- Flexible date parsing via dateutil (ISO and natural-ish formats)
- MIT licensed, pytest on PRs, Python >=3.7, only one optional dependency (phonenumbers, for phone validation)
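If you're curious how a shorthand like `'int:18:to:99'` can map onto the explicit rule dicts shown above, here's a minimal, self-contained sketch of the idea. To be clear: the function name `expand_shorthand` and the exact output shape are my own illustration, not validatedata's actual internal parser.

```python
# Hypothetical sketch: expand a shorthand rule string such as
# 'int:18:to:99' into an explicit rule dict.
# This illustrates the concept, NOT validatedata's real internals.

def expand_shorthand(rule: str) -> dict:
    parts = rule.split(":")
    expanded = {"type": parts[0]}
    # 'int:18:to:99' -> type 'int' with an inclusive range (18, 99)
    if len(parts) == 4 and parts[2] == "to":
        expanded["range"] = (int(parts[1]), int(parts[3]))
    return expanded

print(expand_shorthand("email"))         # {'type': 'email'}
print(expand_shorthand("int:18:to:99"))  # {'type': 'int', 'range': (18, 99)}
```

The appeal of the shorthand is that one string replaces a nested dict for the common cases, while the dict form stays available when you need conditionals or transforms.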
Install: `pip install validatedata`
Repo: https://github.com/Edward-K1/validatedata
PyPI: https://pypi.org/project/validatedata/

Happy to hear feedback, bug reports, feature ideas, or use-case stories, especially if this saves anyone time on lightweight validation. Does this fill a gap for you, or am I missing something obvious? Thanks!