I'm David, CEO of Expanso. Today we're launching Expanso Skills — a catalog of 200+ production-ready data processing pipeline recipes for AI agents.
The problem: Every team rebuilds the same data processing primitives from scratch. PII scrubbing, log aggregation, GDPR routing, schema enforcement, dead letter queues. Each time, slightly broken. Each team, from zero.
What we built: Reusable, composable pipeline recipes that run on Expanso Edge (our open-source distributed compute layer). Think npm packages, but for data processing.
A few examples:

- `remove-pii` — Strips sensitive fields before data reaches your AI agent
- `parse-logs` — 1,000 raw log lines → 1 structured JSON digest (99.9% reduction)
- `cross-border-gdpr` — Routes data based on jurisdiction automatically
- `dead-letter-queue` — Captures and retries failed pipeline messages
- `fan-out-kafka` / `fan-out-s3` — Distribute processed data to multiple destinations
- `enforce-schema` — Validates and coerces incoming data before it hits downstream services
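To make the `parse-logs` idea concrete, here's a toy sketch of the kind of reduction it performs — collapsing many raw log lines into one structured JSON digest. This is illustrative only, not Expanso's actual implementation; the function name and log format are assumptions for the example.

```python
import json
import re
from collections import Counter

def parse_logs(lines):
    """Toy digest: collapse raw log lines into one structured JSON summary."""
    levels = Counter()
    errors = []
    for line in lines:
        # Assumed format: "<timestamp> <LEVEL> <message>"
        m = re.match(r"\S+ (\w+) (.*)", line)
        if not m:
            continue
        level, msg = m.groups()
        levels[level] += 1
        if level == "ERROR":
            errors.append(msg)
    return json.dumps({
        "total": sum(levels.values()),
        "by_level": dict(levels),
        "sample_errors": errors[:3],
    })

logs = [
    "2024-01-01T00:00:00Z INFO started",
    "2024-01-01T00:00:01Z ERROR connection refused",
    "2024-01-01T00:00:02Z INFO heartbeat",
]
print(parse_logs(logs))
```

The point is the shape of the transformation: downstream consumers (including AI agents) see one compact digest instead of the raw stream.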
Install and run in under 2 minutes:

```
expanso skill install remove-pii
expanso skill run remove-pii --input ./customer-data.csv
```
Why this matters for AI agents: Most agent frameworks assume agents can query whatever they want. That's a security and compliance disaster. Skills enforce least-privilege at the infrastructure level — agents never see raw data, only filtered, purpose-fit outputs.
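The "agents never see raw data" model can be sketched with a simple allowlist filter: only explicitly permitted fields survive the pipeline. This is a minimal illustration of the principle, not Expanso's implementation — the field names and function are hypothetical.

```python
# Hypothetical least-privilege filter: the agent only ever receives
# fields that are explicitly allowlisted for its purpose.
ALLOWED_FIELDS = {"order_id", "amount", "status"}

def filter_record(record, allowed=frozenset(ALLOWED_FIELDS)):
    """Drop every field not on the allowlist before handing data to an agent."""
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "order_id": 42,
    "amount": 19.99,
    "status": "shipped",
    "email": "jane@example.com",   # PII: never reaches the agent
    "ssn": "123-45-6789",          # PII: never reaches the agent
}
print(filter_record(raw))
```

Enforcing this at the infrastructure layer, rather than in each agent's prompt or application code, is what makes it a least-privilege guarantee instead of a convention.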
Product Hunt: https://www.producthunt.com/products/expanso-skills?utm_sour...
Happy to answer anything — architecture, specific skill implementations, edge deployment. Ask away.