- Anchor-aware detection (sets the user's original query as context to reduce false positives; see the sketch below this list)
- Forensic root-cause tracing + ASCII chain visualization
- Built-in domain dictionaries (finance, healthcare, Kubernetes, ML, DevOps, quantum)
- Local (Ollama) decipher mode that translates agent jargon into human-readable text (cloud version coming soon)
- Integrations: Slack alerts, Notion/Airtable export, LangGraph/CrewAI wrappers
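To make "anchor-aware" concrete: embed the user's original query locally and compare each inter-agent message against that anchor, so on-topic shorthand doesn't get flagged as drift. This is a minimal sketch of the idea, not the actual InsAIts API; the sentence-transformers model and threshold are just illustrative choices.

```python
# Sketch of anchor-aware drift detection (not the InsAIts API; names/values are illustrative).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model that runs locally

def looks_off_anchor(original_query: str, agent_message: str, threshold: float = 0.35) -> bool:
    """Flag an inter-agent message whose embedding drifts too far from the user's original query."""
    query_vec, msg_vec = model.encode([original_query, agent_message], convert_to_tensor=True)
    similarity = util.cos_sim(query_vec, msg_vec).item()
    return similarity < threshold  # low similarity -> possible shorthand/jargon drift

# The anchor keeps on-topic shorthand from being treated as an anomaly.
anchor = "Summarize Q3 revenue by region for the board deck"
print(looks_off_anchor(anchor, "agg rev q3 x-region, fmt: board deck"))   # likely False
print(looks_off_anchor(anchor, "proto-7 handshake ack; defer to node-b")) # likely True
```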
Privacy-first: local embeddings by default, so nothing leaves your machine unless you opt into cloud decipher. The free tier works without an API key (local only). Also running limited lifetime deals for early supporters.

Quick install:

```bash
pip install insa-its[full]
```

Demos included:
- Live terminal dashboard
- Marketing team agent simulation (watch shorthand emerge in real time)
GitHub: https://github.com/Nomadu27/InsAIts
PyPI: https://pypi.org/project/insa-its/
Docs: https://insaitsapi-production.up.railway.app/docs

Would love feedback, especially from anyone building agent crews or running multi-LLM systems in production. What's your biggest pain point with agent observability?

Thanks for checking it out!
Cristian
kxbnb•1w ago
Re: your question about observability pain points - the one I keep hitting is visibility at the external API boundary. Most agent observability tooling (including OTEL-based traces) shows what the agent intended to send, but not necessarily what actually hit the wire when calling external services.
When an agent makes a tool call that hits Stripe, Shopify, or any third-party API, you want to see the actual HTTP request/response - not just the function call in your trace. Especially for debugging "works locally, fails in prod" scenarios or when the vendor says "your request was malformed."
I built toran.sh for this - a transparent proxy that captures wire-level requests to external APIs. It complements tools like InsAIts, since you get both the inter-agent communication view and the external boundary view.
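To make the "intended vs. what hit the wire" point concrete, here's a minimal sketch of the general pattern (not toran.sh's actual configuration): point the agent process at a capture proxy via the standard proxy environment variables, so every outbound tool call is recorded at the HTTP level. The proxy address, Stripe endpoint, and API key below are placeholders.

```python
# Sketch: route the agent's outbound tool calls through a capture proxy so the
# wire-level request/response is recorded, independent of what the trace says
# the agent "intended" to send. Proxy URL and credentials are illustrative.
import os
import requests

# Most HTTP clients (requests, httpx, many vendor SDKs) honor these env vars,
# so the tool code itself doesn't need to change.
os.environ["HTTP_PROXY"] = "http://localhost:8080"   # assumed local capture proxy
os.environ["HTTPS_PROXY"] = "http://localhost:8080"

def call_stripe_tool(customer_id: str) -> dict:
    """Example external tool call: the proxy sees the exact headers, body, and retries on the wire."""
    resp = requests.get(
        f"https://api.stripe.com/v1/customers/{customer_id}",
        headers={"Authorization": "Bearer sk_test_..."},  # placeholder key
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

For HTTPS traffic you'd also need the agent process to trust the proxy's CA certificate, which is the usual trade-off of wire-level capture.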
What's your take on capturing outbound API calls vs focusing on agent-to-agent communication?