The new release includes a real-time multi-camera dashboard, semantic video search, and an upgraded detection pipeline.
Sentinel Core is built around an AI-native architecture:
- YOLOv8 for real-time object detection
- CLIP for semantic search (text-to-video queries)
- FAISS for vector indexing
- FastAPI backend with WebSocket streaming
- Fully air-gapped deployment supported (no external network access required)
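As a rough sketch of how the embed-and-index stage fits together (the class, the frame IDs, and the 4/512-dimensional embeddings below are illustrative assumptions, not Sentinel Core's actual API), storing L2-normalized CLIP-style embeddings in an inner-product index reduces semantic search to a nearest-neighbor lookup, which is exactly what FAISS `IndexFlatIP` does at scale:

```python
import numpy as np


def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize so inner product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)


class FlatIndex:
    """Brute-force inner-product index, mirroring what FAISS IndexFlatIP
    computes (FAISS adds SIMD/GPU acceleration and ANN variants on top)."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.ids: list[str] = []  # e.g. "camera:frame" keys back into footage

    def add(self, frame_id: str, embedding: np.ndarray) -> None:
        """Index one frame embedding (produced by a CLIP image encoder)."""
        emb = normalize(embedding.astype(np.float32)).reshape(1, -1)
        self.vectors = np.vstack([self.vectors, emb])
        self.ids.append(frame_id)

    def search(self, query: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
        """Rank frames against a query embedding (from the CLIP text encoder)."""
        q = normalize(query.astype(np.float32))
        scores = self.vectors @ q
        top = np.argsort(-scores)[:k]
        return [(self.ids[i], float(scores[i])) for i in top]
```

Because both encoders project into the same space, a text query like "blue car near gate" becomes just another vector, and the same `search` call serves cross-camera retrieval: the index doesn't care which camera a frame ID came from.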
You can now:
- Stream multiple RTSP/IP camera feeds in a live grid view
- Search footage using natural language (e.g. “blue car near gate between 2 and 5 pm”)
- Perform cross-camera semantic retrieval
- Configure zone-based alerts
- Run everything locally without cloud dependency
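For the zone-based alerts, one common approach (sketched here under assumptions; the helper names are hypothetical, not Sentinel Core's API) is to test each detection's foot point, the bottom-center of its bounding box, against a user-drawn polygon via ray casting:

```python
# Sketch of zone-based alerting: a detection raises an alert when the
# bottom-center of its bounding box falls inside a user-drawn polygon.
# Function names and box format (x1, y1, x2, y2) are illustrative assumptions.

Point = tuple[float, float]


def point_in_polygon(p: Point, polygon: list[Point]) -> bool:
    """Ray-casting test: count how many polygon edges a rightward ray
    from p crosses; an odd count means p is inside."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the horizontal line through p, and the crossing
        # lies to the right of p -> the ray crosses this edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside


def foot_point(bbox: tuple[float, float, float, float]) -> Point:
    """Bottom-center of an (x1, y1, x2, y2) box: roughly where the
    detected object meets the ground plane."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2.0, y2)


def in_zone(bbox: tuple[float, float, float, float],
            zone_polygon: list[Point]) -> bool:
    """True if a detection's foot point lies inside the alert zone."""
    return point_in_polygon(foot_point(bbox), zone_polygon)
```

Using the foot point rather than the box center avoids false positives when a tall object (a person near the camera, say) visually overlaps a zone its base never enters.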
We designed this as a foundational engine for modern surveillance and physical AI systems.
We'd appreciate feedback on:
- Architecture decisions
- Performance optimizations
- Semantic search accuracy
- Deployment improvements
Happy to answer technical questions.