My brother and I built Sova AI (https://ayconic.io/sova), an Android agent that actually controls your installed apps.
We were incredibly frustrated with the current state of mobile AI. Built-in assistants like Gemini are deeply integrated into the OS, yet if you ask them to "Order an Uber to the airport" or "Send a message to my friends group on Telegram that I'm late", they mostly just give you web search results or a button to open the app yourself. They don't do the work. (The Perplexity "assistant" is just a browser agent :/ )
So, we built an agent that does operate your phone. (NO root, NO adb, NO PC, NO appium/whatever, NO usb, NO browser)
How it works: You give Sova a prompt, either voice or text (you can make it your default assistant if you like). Instead of relying on official app APIs that mostly don't exist, Sova acts as a virtual human: it clicks, scrolls, types, etc. It uses the Android Accessibility API to read the screen's UI node tree. We figured out how to do this entirely on-device as a standard Kotlin app: no tethering to a PC, no Appium, no root, no Shizuku/ADB workarounds. Just an app even your granny can use.

AI models: We currently support the main cloud providers (OpenAI, Gemini, Anthropic, DeepSeek, etc.) and are working toward support for local models running on your own machine (Ollama, LM Studio, etc.).

Pricing: 100% free, Bring Your Own Key (BYOK). We aren't charging for the Sova engine right now. You plug in your own API key (OpenAI, Claude, whatever you prefer), and you only pay the provider for the tokens you use.
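To make the "read the UI node tree" step concrete, here is a minimal sketch of the idea. The types below (`UiNode`, `Bounds`) are simplified stand-ins we made up for illustration; the real app would read live nodes from Android's `AccessibilityNodeInfo` via an `AccessibilityService`:

```kotlin
// Hypothetical, simplified stand-ins for Android's AccessibilityNodeInfo tree.
data class Bounds(val left: Int, val top: Int, val right: Int, val bottom: Int)

data class UiNode(
    val className: String,
    val text: String?,
    val clickable: Boolean,
    val bounds: Bounds,
    val children: List<UiNode> = emptyList()
)

// Flatten the node tree into a compact, indexed text description that can be
// put into the LLM prompt, so the model can answer "tap element [3]".
fun describeTree(root: UiNode): List<String> {
    val lines = mutableListOf<String>()
    fun walk(node: UiNode) {
        // Keep only nodes the agent can act on or read (clickable / has text).
        if (node.clickable || !node.text.isNullOrBlank()) {
            val label = node.text ?: ""
            lines += "[${lines.size}] ${node.className} \"$label\" " +
                "clickable=${node.clickable} " +
                "bounds=(${node.bounds.left},${node.bounds.top}," +
                "${node.bounds.right},${node.bounds.bottom})"
        }
        node.children.forEach(::walk)
    }
    walk(root)
    return lines
}
```

The point of the flattened, indexed form is that the model never needs raw pixels to pick a target; it can reference elements by index, and the agent resolves the index back to on-screen bounds before dispatching a gesture.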
The Google Play Ban: Because we use the Accessibility API for "universal automation" (literally mapping and clicking through other apps), Google Play rejected our submission. It's ironic: they banned us for building exactly the agentic behavior that Gemini promises but fails to deliver. So we are hosting the APK ourselves: https://sova.ayconic.io
The Challenges: Building this wasn't easy. Translating LLM outputs into reliable X/Y coordinates on dynamic Android screens, across thousands of different device resolutions, is a nightmare. The way different model providers resize images added yet another layer of complexity. The agent doesn't succeed 100% of the time, and we had to fight our perfectionism a lot :)
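The resolution problem boils down to this: the model returns a tap point in the coordinate space of the (possibly resized) screenshot it saw, which must be mapped back to physical device pixels before dispatching a gesture. A minimal sketch, with made-up dimensions for illustration:

```kotlin
// Hypothetical helper: map a tap point the model returned on a resized
// screenshot back to physical device pixels. Providers resize images
// differently, so the scale factors have to be computed per request.
data class Point(val x: Int, val y: Int)

fun mapToDevice(
    modelPoint: Point,
    modelWidth: Int, modelHeight: Int,   // dimensions of the image the model saw
    deviceWidth: Int, deviceHeight: Int  // physical screen resolution
): Point {
    val sx = deviceWidth.toDouble() / modelWidth
    val sy = deviceHeight.toDouble() / modelHeight
    // Round and clamp so the dispatched gesture never lands off-screen.
    val x = (modelPoint.x * sx).toInt().coerceIn(0, deviceWidth - 1)
    val y = (modelPoint.y * sy).toInt().coerceIn(0, deviceHeight - 1)
    return Point(x, y)
}
```

For example, a point at (512, 512) on a 1024x1024 model image maps to (540, 1200) on a 1080x2400 screen. This simple uniform scaling breaks down when a provider letterboxes or crops the image, which is part of why per-provider handling is painful.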
We’d love for you to download the APK, plug in your key, and try to break it. Which apps completely confuse the agent?

Roadmap: local model support via Ollama, LM Studio, and similar tools; predefined rules and personas for your tasks; detailed usage statistics; support for OpenRouter and for enterprise models on Amazon Bedrock, Google Vertex AI, and Azure AI Foundry; iOS support.
What else would you like to see?
The video demo is here: https://www.youtube.com/watch?v=r-x6hRmtBy0 and the APK is here: https://ayconic.io/sova We're here to answer your questions and listen to feedback on Telegram and Discord. It's not perfect yet, but it gets the job done.