What surprised us: because Twitter's ranking algorithm adapts to what you engage with, consistent filtering starts reshaping your recommendations over time. You're implicitly signaling preferences to the algorithm. For some of us it "healed" our feed.
We currently run inference on our own servers, with an experimental on-device option, and we're working toward fully on-device execution to remove that dependency. Latency is acceptable on most hardware but not great on older machines. There's no data collection; everything except the model call runs locally.
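The server-first-with-local-fallback path described above can be sketched roughly as below. This is a minimal illustration under my own assumptions, not the project's actual code; `classify`, `local_model`, and `remote` are hypothetical names, and the "fail open" default is one reasonable design choice, not a confirmed one.

```python
# Hypothetical sketch: prefer the experimental on-device model,
# fall back to the hosted model call, and never hide a post if
# neither backend is available (fail open).

def classify(post: str, local_model=None, remote=None) -> str:
    """Return a moderation label ("keep" or "hide") for a post."""
    if local_model is not None:
        try:
            return local_model(post)   # experimental on-device path
        except Exception:
            pass                        # local failure: fall back to server
    if remote is not None:
        return remote(post)             # hosted model call (the one dependency)
    return "keep"                       # fail open rather than hide content

# Usage with stub backends standing in for real models:
remote_stub = lambda p: "hide" if "crypto" in p else "keep"
label = classify("gm, another crypto giveaway", remote=remote_stub)
```

Failing open matters for a filtering tool: a backend outage should degrade to an unfiltered feed, not a blank one.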
It doesn't work perfectly (figurative language trips it up), but it's meaningfully better than muting keywords, and we use it ourselves every day.
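To make the comparison with keyword muting concrete, here's a minimal sketch of why surface-string matching underperforms a semantic classifier. Everything here is illustrative: `MUTED`, `keyword_mute`, and the `llm_classify` placeholder are my own hypothetical names, not the tool's API.

```python
# Keyword muting matches surface strings; a model-based filter judges
# the topic. The two failure modes below are what muting can't avoid.

MUTED = {"election", "crypto"}

def keyword_mute(post: str) -> bool:
    """Hide a post iff it contains a muted keyword (case-insensitive)."""
    words = post.lower().split()
    return any(k in words for k in MUTED)

def llm_classify(post: str) -> bool:
    """Hypothetical stand-in for the model call: a semantic classifier
    would catch paraphrases and skip incidental mentions."""
    raise NotImplementedError

# Two characteristic keyword-muting failures:
keyword_mute("Thread on the upcoming vote and the latest polling")  # missed: same topic, no keyword
keyword_mute("My dog chewed up my crypto hardware wallet, lol")     # over-blocked: incidental mention
```

The trade-off runs the other way too, as noted above: a model-based filter has its own failure mode with figurative language, where a keyword match would at least be predictable.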
It's also promising that, as capability density improves, local / open models can start giving us more control over the algorithmic agents in our lives.
millanjp•3h ago
On the mobile side, we're working to get 4B models running on the Apple Neural Engine. The main bottleneck for mobile is actually battery life. Neither is quite optimized enough to formally brag about, but we're almost there!