presumably local-first
it is a problem of one's own making
A lot of people also expect the software to add features over time. In the old days, you'd ship a brand new major version, charge people for it, and stop working on the old one. With the App Store, I suppose you could technically abandon the old version and sell a whole new one, but then all your old users will be annoyed if the app is removed from the store or no longer works when they update their OS. You could gate new features behind a paywall, and I know some apps do this, but that adds to the complexity of the app, since you have to worry about features that work for some users but not others.
I think people also expect software nowadays to be cheap or free, likely because large corporations can fund free products (say, Gmail) by other means (say, ads or tracking users). That means users would balk if you asked them to pay $50 for your little calendar app, so if you did ask for a one-time payment, it would be $5-$10, which is nowhere near enough to recoup whatever time you spent, unless you hit it big. Hitting it big with an app nowadays is difficult, since there's so much competition in the App Stores and everyone has raced to the bottom to sell apps for pennies.
Unrelated, but I love coming across religious "hacks" like these that communities have developed over the years.
A similar one is the fishing line that Jews tied around New York to get around the rules of the Sabbath: https://www.npr.org/2019/05/13/721551785/a-fishing-line-enci....
If you charge a premium, customers will have high expectations.
However, while we are on the topic of planning apps, you should know that Todoist added the best use of AI I've ever seen. It's called Ramble mode: you just talk, and it instantly starts showing a list of tasks that updates as you go. It is extraordinary. I'm considering switching away from tasks.org for this one feature.
Here's a short video of it: https://www.youtube.com/watch?v=DIczFm3Dy5I
You need a paid plan (a free trial is OK) and to enable experiments before you can access it.
Anyone know how they might have done this?
However, it's still wild to me how fast and responsive it is. I can talk for 10 seconds and then in ~500ms I see the updates. Perhaps it doesn't even transcribe, but rather feeds the audio to a multimodal LLM along with whatever tasks it already knows about? Or maybe it's transcribing live as you talk, and when you stop it sends the transcript to the LLM.
Anyone have a sense of what model they might be using?
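The second guess above (live transcription, re-run extraction on each partial transcript) can be sketched roughly like this. Everything here is an assumption: `extract_tasks` is a stub standing in for whatever model Todoist actually calls, and the debounce figure just echoes the latencies mentioned above.

```python
def extract_tasks(transcript, known_tasks):
    """Stub standing in for the (hypothetical) LLM call: the real
    service would turn free-form speech into a structured task list.
    Here we just split on ' and ' to make the sketch runnable."""
    tasks = list(known_tasks)
    for phrase in transcript.split(" and "):
        phrase = phrase.strip()
        if phrase and phrase not in tasks:
            tasks.append(phrase)
    return tasks

def ramble_loop(partial_transcripts):
    """Re-run extraction on each partial transcript as a streaming
    speech-to-text engine emits them, so the task list updates while
    the user is still talking; a ~300-500 ms debounce after each
    pause would keep the UI feeling instant."""
    tasks = []
    for snapshot in partial_transcripts:
        tasks = extract_tasks(snapshot, [])
    return tasks

# Simulated partial transcripts from a live speech-to-text stream:
stream = [
    "buy milk",
    "buy milk and call the dentist",
    "buy milk and call the dentist and book flights",
]
print(ramble_loop(stream))
# ['buy milk', 'call the dentist', 'book flights']
```

The point of the structure is that each snapshot is cheap to re-process, so the visible list can update mid-sentence instead of waiting for the full recording.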
I want to say 300ms, which would coincide with your 500ms example.
I am also too lazy to google or AI it but it’s something I remember from when I taught ESL long ago.
LLM to typed output, and done.
But yes, sub vs. non-sub models are a very divisive topic. Personally, I would never subscribe to something like an offline, local todo list
- monthly subscription
- or pay a one-time fee of roughly a 6-month subscription and own it forever
To be honest, in this case the subscription is cheaper for the average user, because most cancel in under six months.
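That break-even works out like this (the prices are hypothetical, chosen only to match the "~6-month subscription" ratio above):

```python
monthly = 5.00           # hypothetical monthly subscription price
one_time = 6 * monthly   # one-time fee of ~ a 6-month subscription

def total_paid(months_subscribed):
    """What a subscriber pays before cancelling."""
    return months_subscribed * monthly

# Anyone cancelling in under six months pays less than the one-time
# fee, so if most users cancel early, the subscription is cheaper
# for them on average.
print(total_paid(4) < one_time)   # True
print(total_paid(9) < one_time)   # False
```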
qwertytyyuu•4h ago
lugarlugarlugar•4h ago
https://www.inkandswitch.com/essay/local-first/
embedding-shape•3h ago
But Ink&Switch rule regardless; I love what they're doing, and everyone would be better off doing "local-first" the way they suggest, don't get me wrong.