My bank also started offering a chatbot through WhatsApp to let you query transactions etc, but it hallucinated stuff so I’ve stopped that too.
Safari’s reader mode has a summary option I use. If I want something a little more verbose, I go to Kagi’s summary tool. Kagi also has an option to let me ask questions about the article. I find all these things pretty valuable.
I’ve also used the option in YouTube to ask questions about a video. Sometimes it sucks, as it doesn’t seem to actually watch/understand the video… but there are times it has saved me 40 minutes by answering the question posed in the title of the video that I wanted the answer to. Other times it gives me the timestamp to the point in the video that matters. It still feels very beta, but has some promise, especially with all these long rambling videos people make these days.
I’ve occasionally used Apple’s proofreading feature as well. It worked decently well.
If we are speaking more broadly, the ML/AI behind looking up subjects in the Photos app is very useful. I also see a lot of people using it to pull subjects out of pictures. It’s not generative AI, but it’s used by a lot of consumers every day. The same can be said for even more invisible forms, like the photo processing on most smartphones.
My dad is really into bird photography in his retirement, and his camera can recognize a bird and focus on its eye. I assume that’s using some kind of AI, <insert xkcd 1425 here>.