Is there actually any hard data out there comparing the NPU on the Google Tensor G4 vs the Apple A18? I wasn't able to quickly find anything concrete.
I mean, Apple has been shipping mobile NPUs for longer than Google (Apple: since the A11 in 2017; Google: since 2021), and the A18 is built on an ostensibly smaller silicon node than the G4 (G4: Samsung SF4P vs. A18: TSMC N3E). However, the G4 appears to have more RAM bandwidth (68.26 GB/s vs. 60 GB/s on the A18).
In fact, there's no clear indication of when Apple Intelligence is running on-device versus in Apple's Private Cloud Compute.
I am telling you this because I read between the lines that you believe current technology is a reason for you to be hopeful. Sure, it should be. But never forget: your child can do much more than you as a sighted person will ever be able to understand. Don't let them drown in your own misery. Let them discover what they can do. You will be surprised what they come up with. And don't fall for Gear Acquisition Syndrome. Sure, tools are nice, and they do get better, which is also nice. I LOVE vision models, to stay on topic somehow. However, I still leave my house with only a cane and my phone in my pocket. I do occasionally ask Siri "Where am I?" to get an address if I happen to have forgotten where exactly I am at the moment. But at the end of the day, my cane is what shows me the way. Most tech is hype; plain old hearing and your sense of touch get you much farther than you might think.
Wish you all the best for your own journey, and the development of your child.
I'd like to make a private Qwen or similar for my kids to prompt with a button and voice control. It doesn't need vision... Although eventually that'd be very cool.
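For what it's worth, the non-vision part of that is already fairly easy to wire up: a small script that forwards a transcribed voice prompt to a locally hosted Qwen model. Here's a rough sketch using Ollama's OpenAI-compatible HTTP endpoint; the URL, model tag, and system prompt are all assumptions, so adjust to whatever you actually run:

```python
import json
import urllib.request

# Assumed local setup: Ollama serving an OpenAI-compatible chat API
# on its default port, with a Qwen model already pulled. Both the
# endpoint path and the model tag are placeholders for illustration.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "qwen2.5:3b"


def build_chat_request(prompt: str) -> dict:
    """Build the JSON body for one child-friendly prompt."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "system",
                "content": "You are a friendly assistant for a child. "
                           "Answer briefly and simply.",
            },
            {"role": "user", "content": prompt},
        ],
    }


def ask(prompt: str) -> str:
    """Send the prompt to the local model and return its reply text.

    Requires a running Ollama server with the model available;
    otherwise urlopen will raise a connection error.
    """
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # In the real setup this string would come from on-device
    # speech-to-text triggered by the button press.
    print(ask("Why is the sky blue?"))
```

The button-and-microphone half is platform-specific (a Raspberry Pi GPIO pin plus any speech-to-text engine would do), but the model side really is just one HTTP call.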
Siri just sucks.
We might not be there yet...
Opened an issue for them to confirm this: https://github.com/apple/ml-fastvlm/issues/7
BryanLegend•3h ago
efnx•3h ago
static_void•1h ago