https://share.icloud.com/photos/018AYAPEm06ALXciiJAsLGyuA
https://share.icloud.com/photos/0f9IzuYQwmhLIcUIhIuDiudFw
The above took like 3 seconds to generate. That little box that says On-device can be flipped between On-device, Private Cloud Compute, and ChatGPT.
Their LLM runs on the ANE, sipping battery and leaving the GPU available.
There's a WWDC video "Meet the Foundation Models Framework" [1].
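For anyone curious, the developer-facing API from that session boils down to something like the sketch below. This assumes the LanguageModelSession API as shown in the video; the instruction and prompt strings are just placeholders, not anything Apple ships.

    import FoundationModels

    // Minimal sketch: one round trip to the on-device system model.
    func summarize(_ text: String) async throws -> String {
        // A session holds conversation context; instructions steer the model's behavior.
        let session = LanguageModelSession(
            instructions: "Summarize the user's text in two sentences."
        )
        // respond(to:) is async and can throw, e.g. if the model is unavailable.
        let response = try await session.respond(to: text)
        return response.content
    }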
Why give this to developers if you haven’t been able to get Siri to use it yet? Does it not work or something? I guess we’ll find out when devs start trying to build things with it.
Apple is probably trying to distill the models so they can run locally on your phone. Remember, most, if not all, of Siri runs on your device; there's no round trip at all for voice processing.
Also, for larger models, there will be throwaway VMs per request, so building that infra takes time.
What exactly are you referring to? Models do run on iPhone and there are features that take advantage of it, today.
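Concretely, apps can check whether the on-device model is usable before exposing a feature. A rough sketch, assuming the availability API from the Foundation Models framework (the printed strings are illustrative, not Apple's):

    import FoundationModels

    // Sketch: gate an AI feature on whether the on-device model can run here.
    let model = SystemLanguageModel.default

    switch model.availability {
    case .available:
        // Apple Intelligence is enabled and the model assets are present.
        print("On-device model ready")
    case .unavailable(let reason):
        // e.g. unsupported hardware, Apple Intelligence turned off, or model still downloading.
        print("On-device model unavailable: \(reason)")
    }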
> We do not use our users’ private personal data or user interactions when training our foundation models. Additionally, we take steps to apply filters to remove certain categories of personally identifiable information and to exclude profanity and unsafe material.
> Further, we continue to follow best practices for ethical web crawling, including following widely-adopted robots.txt protocols to allow web publishers to opt out of their content being used to train Apple’s generative foundation models. Web publishers have fine-grained controls over which pages Applebot can see and how they are used while still appearing in search results within Siri and Spotlight.
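For publishers, the opt-out described above is plain robots.txt: as Apple documents it, Applebot handles indexing for Siri and Spotlight, while the separate Applebot-Extended token controls whether crawled content may be used for model training. A sketch (the Disallow scope is whatever the publisher chooses):

    # Keep pages indexable for Siri and Spotlight search results
    User-agent: Applebot
    Allow: /

    # Opt the whole site out of use in training Apple's generative models
    User-agent: Applebot-Extended
    Disallow: /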
Respect.
Most of his team are former Google Brain, so GDM knows who is good.