Hoping I can run it on-device with something like iOS CoreML.
Before I get started, it doesn't hurt to ask whether anyone has done this before or has a sound model that detects patterns such as anger, excitement, arousal, laughing, and crying (human emotions in general).
I found this: https://medium.com/@narner/classification-of-sound-files-on-ios-with-the-soundanalysis-framework-and-esc-10-coreml-model-3a5154db903f
But maybe there is a better way or something already exists.
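For context, here is roughly what I understand the SoundAnalysis approach from that article to look like, just a minimal Swift sketch. `EmotionClassifier` is a placeholder name for a CoreML sound classifier I'd still need to train or find:

```swift
import SoundAnalysis
import CoreML

// Observer that receives classification results from the analyzer.
class EmotionResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Detected \(top.identifier) with confidence \(top.confidence)")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("Analysis failed: \(error.localizedDescription)")
    }
}

// Analyze a recorded audio file with a custom CoreML sound classifier.
// "EmotionClassifier" is hypothetical; it would need to be trained on emotion labels.
func analyze(fileURL: URL) throws {
    let model = try EmotionClassifier(configuration: MLModelConfiguration()).model
    let request = try SNClassifySoundRequest(mlModel: model)
    let analyzer = try SNAudioFileAnalyzer(url: fileURL)
    let observer = EmotionResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // synchronous; use analyze(completionHandler:) for async work
}
```

So the plumbing seems straightforward; the real question is whether a model trained on these emotion classes already exists.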
Thanks