Mobile phones and tablets are now powerful enough to take advantage of the awesome opportunities offered by machine learning and artificial intelligence.
Here are some of the things your app can do with deep learning:
And much more… There’s no limit to what’s possible when you combine the raw power of deep learning with the phone’s camera and its many other sensors.
Deep learning turns your mobile into a truly smart assistant.
Companies such as Google, Amazon, and Microsoft offer deep learning services in the cloud, but running deep learning directly on the user's device has real advantages: lower latency, no network dependency, and better privacy, all of which add up to a more seamless user experience.
Getting deep learning to work well on mobile comes with its own set of challenges:
Machine Learning by Tutorials explains how to get started with machine learning, written for developers who are already familiar with iOS development.
Core ML Survival Guide is for developers who are running into problems getting their models to work with Core ML — or who want to do advanced things that are not well documented elsewhere.
The MobileNet neural network architecture is designed to run efficiently on mobile devices. It’s a fast, accurate, and powerful feature extractor, and I recommend it over larger and slower architectures such as VGG-16, ResNet, and Inception.
Because MobileNet-based models are becoming ever more popular, I’ve created a source code library for iOS and macOS that has Metal-accelerated implementations of MobileNet V1 and V2.
This library makes it easy to add MobileNet-based neural networks into your apps, for tasks such as:
Because this library is written to take advantage of Metal, it is much faster than Core ML and TensorFlow Lite!
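For comparison, here is a minimal sketch of what running a MobileNet classifier looks like through Apple's standard Core ML route, using the Vision framework. The `MobileNetV2` class name is an assumption: Core ML generates a class like it for whatever `.mlmodel` file you add to your Xcode project (for example, the MobileNetV2 model from Apple's model gallery).

```swift
import CoreML
import Vision

// Classify a single image with a MobileNet model via Core ML + Vision.
// NOTE: `MobileNetV2` is a hypothetical name for the class Xcode
// auto-generates from the .mlmodel file you bundle with the app.
func classify(image: CGImage) throws {
    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        // Top-1 prediction, e.g. a label and a confidence between 0 and 1.
        print("\(best.identifier): \(best.confidence)")
    }
    // MobileNet expects a square input; let Vision crop and scale for us.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])
}
```

A Metal-based implementation skips this Vision/Core ML pipeline and drives the GPU directly, which is where the speed difference comes from.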
If you’re interested in using MobileNet in your app or as the backbone for a larger model, this library is the best way to get started.