Help!? The output of my Core ML model is wrong…

26 July 2017 · 3 minutes to read

This is a question that I’ve seen asked multiple times over the past weeks on Stack Overflow, the Apple Developer Forums, and various Slack groups.

Usually it involves Core ML models that take images as input.

When you’re using Core ML, it is often not enough to just put your image into a CVPixelBuffer object. Even using Vision to drive Core ML won’t fix this issue.

The problem is that there is no standard format in which deep learning models expect their images. So you need to tell Core ML how to preprocess the image to convert it into the format your model understands.

A CVPixelBuffer usually contains pixels in RGBA format where each color channel is 8 bits. That means the pixel values in this image are between 0 and 255.

Note: You can construct CVPixelBuffers using different pixel formats too, but RGBA is the most common. And by that I also mean ARGB, BGRA, and ABGR. These are all 32-bit formats where each color channel takes up 8 bits. If you’re using grayscale images, you need a CVPixelBuffer with format kCVPixelFormatType_OneComponent8.

But your model may not actually expect pixel values between 0 and 255. Here are some common options: pixel values between 0 and 1, pixel values between -1 and +1, or pixels that have the mean RGB value of the training set subtracted from them.

For grayscale images, it’s important to know what value is considered black and what value is considered white. I’ve seen models where 0 is black and 1 is white, and others where 1 is black and 0 is white.

You need to tell Core ML about the pixel values used by your model.

If your model expects pixel values in a different range than 0 – 255, then you need to tell Core ML so it can convert the CVPixelBuffer into the right format.

You do this in the Python script that converts the model.

For example, when you convert from Caffe or Keras you can pass the following options to coremltools.converters.caffe.convert() and coremltools.converters.keras.convert(): image_scale, red_bias, green_bias, blue_bias, gray_bias, and is_bgr.
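To make this concrete, a Keras conversion could look something like the sketch below. The file name and input name are placeholders for your own model, and the preprocessing values are the ones for a model that expects pixels between -1 and +1 (more about these values in the examples that follow):

import coremltools

coreml_model = coremltools.converters.keras.convert(
    'my_model.h5',                # placeholder path to your trained Keras model
    input_names='image',
    image_input_names='image',    # treat this input as an image instead of an MLMultiArray
    image_scale=2/255.0,
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0)

coreml_model.save('MyModel.mlmodel')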

It’s very important that you pass in appropriate values for these options! With the wrong settings, coremltools will create a .mlmodel file that will interpret your input images wrongly. And then the model will produce outputs that don’t make sense.

Some examples:

If your model expects values in the range 0 – 1, you should set:

image_scale=1/255.0

If your model expects values in the range -1 to +1, you should set:

image_scale=2/255.0
red_bias=-1
green_bias=-1
blue_bias=-1

If your model was trained on the ImageNet dataset, you will probably need to subtract the mean RGB values:

red_bias=-123.68
green_bias=-116.78
blue_bias=-103.94
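And to come back to the grayscale case from earlier: the converters also have a gray_bias option for single-channel inputs. If your model expects 0 for black and 1 for white, image_scale=1/255.0 is all you need. If it expects 1 for black and 0 for white, you should be able to flip the values with a negative scale, something like:

image_scale=-1/255.0
gray_bias=1.0

(A white input pixel of 255 becomes -1 + 1 = 0, and a black pixel of 0 becomes 0 + 1 = 1.)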

Note that the scaling happens before the bias is added: Core ML computes image_scale * pixel + bias for each color channel. So if your model subtracts the mean before scaling, you need to multiply your red/green/blue_bias values by image_scale as well.
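As a quick sanity check of that order of operations, here is a tiny bit of Python (plain arithmetic, nothing Core ML specific) that plugs the -1 to +1 settings from above into image_scale * pixel + bias:

image_scale = 2/255.0
bias = -1.0

print(image_scale * 0 + bias)    # black input pixel -> -1.0
print(image_scale * 255 + bias)  # white input pixel ->  1.0

# If you combined the ImageNet means with image_scale=1/255.0, the red bias
# would have to be -123.68 * (1/255.0), and likewise for green and blue.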

For Caffe models you can also specify the path to your 'mean.binaryproto' file (if you have one of those) that contains the average RGB values. You would use this instead of red/green/blue_bias.
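If I remember the Caffe converter's API correctly, you pass the mean image as the third element of a tuple, along with the .caffemodel and .prototxt files. A sketch, with placeholder file names:

import coremltools

coreml_model = coremltools.converters.caffe.convert(
    ('model.caffemodel', 'deploy.prototxt', 'mean.binaryproto'),
    image_input_names='data',    # assumes the network's input blob is called 'data'
    is_bgr=True)                 # many Caffe models expect BGR channel order

coreml_model.save('MyModel.mlmodel')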

Conclusion: If you did not train the model yourself, but you’re using a pretrained model that you downloaded from the web, you should try to find out what sort of preprocessing is done on the images before they go into the first neural network layer. You need to make Core ML do the exact same preprocessing, otherwise the model will be working on data it does not understand — and that results in wrong predictions.

Written by Matthijs Hollemans.
First published on Wednesday, 26 July 2017.

If you liked this post, say hi on Twitter @mhollemans or by email matt@machinethink.net.

Core ML Survival Guide

This guide is a massive collection of tips and tricks for working with Core ML and mlmodel files. For simple tasks Core ML is very easy to use… but what do you do when Core ML is giving you trouble? The solution is most likely in this 350+ page book! It contains pretty much everything I learned about Core ML over the past few years. Check it out at Leanpub.com

Machine Learning by Tutorials

Are you an iOS developer looking to get into the exciting field of machine learning? We wrote this book for you! Learn how machine learning models perform their magic and how you can take advantage of ML to make your mobile apps better. Plenty of real-world example projects, a bit of theory, not a lot of math. Get the book at raywenderlich.com

Get started faster with my source code library

I’ve recently created a source code library for iOS and macOS that has fast Metal-based implementations of MobileNet V1 and V2, as well as SSDLite and DeepLabv3+.

This library makes it easy to put MobileNet models into your apps — as a classifier, for object detection, for semantic segmentation, or as a feature extractor that’s part of a custom model.

Because this library is written to take advantage of Metal, it is much faster than Core ML and TensorFlow Lite! If you’re interested in using MobileNet in your app, then this library is the best way to get started. Learn more

Want to add machine learning to your app?

Let me help! I can assist with the design of your model, train it, or integrate it into your app. If you already have a model, I can optimize it to make it suitable for use on mobile devices. Read more about my services