IBM Watson Services for Core ML Tutorial


Version

  • Swift 4, iOS 11, Xcode 9

Have you been exploring the exciting possibilities of adding machine learning (ML) to your apps with Apple’s Core ML and Vision frameworks? Maybe you’ve used your own data to extend one of Apple’s Turi Create models. Give a big welcome to the newest player on the field: IBM Watson Services, now with Core ML!

Note: Core ML models are initially available only for visual recognition, but hopefully the other services will become Core ML-enabled, too.

In this tutorial, you’ll set up an IBM Watson account, train a custom visual recognition Watson service model, and set up an iOS app to use the exported Core ML model.

Watson Services

You’ll be using Watson Studio in this tutorial. It provides an easy, no-code environment for training ML models with your data.

The list of Watson services covers a range of data, knowledge, vision, speech, language and empathy ML models. You’ll get a closer look at these, once you’re logged into Watson Studio.

The really exciting possibility is building continuous learning into your app, indicated by this diagram from Apple’s IBM Watson Services for Core ML page:

This is getting closer to what Siri and Face ID do: continuous learning from user data, in your apps!

Is this really groundbreaking? Right now, if a Core ML model changes after the user installs your app, your app can download and compile a new model; it just needs some kind of notification to know an update exists. A bigger question is: why would the model change? Maybe because of a better training algorithm, but real improvements usually come from more data, and better still, data supplied by actual users.
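To make the first half of that concrete, here’s a minimal sketch of the download-and-compile step with plain Core ML, assuming you already have the URL of an updated .mlmodel file. The function name and file handling are placeholders, not Watson SDK code:

import CoreML
import Foundation

// Download an updated .mlmodel, compile it on-device, and load it.
// How you get the URL and where you persist the compiled model are up to you.
func fetchUpdatedModel(from remoteURL: URL, completion: @escaping (MLModel?) -> Void) {
  URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else {
      completion(nil)
      return
    }
    do {
      // Give the temporary download a .mlmodel extension before compiling.
      let modelURL = tempURL.deletingPathExtension().appendingPathExtension("mlmodel")
      try? FileManager.default.removeItem(at: modelURL)
      try FileManager.default.moveItem(at: tempURL, to: modelURL)
      // Compile the .mlmodel into the .mlmodelc format the device actually runs.
      let compiledURL = try MLModel.compileModel(at: modelURL)
      let model = try MLModel(contentsOf: compiledURL)
      completion(model)
    } catch {
      completion(nil)
    }
  }.resume()
}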

Even if you managed to collect user data, the workflow to retrain your model would be far from seamless. This is what could tip the balance in favor of Watson Services: the promise of easy — or at least, easier — integration of data collection, retraining and deployment. I’ll tell you more about this later.

Turi vs. Watson

Which should you use?

  • Turi and Watson both let you extend ML models for vision and language, but Watson exports Core ML only for visual recognition models.
  • Turi has an activity classifier, Watson doesn’t. Watson has Discovery, which sounds much more sophisticated than anything Turi has.
  • You need to write and run Python to use Turi to train models. Watson just needs your data to train the model.

Um Er … User Privacy?

The big deal about Core ML is that models run on the iOS device, enabling offline use and protecting the user’s data. The user’s data never leaves the device.

But when the user provides feedback on the accuracy of a Watson model’s predictions, your app is sending the user’s photos to IBM’s servers! Well, IBM has a state-of-the-art privacy-on-the-cloud policy. And no doubt Apple will add a new privacy key requirement, to let users opt into supplying their data to your model.

Getting Started

Carthage

Eventually, you’ll need the Carthage dependency manager to build the Watson Swift SDK, which contains all the Watson Services frameworks.

Install Carthage by downloading the latest Carthage.pkg from Carthage releases, and running it.

Or, if you prefer to use Homebrew to install Carthage, follow the instructions in the Carthage README.

IBM’s Sample Apps

From here, the roadmap can become a little confusing. I’ll provide direct links, but I’ll also tell you where to find those links on the multitude of pages, to help you find your way around when you go back later.

Start on Apple’s page: IBM Watson Services for Core ML. Scroll down to Getting Started, and Command-click the middle link, Start on GitHub, under Begin with Watson Starters. Command-click opens GitHub in a new tab: you want to keep the Apple page open, because it makes it easier to get back to the GitHub page, which in turn makes it easier to get back to the Watson Studio login page. Trust me ;]!

Download the zip file, and open the workspace QuickstartWorkspace.xcworkspace. This workspace contains two apps: Core ML Vision Simple and Core ML Vision Custom. The Simple app uses Core ML models to classify common DIY tools or plants. The Custom app uses a Core ML model downloaded from Watson Services. That’s the model you’re going to build in this tutorial!

Scroll down to the README section Running Core ML Vision Custom: the first step in Setting up is to log in to Watson Studio. Go ahead and click the link.

Signing Up & Logging In

After you’ve gotten into Watson once, you can skip down to that bottom right link, and just sign in. Assuming this is your first time, you’ll need to create an account.

Note: If you already have an IBM Cloud account, go ahead with the sign up for IBM Watson step.

OK, type in an email address, check the checkbox, and click Next. You’ll see a form:

Fill in the fields. To avoid frustration, know that the password requirements are:

Password must contain 8-31 characters with at least one upper-case, one lower-case, one number, and one special character ( - _ . @ )

The eyeball-with-key icon on the right reveals the password, so you can edit it to include the necessary oddities.

Check or uncheck the checkbox, then click Create Account. You’ll get a page telling you to check your mailbox, so do that, open the email, and confirm your address.

Don’t follow any links from the confirmation page! They tend to lead you away from where you want to be. Get back to that login page for Watson Studio, and click the link to sign up for IBM Watson.

Note: If Watson thinks you’re already logged in, it will skip the next step and continue with the one following.

IBMid??? Relax, your IBM Cloud login will get you in! Enter the email address you used, and click Continue. On the next page, enter your password, and Sign in:

Interesting. I was in Vancouver when I created this account, not in the US South. But each of the Watson Services is available only in certain regions, and US South gives you access to all of them. So keep that _us-south appendage, or add it, if it isn’t there.

Note: This is a critical step! Without the US South region in your organization, you won’t be able to access any Watson Services. If you don’t see this _us-south appendage, you might be using an old IBM account. Try creating a new account, with a different email address. If that doesn’t work, please let us know in the forum below.

Click Continue, and wait a short time, while a few different messages appear, and then you’re Done!

Clicking Get Started runs through more messages, and spins a while on this one:

And then you’re in!

Remember: in future, you can go straight to the bottom right link, and just sign in to Watson.

Look at the breadcrumbs: you’ve logged into Services / Watson Services / watson_vision_combined-dsx. This is because you clicked the login link on the GitHub page, and that specifies target=watson_vision_combined. You’ll explore Watson Services later, but for now, you’ll be building a custom object classification model on top of Watson’s object classifier. IBM’s sample uses four types of cables, but you can use your own training images. I’ll give detailed instructions when we reach that step.

Note: This is an important and useful page, but it’s easy to lose track of it, as you explore the Watson site. To get back to it, click the IBM Watson home button in the upper left corner, then scroll down to Watson Services to find its link.

Creating a Custom Object Classifier

OK, back to the GitHub page: Training the model. You’ll be following these instructions, more or less, and I’ll show you what you should be seeing, help you find everything, and provide a high-level summary of what you’re doing.

Here’s the first high-level summary. The steps to integrate a custom Watson object classifier into IBM’s sample app are:

  1. Create a new project.
  2. Upload and add training data to the project.
  3. Train the model.
  4. Copy the model’s classifierId and apiKey to String properties in the sample app.
  5. In the Core ML Vision Custom directory, use Carthage to download and build the Watson Swift SDK.
  6. Select the Core ML Vision Custom scheme, build, run and test.

1. Creating a New Watson Project

As with most IDEs, you start by creating a project. This one uses the Visual Recognition tool and the watson_vision_combined-dsx service. The “dsx” stands for Data Science Experience, the old name for Watson Studio. Like NS for NeXTSTEP. ;]

Click Create Model, and wait a very short time to load the New project page. Enter Custom Core ML for the Name, and click Create.

Note: Various Watson people pop up from time to time. I don’t think they’re really there, but let me know if you have an actual chat with any of them.

After a while, you’ll see this:

2. Adding Training Data

The Watson visual recognition model is pre-trained to recognize tools and plants. You can upload your own training data to create a model that classifies objects that matter to your app. IBM’s sample project trains a model with photos of cables, to classify new images as HDMI, Thunderbolt, VGA or USB.

Note: To upload your own training images, instead of cable images, organize your images into folders: the names of the images don’t matter, but the name of each folder should be the label of the class represented by the images in it. There’s a link to IBM’s guidelines for image data in the Resources section at the end of this tutorial. If you want your model to recognize your objects in different lighting, or from different angles, then supply several samples of each variation. Then zip up the folders, and upload them instead of the sample zipfiles.

Click the binary data icon to open a sidebar where you can add zipfiles of training images:

Click Browse, and navigate to the Training Images folder in the visual-recognition-coreml-master folder you downloaded from GitHub:

Select all four zipfiles, and click Choose, then wait for the files to upload.

Check all four checkboxes, then click the refined hamburger menu icon to see the Add selected to model option.

Select that, then watch and wait while the images get added to the model.

Note: When you upload your own training images, you can also upload images to the negative class: these would be images that don’t match any of the classes you want the model to recognize. Ideally, they would have features in common with your positive classes, for example, an iTunes card when you’re training a model to recognize iPods.

3. Training the Model

The model is ready for training whenever you add data. Training uses the same machine learning model that created the basic tools and plants classifier, but adds your new classes to what it’s able to recognize. Training a model of this size on your Mac would take a very long time.

Close the sidebar, and click Train Model.

Go get a drink and maybe a snack — this will probably take at least 5 minutes, maybe 10:

Success looks like this:

4. Adding the Model to Your App

The sample app already has all the VisualRecognition code to download, compile and use your new model. All you have to do is edit the apiKey and classifierId properties, so the app creates a VisualRecognition object from your model. Finding these values requires several clicks.

Note: For this step and the next, I think it’s easier if you just follow my instructions, and don’t look at the GitHub page.

Click the here link to see what the GitHub page calls the custom model overview page:

Click the Associated Service link (the GitHub page calls this your Visual Recognition instance name): you’re back at Services / Watson Services / watson_vision_combined-dsx! But scroll down to the bottom:

That’s the model you just trained!

Note: The GitHub page calls this the Visual Recognition instance overview page in Watson Studio.

Back to Xcode — remember Xcode? — open Core ML Vision Custom/ImageClassificationViewController.swift, and locate the classifierID property, below the outlets.

On the Watson page, click the Copy model ID link, then paste this value between the classifierID quotation marks, something like this, but your value will be different:

let classifierId = "DefaultCustomModel_1752360114"

Scroll up to the top of the Watson page, and select the Credentials tab:

Copy and paste the api_key value into the apiKey property, above classifierId:

let apiKey = "85e5c50b26b16d1e4ba6e5e3930c328ce0ad90cb"

Your value will be different.

These two values connect your app to the Watson model you just trained. The sample app contains code to update the model when the user taps the reload button.

One last edit: change version to today’s date in YYYY-MM-DD format:

let version = "2018-03-28"

The GitHub page doesn’t mention this, but the Watson Swift SDK GitHub repository README recommends it.
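For context, here’s roughly how the sample app uses these three values: apiKey and version go into the SDK’s VisualRecognition initializer, and classifierId identifies which custom model to download and query. This is just a sketch, assuming the pre-IAM api_key style of credentials shown above; property names in the actual sample may differ slightly:

import VisualRecognitionV3

let apiKey = "your-api-key"          // from the Credentials tab
let classifierId = "your-model-ID"   // from the Copy model ID link
let version = "2018-03-28"           // today's date, in YYYY-MM-DD format

// The client object that downloads, caches and queries your custom Watson model.
let visualRecognition = VisualRecognition(apiKey: apiKey, version: version)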

5. Building the Watson Swift SDK

The final magic happens by building the Watson Swift SDK in the app’s directory. This creates frameworks for all the Watson Services.

Open Terminal and navigate to the Core ML Vision Custom directory, the one that contains Cartfile. List the files, just to make sure:

cd <drag folder from finder>
ls

You should see something like this:

Audreys-MacBook-Pro-4:Core ML Vision Custom amt1$ ls
Cartfile			Core ML Vision Custom.xcodeproj
Core ML Vision Custom

Open the Core ML Vision Custom project in the Project navigator:

VisualRecognitionV3.framework is red, meaning it’s not there. You’re about to fix that!

Remember how you installed Carthage, at the start of this tutorial? Now you get to run this command:

carthage bootstrap --platform iOS

This takes around five minutes. Cloning swift-sdk takes a while, then downloading swift-sdk.framework takes another while. It should look something like this:

$ carthage bootstrap --platform iOS
*** No Cartfile.resolved found, updating dependencies
*** Fetching swift-sdk
*** Fetching Starscream
*** Fetching common-crypto-spm
*** Fetching zlib-spm
*** Checking out zlib-spm at "1.1.0"
*** Checking out Starscream at "3.0.4"
*** Checking out swift-sdk at "v0.23.1"
*** Checking out common-crypto-spm at "1.1.0"
*** xcodebuild output can be found in /var/folders/5k/0l8zvgnj6095_s00jpv6gxj80000gq/T/carthage-xcodebuild.lkW2sE.log
*** Downloading swift-sdk.framework binary at "v0.23.1"
*** Skipped building common-crypto-spm due to the error:
Dependency "common-crypto-spm" has no shared framework schemes for any of the platforms: iOS

If you believe this to be an error, please file an issue with the maintainers at https://github.com/daltoniam/common-crypto-spm/issues/new
*** Skipped building zlib-spm due to the error:
Dependency "zlib-spm" has no shared framework schemes for any of the platforms: iOS

If you believe this to be an error, please file an issue with the maintainers at https://github.com/daltoniam/zlib-spm/issues/new
*** Building scheme "Starscream" in Starscream.xcodeproj

Look in Finder to see what’s new:

A folder full of frameworks! One for each Watson Service, including the formerly missing VisualRecognitionV3.framework. And sure enough, there it is in the Project navigator:

Note: IBM recommends that you regularly download updates of the SDK so you stay in sync with any updates to this project.

6. Build, Run, Test

The moment of truth!

Select the Core ML Vision Custom scheme, then build and run, on an iOS device if possible. You’ll need to take photos of your cables to test the model, and it’s easier to feed these to the app if it’s running on the same device.

Note: To run the app on your device, open the target and, in the Bundle Identifier, replace com.ibm.watson.developer-cloud with something unique to you. Then in the Signing section, select a Team.

The app first compiles the model, which takes a little while:

Then it tells you the ID of the current model:

Note: If you get an error message about the model, tap the reload button to try again.

Tap the camera icon to add a test photo. The app then displays the model’s classification of the image:

The model isn’t always right: it kept insisting that my Thunderbolt cable was a USB, no matter what angle I took the photo from.

Note: I couldn’t see any obvious reason why you must add the apiKey and classifierId before you build the Watson Swift SDK, so I tried doing it the other way around. I downloaded a fresh copy of the sample code, and ran the carthage command in its Core ML Vision Custom directory: the output of the command looks the same as above, and the Carthage folder contents look the same. Then I added the apiKey and classifierId to the app, and built and ran it: the app didn’t download the model. Breakpoints in viewDidLoad() or viewWillAppear(_:) don’t fire! The app loads, you add a photo of a Thunderbolt cable, and it classifies it as a hoe handle or cap opener: it’s using the basic visual recognition model.

TL;DR: Follow the instructions in the order given!

Show Me the Code!

So the sample app works. Now what code do you need to include in your apps, to use your models?

Actually, IBM presents all the code very clearly in the Visual Recognition section of their Watson Swift SDK GitHub repository README. There’s no Core ML code! The Visual Recognition framework wraps the Core ML model, which in turn wraps Watson’s visual recognition model!
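For a flavor of what that README code boils down to, on-device classification is a single call, something like this sketch. Check the README for the exact signature in the SDK version you build; visualRecognition and classifierId are the values from earlier, and photo is assumed to be a UIImage from the camera:

visualRecognition.classifyWithLocalModel(
  image: photo,                    // the UIImage to classify
  classifierIDs: [classifierId],   // use your custom model
  threshold: 0.5,                  // drop low-confidence classes
  failure: { error in print(error) },
  success: { classifiedImages in
    // classifiedImages holds the class names and scores predicted
    // by the local Core ML copy of your Watson model.
    print(classifiedImages)
  }
)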

On top of that, I’ll add just this note:

Note: To automatically download the latest model, check for updates to the model by calling the VisualRecognition method updateLocalModel(classifierID:failure:success:) in viewDidLoad() or viewDidAppear(_:). It won’t download a model from Watson unless that model is a newer version of the local model.
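In the sample app’s view controller, that check could look something like this sketch. The success closure is assumed to take no arguments; adjust it to match the SDK’s generated interface:

override func viewDidLoad() {
  super.viewDidLoad()
  // Ask Watson whether a newer version of the classifier exists; the SDK
  // only downloads and recompiles the model if the server copy is newer.
  visualRecognition.updateLocalModel(
    classifierID: classifierId,
    failure: { error in print(error) },
    success: { print("Local model is up to date.") }
  )
}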

Updating the Model

What I wanted to do in this section is show you how to implement continuous learning in your app. It’s not covered in the GitHub example, and I haven’t gotten definitive answers to my questions. I’ll tell you as much as I know, or have guessed.

Directly From the App

You can send new data sets of positive and negative examples to your Watson project directly from your app using this VisualRecognition method:

public func updateClassifier(classifierID: String, positiveExamples: [VisualRecognitionV3.PositiveExample]? = default, negativeExamples: URL? = default, failure: ((Error) -> Swift.Void)? = default, success: @escaping (VisualRecognitionV3.Classifier) -> Swift.Void)
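Called from the app, that might look something like the sketch below. The PositiveExample initializer and the zip file location are assumptions for illustration; check the SDK’s generated interface for the exact shape:

// Hypothetical location where the app saved a zip of user-approved images.
let zipURL = FileManager.default.temporaryDirectory
  .appendingPathComponent("thunderbolt_new.zip")

// Assumed: PositiveExample pairs a class name with a zip of training images.
let newThunderboltExamples = PositiveExample(name: "thunderbolt", examples: zipURL)

visualRecognition.updateClassifier(
  classifierID: classifierId,
  positiveExamples: [newThunderboltExamples],
  failure: { error in print(error) },
  success: { classifier in
    // Watson starts retraining; the returned Classifier describes its status.
    print(classifier)
  }
)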

The documentation for non-Swift versions of this describes the parameters, but also announces:

Important: You can’t update a custom classifier with an API key for a Lite plan. To update a custom classifier on a Lite plan, create another service instance on a Standard plan and re-create your custom classifier.

To be on a Standard plan, you must hand over your credit card, and pay between US$0.002 and US$0.10 for each tagging, detection or training event.

Using Moderated User Feedback

But you shouldn’t send data directly from the app to the model unless you’re sure of the data’s correctness. Best practice for machine learning is to preprocess training data, to “fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated” — you know: garbage in, garbage out! So the idea of feeding uninspected data to your model is anathema.

Instead, you should enable the Data Preparation tool in the Tools section of your project’s Settings page:

Then your app should send positive and negative examples to a storage location, which you connect to your project as a data source. Back in Watson, you (or your ML experts) use the Data Refinery tool to cleanse the new data, before using it to train your model.

This information is from the IBM Watson Refine data documentation.

Watson Services

Curious about what else is available in Watson Services? Let’s take a look!

Find your Visual Recognition : watson_vision_combined-dsx page:

Command-click Watson Services to open this in a new tab:

Click Add Service to see the list:

Click a service’s Add link to see what’s available and for how much but, at this early stage, Watson generates Core ML models only for Visual Recognition. For the other services, your app must send requests to a model running on the Watson server. The Watson Swift SDK GitHub repository README contains sample code to do this.

Note: Remember, you can get back to your project page by clicking the IBM Watson home button in the upper left corner, then scrolling down to Watson Services to find its link.

Where to Go From Here?

There’s no finished project for you to download from here since it’s all IBM’s code and you can reproduce it yourself.

Resources

Further Reading

I hope you enjoyed this introduction to IBM Watson Services for Core ML. Please join the discussion below if you have any questions or comments.
