Amazon Rekognition Custom Labels for Bears

Ed Miller
Jul 25, 2023

In previous blogs about the Bearcam Companion application, I wrote about using Amazon Rekognition as a bear detector and automating it with a Lambda function. Rekognition saved me a lot of time in getting the application running. It can detect bears (and nearly 300 other objects), but it may not be a perfect fit for your application.

As you can see in the image above, sometimes Rekognition mislabels bears as dogs, kangaroos, people, etc. I suspect bears make up only a small portion of the training data, especially in all the various poses we see on the bearcam. These mislabels need to be fixed before users can identify the bears in the application.

I don’t want to spend so much time manually adjusting labels. For most machine learning workflows, the next step would be to fine-tune the model. You can essentially fine-tune Amazon Rekognition by using Custom Labels, either to make it better at detecting specific objects (like bears) or to train it to detect entirely new objects, such as your product or logo. It depends on your application’s needs.

Custom Labels

There are 6 steps for Amazon Rekognition Custom Labels:

  1. Create project
  2. Create dataset
  3. Label images
  4. Train model
  5. Evaluate
  6. Use model

Step 1: Create project

You can create a Custom Labels project in the AWS console:
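The same step can also be scripted with the AWS SDK. Here is a minimal sketch using boto3 (the project name is my own placeholder, and running it requires AWS credentials):

```python
def create_custom_labels_project(project_name: str) -> str:
    """Create a Rekognition Custom Labels project and return its ARN."""
    import boto3  # deferred import so the sketch can be read without AWS set up

    client = boto3.client("rekognition")
    response = client.create_project(ProjectName=project_name)
    return response["ProjectArn"]

# Example (placeholder name):
# project_arn = create_custom_labels_project("bearcam-bears")
```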

Step 2: Create dataset

Next, create a dataset. You can pre-split the dataset into train and test, or you can let Custom Labels do it for you.

There are several ways to import the images; in my case, I imported them as a manifest file. The manifest can come from SageMaker Ground Truth, or you can create your own. Since I’ve been collecting and updating image labels in Bearcam Companion, I generated my own manifest using a Python script to extract data from the DynamoDB Object and Image tables, with links to the images on S3. I uploaded the manifest file to S3 and provided the link in the form:
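For reference, each line of a Ground Truth-style object-detection manifest is a standalone JSON record. Here is a sketch of building one line with plain Python; the bucket name, image size, box coordinates, and job name are made-up examples, not values from my actual dataset:

```python
import json

def manifest_line(image_uri, width, height, boxes, label="bear"):
    """Build one JSON line of a Ground Truth-style object-detection manifest.

    `boxes` is a list of (left, top, box_width, box_height) tuples in pixels.
    """
    attr = "bounding-box"  # label attribute name; the metadata key must match
    entry = {
        "source-ref": image_uri,
        attr: {
            "image_size": [{"width": width, "height": height, "depth": 3}],
            "annotations": [
                {"class_id": 0, "left": l, "top": t, "width": w, "height": h}
                for (l, t, w, h) in boxes
            ],
        },
        f"{attr}-metadata": {
            "objects": [{"confidence": 1} for _ in boxes],
            "class-map": {"0": label},
            "type": "groundtruth/object-detection",
            "human-annotated": "yes",
            "creation-date": "2023-07-01T00:00:00",
            "job-name": "bearcam-labels",  # placeholder
        },
    }
    return json.dumps(entry)

line = manifest_line("s3://example-bucket/frames/cam1.jpg", 1280, 720,
                     [(100, 50, 200, 150)])
```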

I pre-split the dataset: 80% for training and 20% for testing. This resulted in 2,486 training images and 605 test images. Many images have multiple bears, which increases the total number of labels. You can view these in the console:
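Creating the train and test datasets from a manifest can also be done through the API. A sketch with boto3 (the bucket and key names are placeholders):

```python
def create_dataset_from_manifest(project_arn, dataset_type, bucket, manifest_key):
    """Create a TRAIN or TEST dataset from a manifest file on S3."""
    import boto3  # deferred import so the sketch can be read without AWS set up

    client = boto3.client("rekognition")
    response = client.create_dataset(
        ProjectArn=project_arn,
        DatasetType=dataset_type,  # "TRAIN" or "TEST"
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {"Bucket": bucket, "Name": manifest_key}
            }
        },
    )
    return response["DatasetArn"]

# Example (placeholder values):
# create_dataset_from_manifest(project_arn, "TRAIN",
#                              "example-bucket", "manifests/train.manifest")
```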

Step 3: Label images

My manifest file already included labels from Bearcam Companion. If your data is not labeled, you can add labels directly in the Custom Labels interface. You can also use Ground Truth for larger labeling jobs and farm them out to a broader workforce. I did find some errors in my dataset, like in the image below where the bear on the left is missing a bounding box. I adjusted these labels directly in Custom Labels:

Step 4: Train model

When the dataset is ready, you can start training. This may take several hours. When completed, the model is ready to run. But first, let’s look at the model performance:
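Training can be kicked off from code as well. A sketch with boto3, blocking until training finishes (the version name, bucket, and prefix are placeholders):

```python
def train_model(project_arn, version_name, output_bucket, output_prefix):
    """Start training a model version and wait until it completes.

    Training can take several hours, so the waiter polls for a long time.
    """
    import boto3  # deferred import so the sketch can be read without AWS set up

    client = boto3.client("rekognition")
    response = client.create_project_version(
        ProjectArn=project_arn,
        VersionName=version_name,
        OutputConfig={"S3Bucket": output_bucket, "S3KeyPrefix": output_prefix},
    )
    waiter = client.get_waiter("project_version_training_completed")
    waiter.wait(ProjectArn=project_arn, VersionNames=[version_name])
    return response["ProjectVersionArn"]
```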

Step 5: Evaluate

Clicking on the model link will bring you to the evaluation page. Here you will find per-label performance metrics, such as F1 score, precision, and recall, along with the assumed threshold used for evaluation. In my model there’s only one label, bears, but you may have others.

View the test results to see performance on each image in the test set. You can quickly filter on errors to see where your model may need improvement.
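The same evaluation metrics are available programmatically. A sketch that fetches the overall F1 score for a trained model version with boto3:

```python
def model_f1(project_arn, version_name):
    """Fetch the overall F1 score for a trained Custom Labels model version."""
    import boto3  # deferred import so the sketch can be read without AWS set up

    client = boto3.client("rekognition")
    response = client.describe_project_versions(
        ProjectArn=project_arn, VersionNames=[version_name]
    )
    description = response["ProjectVersionDescriptions"][0]
    return description["EvaluationResult"]["F1Score"]
```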

Here is a false negative example where the model missed one of the bears (correct in green; incorrect in red):

Here is a false positive example where the model labeled branches in the water as a bear:

You may even find errors in the test set, like this image where the boxes in red should actually be correct.

You can fix the labels and retrain the model.

Step 6: Use model

Once you are happy with the model, you can start (and stop) it in the console:

Example code for the model is provided. You can choose how many instances of your model to run; more instances increase throughput. Keep in mind that you are charged by instance-hours: the number of instances running multiplied by the hours they run.
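In code, running inference involves starting the model, calling detection, and stopping the model when done. A sketch with boto3 (bucket and key names are placeholders):

```python
def start_model(model_arn, inference_units=1):
    """Start the model; billing accrues until stop_project_version is called."""
    import boto3  # deferred import so the sketch can be read without AWS set up

    client = boto3.client("rekognition")
    client.start_project_version(
        ProjectVersionArn=model_arn, MinInferenceUnits=inference_units
    )

def detect_bears(model_arn, bucket, image_key, min_confidence=50):
    """Run inference on one S3 image with a started Custom Labels model."""
    import boto3

    client = boto3.client("rekognition")
    response = client.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": image_key}},
        MinConfidence=min_confidence,
    )
    return response["CustomLabels"]

def stop_model(model_arn):
    """Stop the model to stop the instance-hour charges."""
    import boto3

    client = boto3.client("rekognition")
    client.stop_project_version(ProjectVersionArn=model_arn)
```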

Conclusion

In this post, we learned how to create a specialized detector using Amazon Rekognition Custom Labels. We did this all in the console (other than creating our dataset). The process is simple and cost effective for many applications.

It turns out Custom Labels may not be a good fit for Bearcam Companion. We only need about 12 inferences per hour, and I’m not sure we can spin Custom Labels up and down frequently enough to keep costs down. For our low usage, we’d prefer an on-demand model for bear detection. So one of the next projects is to train a bear detector using Amazon SageMaker.
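To make the cost concern concrete, here is the back-of-the-envelope arithmetic. The hourly rate below is an illustrative placeholder, not the actual Custom Labels price; check current AWS pricing:

```python
# Illustrative only: an assumed per-instance-hour rate, NOT actual AWS pricing.
ASSUMED_RATE_PER_INSTANCE_HOUR = 4.00  # USD, placeholder

instances = 1
hours_per_day = 24   # model left running around the clock
days_per_month = 30

monthly_cost = (instances * hours_per_day * days_per_month
                * ASSUMED_RATE_PER_INSTANCE_HOUR)

# At only 12 inferences per hour, each inference carries a big share
# of the always-on instance cost.
inferences_per_hour = 12
cost_per_inference = monthly_cost / (
    inferences_per_hour * hours_per_day * days_per_month
)
```

Under these assumed numbers, an always-on model costs the full instance rate regardless of how few inferences it serves, which is why an on-demand approach looks more attractive at low volume.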

We’ll save that for another time.

For more on Amazon Rekognition and Custom Labels, check out my AWS ML Heroes in 15: Amazon Rekognition for Wildlife Conservation talk from August 4, 2023.

Originally published at https://dev.to on July 25, 2023.

Written by Ed Miller

Senior Principal Engineer @ Arm | Director/Developer @ BearID Project | AWS Machine Learning Hero | Views are my own
