Sabina Pokhrel

Real-Time Vehicle Detection with MobileNet SSD and Xailient

Updated: Aug 26



From vehicle counting and smart parking systems to Advanced Driver Assistance Systems (ADAS), the demand for detecting cars, buses, and motorbikes is increasing, and vehicle detection will soon be as common an application as face detection.


And of course, these models need to run in real time to be usable in most real-world applications: who would rely on a driver assistance system that cannot detect the cars in front of the vehicle while driving?


In this post, I will show you how you can implement your own car detector using pre-trained models that are available for download: MobileNet SSD and Xailient Car Detector.

Before diving deep into the implementation, let’s get a bit familiar with these models. Feel free to skip straight to the code and results if you wish.


MobileNet SSD

MobileNet is a lightweight deep neural network architecture designed for mobile and embedded vision applications.


MobileNet architecture [1]

In many real-world applications, such as self-driving cars, recognition tasks need to be carried out in a timely fashion on a computationally limited device. MobileNet was developed in 2017 to fulfil this requirement.


The core layers of MobileNet are built on depth-wise separable convolutions. The first layer, which is a full convolution, is the exception.
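To see why depth-wise separable convolutions are cheaper, compare parameter counts. Here is a minimal sketch; the layer sizes below are illustrative, not taken from the paper:

```python
# A standard convolution with a Dk x Dk kernel, M input channels and
# N output channels needs Dk*Dk*M*N weights. MobileNet splits this into
# a depth-wise conv (Dk*Dk*M weights) followed by a 1x1 pointwise conv
# (M*N weights), which is what makes the network so light.

def standard_conv_params(dk: int, m: int, n: int) -> int:
    return dk * dk * m * n

def depthwise_separable_params(dk: int, m: int, n: int) -> int:
    return dk * dk * m + m * n

# Illustrative layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)        # 18432 weights
sep = depthwise_separable_params(3, 32, 64)  # 2336 weights
print(std, sep, round(sep / std, 3))         # roughly an 8x reduction
```

The reduction factor is 1/N + 1/Dk², so the savings grow with the number of output channels.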


To learn further about MobileNet, please refer to the paper.


Around the same time (2016), SSD (Single Shot Detector) was also developed by a Google research team to cater to the need for models that can run in real time on embedded devices without a significant trade-off in accuracy.


SSD Architecture [2]

Single Shot object detection, or SSD, takes a single shot to detect multiple objects within an image. The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes.


It’s composed of two parts:

1. Extract feature maps, and

2. Apply convolution filters to detect objects
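That fixed-size collection of boxes and scores is what the network hands back on every forward pass. With OpenCV's Caffe SSD models, the output is an array of shape [1, 1, N, 7], where each row is [image_id, class_id, confidence, x1, y1, x2, y2] with box coordinates normalised to [0, 1]. A sketch of filtering it, using a synthetic detections array for illustration (class index 7 is "car" in the model's PASCAL VOC label list):

```python
import numpy as np

# Synthetic forward-pass output, for illustration only. Each row:
# [image_id, class_id, confidence, x1, y1, x2, y2]
detections = np.array([[[
    [0, 7, 0.92, 0.10, 0.20, 0.45, 0.75],   # a confident "car" detection
    [0, 7, 0.15, 0.50, 0.50, 0.55, 0.55],   # a low-confidence detection
]]])

def filter_detections(detections, width, height, conf_threshold=0.5):
    """Keep boxes above the confidence threshold, scaled to pixel coordinates."""
    boxes = []
    for det in detections[0, 0]:
        confidence = float(det[2])
        if confidence < conf_threshold:
            continue
        # Scale the normalised corners up to the frame size.
        x1, y1, x2, y2 = det[3:7] * np.array([width, height, width, height])
        boxes.append((int(det[1]), confidence, int(x1), int(y1), int(x2), int(y2)))
    return boxes

print(filter_detections(detections, 640, 480))
# [(7, 0.92, 64, 96, 288, 360)]
```

The low-confidence row is dropped, and the surviving box is ready to draw on a 640x480 frame.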


SSD is designed to be independent of the base network, so it can run on top of any base network such as VGG, YOLO, or MobileNet.


In the original paper, Wei Liu and team used the VGG-16 network as the base to extract feature maps.


To learn further about SSD, please refer to the paper.


To further tackle the practical limitations of running high-resource, power-consuming neural networks on low-end devices in real-time applications, MobileNet was integrated into the SSD framework. When MobileNet is used as the base network in SSD, it becomes MobileNet SSD.


MobileNet SSD overview [7]

The MobileNet SSD model was first trained on the COCO dataset and then fine-tuned on PASCAL VOC, reaching 72.7% mAP (mean average precision).
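mAP is the mean, over all object classes, of each class's average precision, i.e. the area under its precision-recall curve. A minimal sketch of the 11-point interpolated AP used by the PASCAL VOC 2007 protocol; the precision/recall values below are made up for illustration:

```python
import numpy as np

def voc11_ap(recall, precision):
    """11-point interpolated average precision (PASCAL VOC 2007 style).

    For each recall threshold t in {0.0, 0.1, ..., 1.0}, take the maximum
    precision observed at any recall >= t, then average the 11 values.
    """
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    points = []
    for t in [i / 10 for i in range(11)]:
        mask = recall >= t
        points.append(precision[mask].max() if mask.any() else 0.0)
    return float(np.mean(points))

# Toy precision-recall curve for a single class.
recall    = [0.1, 0.3, 0.5, 0.7, 0.9]
precision = [1.0, 0.9, 0.8, 0.6, 0.4]
print(round(voc11_ap(recall, precision), 3))  # 0.673
```

Averaging this AP over the 20 VOC classes gives the single mAP number reported above.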


MobileNet SSD for Real-Time Car Detection


Step 1: Download pre-trained MobileNetSSD Caffe model and prototxt.


We’ll use a pre-trained MobileNet SSD model downloaded from https://github.com/chuanqi305/MobileNet-SSD/ that was trained with the Caffe-SSD framework.


Download the pre-trained MobileNet SSD model weights and prototxt:

MobileNetSSD_deploy.caffemodel

MobileNetSSD_deploy.prototxt


Step 2: Implement Code to use MobileNet SSD



Because we want to use it in a real-time application, let’s calculate the frames it processes per second as well.

(Parts of this code are inspired by the PyImageSearch blog.)
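The code for this step is embedded as an image in the original post; here is a minimal sketch of what it does, assuming OpenCV's `cv2.dnn` module. The 300x300 input size, the 0.007843 scale factor, and the 127.5 mean follow the chuanqi305 MobileNet-SSD conventions; `detect_cars` and `fps` are illustrative names, not from the original code:

```python
import time

def detect_cars(video_path, prototxt="MobileNetSSD_deploy.prototxt",
                model="MobileNetSSD_deploy.caffemodel", conf_threshold=0.5):
    """Run MobileNet SSD over a video and report frames processed per second."""
    import cv2  # imported here so the sketch can be read without OpenCV installed

    CAR_CLASS = 7  # index of "car" in the model's PASCAL VOC label list
    net = cv2.dnn.readNetFromCaffe(prototxt, model)
    cap = cv2.VideoCapture(video_path)

    frames, start = 0, time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Preprocess exactly as the model expects: 300x300, scaled, mean-subtracted.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        for det in detections[0, 0]:
            if int(det[1]) == CAR_CLASS and det[2] > conf_threshold:
                x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        frames += 1
    cap.release()
    return fps(frames, time.time() - start)

def fps(frames, elapsed_seconds):
    """Frames processed per second; the metric reported in the experiments below."""
    return frames / elapsed_seconds if elapsed_seconds > 0 else 0.0
```

`fps` is just total frames over wall-clock time, so printing `detect_cars("cars.mp4")` would report the inference speed of whatever device you run it on.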


Experiments:


I ran the above code on two different devices:

1. On my dev machine, a Lenovo Yoga 920 running Ubuntu 18.04.

2. On a low-cost, resource-constrained device, a Raspberry Pi 3B+ running Raspbian Buster.


Results:

MobileNet SSD Results


On my dev machine, the Lenovo Yoga, MobileNet SSD reached an inference speed of 23.3 FPS; on the Raspberry Pi 3B+, the inference speed was 0.9 FPS, using all 4 cores.

Pretty dramatic. This experiment shows that if you have a powerful device to run MobileNet SSD on, it performs well and will meet the real-time requirement. But if your application is targeted at a computationally limited IoT/embedded device such as the Raspberry Pi, it does not seem to be a good fit for a real-time application.


Xailient

The Xailient model uses a selective attention approach to perform detection, inspired by the working mechanism of the human eye.

Xailient models are optimized to run on low power devices that are memory and resource-constrained. 


Now let’s see how the Xailient pre-trained car detector performs.


Xailient Car Detector for Real-time Car Detection


Step 1: Download the pre-trained car detector model.

We’ll use Xailient’s pre-trained car detector model, downloaded from console.xailient.com.


Step 2: Implement Code to use the Xailient Car Detector model


Experiments:

I ran the above code on the same two devices:

1. On my dev machine, a Lenovo Yoga 920 running Ubuntu 18.04.

2. On a low-cost, resource-constrained device, a Raspberry Pi 3B+ running Raspbian Buster.


Results:

Xailient Car Detection Results

On the dev machine, there is a slight improvement in inference speed with the Xailient Car Detector, even when only 1 core is used. On the Raspberry Pi, however, Xailient processes 8x more frames per second with a single core.


Summarizing the results of both models:

MobileNetSSD vs Xailient

The video I used for this experiment was downloaded from Pexels.com.


In this post, we looked at the need for real-time detection models and briefly introduced MobileNet, SSD, MobileNet SSD, and Xailient, all of which were developed to solve the same challenge: running detection models on low-powered, resource-constrained IoT/embedded devices with the right balance of speed and accuracy. We used the pre-trained MobileNet SSD and Xailient car detector models and ran experiments on two separate devices: a dev machine and a low-cost IoT device. The results show a slight speed improvement of the Xailient car detector over MobileNet SSD on the dev machine, and a significant improvement on the low-cost IoT device, even when only 1 core was used.



If you want to extend your car detection application to car tracking and speed estimation, here’s a very good blog post by PyImageSearch.


References

1. Howard, Andrew G., et al. "Mobilenets: Efficient convolutional neural networks for mobile vision applications." arXiv preprint arXiv:1704.04861 (2017).

2. Liu, Wei, et al. "SSD: Single shot multibox detector." European conference on computer vision. Springer, Cham, 2016.

3. https://www.pyimagesearch.com/2019/12/02/opencv-vehicle-detection-tracking-and-speed-estimation/

4. https://honingds.com/blog/ssd-single-shot-object-detection-mobilenet-opencv/

5. https://github.com/chuanqi305/MobileNet-SSD

6. https://mc.ai/object-detection-with-ssd-and-mobilenet/

7. https://machinethink.net/blog/mobilenet-v2/#:~:text=SSD%20is%20designed%20to%20be,detection%20portion%20of%20the%20network.

