
How to Create an Image Classifier Using Qt, OpenCV and TensorFlow

[Updated this post on April 04, 2019, to make sure this tutorial is compatible with OpenCV 4.x and TensorFlow 2.0]

In this post we are going to learn how to create an image classifier application with a proper GUI that allows the user to choose a camera or a video file as the input and classify the incoming images (video or camera frames) in real time. We will be using the power of Qt for cross-platform GUI creation and anything related to visualizing the output and providing a seamless experience for the user. We will also be using TensorFlow and OpenCV to handle the actual classification task, along with accessing cameras and video files.

In recent versions of OpenCV, a new module was introduced for handling Deep Learning problems, and it is getting better and better with every release. This module is called "dnn" and I recommend getting the latest version of OpenCV (3.4.1 at the time of writing this article) to make sure you face no issues at all (or fewer issues if any). Using this new module, we can load and use deep learning models from popular 3rd party libraries such as TensorFlow, Caffe, DarkNet and so on. In our example project we will use the pre-trained TensorFlow model ssd_mobilenet_v1_coco; however, you can easily use other models too once you get a firm grasp on all the information provided here.



We will be doing everything described here on the Windows operating system, so we will cover the prerequisites for Windows. On other operating systems such as macOS or Linux though, the only major change is the compiler, which is provided as part of Xcode or GCC respectively. So, here is what you need on Windows:

• Microsoft Visual Studio 2017 (https://www.visualstudio.com)
• Qt5 (https://www.qt.io)
• OpenCV 3 (https://opencv.org)
• CMake (https://cmake.org)
• Python 64-bit (https://www.python.org)
• TensorFlow (https://www.tensorflow.org)

Make sure to install the latest version of all the dependencies mentioned here. At the time of writing this article, that would be Qt 5.10.1, OpenCV 3.4.1, CMake 3.10.2, Python 3.6.4 and TensorFlow 1.6.

[Update April 04, 2019: Qt5.12.2, OpenCV 4.0.1 and TensorFlow 2.0 will also work, as long as you take into account the updated notes such as this one, throughout this tutorial.]

If your OpenCV installation doesn't include 64-bit MSVC 15 libraries, then you need to build them from source yourself. You can search my website for detailed guides on how to do exactly that! Continue reading once you have all the dependencies in place.



We are going to create a Qt GUI application project that uses CMake. This is a good opportunity to learn this if you have never used CMake with Qt Creator before. It will also allow a much easier integration of the OpenCV libraries into our Qt project. So, start by creating a new project as seen in the following screenshot (select Plain C++ Application):

Press "Choose" and on the next screen make sure to name your project "Image Classifier" (or anything you prefer). Also select CMake as the "Build System" on the next screen:

qt choose cmake

After the project is created, replace all the contents of the CMakeLists.txt file with the following (the comments in the code are meant as an explanation of why each line exists at all):

# Specify the minimum version of CMake (3.1 is currently recommended by Qt)
cmake_minimum_required(VERSION 3.1)

# Specify project name
project(ImageClassifier)

# To automatically run MOC when building (Meta Object Compiler)
set(CMAKE_AUTOMOC ON)

# To automatically run UIC when building (User Interface Compiler)
set(CMAKE_AUTOUIC ON)

# To automatically run RCC when building (Resource Compiler)
set(CMAKE_AUTORCC ON)

# Specify OpenCV folder, and take care of dependencies and includes
set(OpenCV_DIR "path_to_opencv")
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

# Take care of Qt dependencies
find_package(Qt5 COMPONENTS Core Gui Widgets REQUIRED)

# Add required source, header, ui and resource files
add_executable(${PROJECT_NAME} "main.cpp" "mainwindow.h" "mainwindow.cpp" "mainwindow.ui")

# Link required libs
target_link_libraries(${PROJECT_NAME} Qt5::Core Qt5::Gui Qt5::Widgets ${OpenCV_LIBS})

You can also download the final CMakeLists.txt file from here:

http://amin-ahmadi.com/downloadfiles/qt-opencv-tensorflow/CMakeLists.txt

Just make sure to replace "path_to_opencv" with the actual path to your OpenCV installation. That would be the folder where the "OpenCVConfig.cmake" and "OpenCVConfig-version.cmake" files exist. Don't worry about the mainwindow entries, as they will be added later on.

Next, replace all the contents of "main.cpp" with the following:

#include "mainwindow.h"
#include <QApplication>

int main(int argc, char* argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    return a.exec();
}

Now, to add a Main Window to our application, select "New File or Project" from the "File" menu and then choose Qt Designer Form Class, as seen below:

qt new class

Make sure to select "MainWindow" on the next screen, as seen here:

qt designer class

We will design a GUI similar to the following:

qt creator

You can also get the "mainwindow.ui" file from here:

http://amin-ahmadi.com/downloadfiles/qt-opencv-tensorflow/mainwindow.ui

In the "mainwindow.h" file, you need to start by adding the required includes, as seen here:

// At a minimum, the private members and code below require these headers
// (see the downloadable mainwindow.h below for the full list of includes)
#include <QMainWindow>
#include <QGraphicsScene>
#include <QGraphicsPixmapItem>
#include "opencv2/opencv.hpp"

We will also need the following private members:

cv::dnn::Net tfNetwork;
QGraphicsScene scene;
QGraphicsPixmapItem pixmap;
bool videoStopped;

Here, tfNetwork is the Deep Learning Network class in OpenCV, scene and pixmap are used for displaying the output correctly, and finally videoStopped is used as a flag to stop the video. We are going to keep things as simple as possible here.

You can download the "mainwindow.h" file from here:

http://amin-ahmadi.com/downloadfiles/qt-opencv-tensorflow/mainwindow.h

"mainwindow.cpp" contains a lot of methods for dealing with user interactions, but the most important piece of code in it is the part responsible for loading the TensorFlow model and configurations, and then performing detections. Here is how it is done. First of all, the pre-trained model is loaded into the network (you'll see how and where to get the models later on):

tfNetwork = readNetFromTensorflow(ui->pbFileEdit->text().toStdString(), ui->pbtxtFileEdit->text().toStdString());

pbFileEdit and pbtxtFileEdit in this code are two Qt Line Edit widgets that hold the paths to the required files. The next step is loading a video or an existing camera on the computer. Using two radio buttons we can allow the users to switch between camera and video mode and then open the chosen one. Here's how:

VideoCapture video;
if (ui->cameraRadio->isChecked())
video.open(ui->cameraSpin->value());
else
video.open(ui->videoEdit->text().toStdString());

Here, cameraRadio is a Radio Button, cameraSpin is a Spin Box and videoEdit is a Line Edit widget. The next thing we need to do is loop while reading video frames and processing them, until the video ends or is stopped. Here's a simple solution for that:

Mat image;

while (!videoStopped && video.isOpened())
{
    video >> image;

    // Detect objects ...

    qApp->processEvents();
}

There are many different ways to achieve responsiveness in a GUI when performing such tasks, and this is one of them. The more recommended approach would be to move this part of the code into a QThread, but as mentioned before, we'll keep things simple. The actual detection part is done as seen below.

First, create a BLOB compatible with TensorFlow models:

Mat inputBlob = blobFromImage(image,
                              inScaleFactor,
                              Size(inWidth, inHeight),
                              Scalar(meanVal, meanVal, meanVal),
                              true,
                              false);

The values passed to blobFromImage are defined as constants and have to be provided by the network's provider (see the references section at the bottom to peek into where they come from). Here's what they are:

const int inWidth = 300;
const int inHeight = 300;
const float meanVal = 127.5; // 255 divided by 2
const float inScaleFactor = 1.0f / meanVal;

[Update April 04, 2019: If you are using OpenCV 4.x and TensorFlow 2.0, pass inScaleFactor a value of 0.95]

To actually provide the blob to the network and get the detection results, do the following:

tfNetwork.setInput(inputBlob);
Mat result = tfNetwork.forward();
Mat detections(result.size[2], result.size[3], CV_32F, result.ptr<float>());

In the preceding code we simply set the input of the network to the prepared blob, then calculate the result of the network using the forward() method, and finally create a detections Mat which has rows and columns equal to the third (height) and fourth (width) elements of Mat::size. See the following link, or the documentation for the imagesFromBlob function, if you feel lost:

https://docs.opencv.org/3.4.1/d6/d0f/group__dnn.html#ga4051b5fa2ed5f54b76c059a8625df9f5

The next part is extracting detections (based on an acceptable threshold), then getting the bounding boxes for objects, printing the name of the detected object class over them (from a previously loaded list of labels), and finally displaying the result:

for (int i = 0; i < detections.rows; i++)
{
    float confidence = detections.at<float>(i, 2);

    if (confidence > confidenceThreshold)
    {
        using namespace cv;

        int objectClass = (int)(detections.at<float>(i, 1));

        int left = static_cast<int>(
                    detections.at<float>(i, 3) * image.cols);
        int top = static_cast<int>(
                    detections.at<float>(i, 4) * image.rows);
        int right = static_cast<int>(
                    detections.at<float>(i, 5) * image.cols);
        int bottom = static_cast<int>(
                    detections.at<float>(i, 6) * image.rows);

        rectangle(image, Point(left, top),
                  Point(right, bottom), Scalar(0, 255, 0));
        String label = classNames[objectClass].toStdString();
        int baseLine = 0;
        Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX,
                                     0.5, 2, &baseLine);
        top = max(top, labelSize.height);
        rectangle(image, Point(left, top - labelSize.height),
                  Point(left + labelSize.width, top + baseLine),
                  Scalar(255, 255, 255), FILLED);
        putText(image, label, Point(left, top),
                FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 0));
    }
}

pixmap.setPixmap(
            QPixmap::fromImage(QImage(image.data,
                                      image.cols,
                                      image.rows,
                                      image.step,
                                      QImage::Format_RGB888).rgbSwapped()));
ui->videoView->fitInView(&pixmap, Qt::KeepAspectRatio);

You can download the mainwindow.cpp file from here:

http://amin-ahmadi.com/downloadfiles/qt-opencv-tensorflow/mainwindow.cpp

Our application is more than ready! But we still need to get and prepare a TensorFlow network, so let's move on to the next section.



First of all, start by downloading a pre-trained model from the TensorFlow model zoo:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

We will be using ssd_mobilenet_v1_coco, which you can directly download from here:

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz

[Update April 04, 2019: Or you can use a more recent version such as this one.]

Extract it to get the ssd_mobilenet_v1_coco_2017_11_17 folder with the pre-trained files.

You need to get the text graph file for the model, one that is compatible with OpenCV. To do this you need to use the following script:

opencv-source-files\samples\dnn\tf_text_graph_ssd.py

If you don't have the OpenCV source files (which would be quite unusual at this point), you can get the script from here:

https://github.com/opencv/opencv/blob/master/samples/dnn/tf_text_graph_ssd.py

Just copy it to the ssd_mobilenet_v1_coco_2017_11_17 folder and execute the following:

tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt

Update April 04, 2019: In newer versions of OpenCV, you also need to copy the tf_text_graph_common.py file, in addition to tf_text_graph_ssd.py, into the folder mentioned above and execute the following command:

tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --config pipeline.config

Class names for this model can be found here:

https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_label_map.pbtxt

This file won't be of use to us the way it is, so here's a simpler (CSV) format that I've prepared to use for showing class names when objects are detected:

http://amin-ahmadi.com/downloadfiles/qt-opencv-tensorflow/class-names.txt

Now we have everything we need to run and test our classification app in action.



Run the application in Qt Creator and switch to the Settings tab at the bottom of the screen. You can choose the input files on this page, as seen in the screenshot:

settings image classifier

Now switch back to the Controls tab and hit the Start button. Try objects that exist in the class names list and see what the results are. Here's what I got, which is pretty good if we ignore the fact that my iPad is not exactly a TV:

image classifier

The following pages, one way or another, helped me a lot while writing this guide:

https://github.com/opencv/opencv/tree/master/samples/dnn

https://www.tensorflow.org/tutorials/image_retraining

For more tutorials and a comprehensive guide to power up your cross-platform and computer vision application development skills, you can also get my book, "Computer Vision with OpenCV 3 and Qt5", from Amazon: