Vision System for Industrial Robots

 
11. Software: Find_And_Classify_Objects module
Update #10196  |  30 Apr 2015
 

Find_And_Classify_Objects application


The main tasks of this module are:

  •  Extract objects from the background
  •  Allow the user to configure the finding-quality settings
  •  Determine objects' image position and orientation
  •  Allow the user to configure the model object
  •  Classify objects
     

How to use this module to achieve good results?


The module uses data from the Background_Area_Configuration [1] class, so it is necessary to calibrate the background settings first. The next step is to hit “Re-calibrate” and tune the finding quality using the sensitivity and size settings. To do this correctly, place the objects you will use and move the sensitivity scrollbar until only the objects are visible on the black-and-white screen (objects are marked in white and the rest of the screen should be black). You may change the contrast parameter to get better results. To get rid of noise, adjust the size settings; their values should be close to the areas of your smallest and biggest objects. After this, the program is ready to determine the objects' image positions.

For object orientation the user has three options: no orientation, the rectangle method, and the PCA method. The first one simply turns orientation detection off. The rectangle algorithm was written by me; it is designed for cubic objects and optimized for an industrial robot. The PCA method uses OpenCV functions, and I recommend it for shapes other than squares. Simply test all the possibilities, choose the one that fits your expectations, and head to the next step.

If you wish to use the system for classification, a model object has to be specified. For auto-calibration, remove all objects except the model one and then hit the “Get new model” button. The application will then gather information about the object, and your job is to set the size and color offsets.

A tracking mode is also available; it was created for position measurement of mobile robots. If you will use the system as a cyclic measuring tool for mobile robots, turn this option on; if you only wish to locate objects once and then send their position data to an industrial robot, turn tracking mode off. I will say more about this mode in an update dedicated to mobile robot position measurement.

Another available option is the "Only model objects" mode, which ignores objects other than the model. If you wish to use this mode, remember to calibrate and test the model object settings first using the "All objects" mode.

After all configuration is done and the results are satisfactory, simply hit the “Calibrate” and then the “Save settings” button. Below you can see the result of a successful calibration with a model object specified.


Find_And_Classify_Objects - setting model

 


Find and Classify Objects - results with rectangle orientation method

 


Find and Classify Objects - results with rectangle orientation method (only model objects mode)

 

Methodology - how does Find_And_Classify_Objects work?


Find_Objects and Classify_Objects are the key functions of this module; they are responsible for image processing and for filling three data containers:

  •  std::vector<cv::Point2f> Objects_Mass_Centers – positions of the objects' mass centers located in the image
  •  std::vector<int> Objects_Orientation – orientation data of the objects
  •  std::vector<bool> Objects_Validation – results of the classification functions

Before I describe how they work, I would like to show a block diagram of the image processing:


Find And Classify Objects - image processing diagram

 

The static background image comes from the Background_Area_Configuration [1] module, and the current camera frame is delivered by Camera_Image [2]. Image conversion is done using cv::cvtColor [3]; then I use cv::absdiff [4] for background subtraction. This leaves gray shapes in the result wherever objects differ in color from the background. Then, using cv::Mat::convertTo [5], I increase the contrast to make the gray shapes brighter for better filtering results. Filtering is done with the cv::inRange [6] function, but this time it uses only one variable, connected to the sensitivity scrollbar in the user interface. The value of this variable describes how much image pixels may differ from the background in the result image (the perfect setting is one where only our objects pass), and a binary image is created. After that I look for the objects' contours and calculate their mass centers with cv::findContours [4] and cv::moments [4] (the same process as described in the background module update) and put them in a vector that will be shared with other classes to calculate the real positions.

The next step is to set the model object parameters. Auto model setting is done using my function Get_Color, which takes a 10x10-pixel square from the center of the object and calculates the average RGB values. The function uses cv::Vec3b [5] and cv::Mat::at [5] to read the pixel colors. Then I determine the size of the model object with cv::contourArea [4] and assign the retrieved data to the variables holding the model information.

Classification is performed using the model information and the actual readings for each object found in the image. If an object's parameters are close enough to the model's (how close they have to be is determined by the offset settings), it gets a true value in the vector holding the objects' validation information. Good objects also get green contours and bad objects red contours on the result screen for easy recognition.
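The offset check itself is a simple tolerance comparison; a minimal sketch (the struct layout and names below are my assumptions, not the module's real code):

```cpp
#include <cmath>

// Per-object measurement: contour area plus mean color channels.
struct ObjectInfo { double area; double r, g, b; };

// An object is classified "good" when both its size and each color
// channel stay within the user-configured offsets of the model object.
bool matchesModel(const ObjectInfo& obj, const ObjectInfo& model,
                  double sizeOffset, double colorOffset)
{
    return std::abs(obj.area - model.area) <= sizeOffset &&
           std::abs(obj.r - model.r) <= colorOffset &&
           std::abs(obj.g - model.g) <= colorOffset &&
           std::abs(obj.b - model.b) <= colorOffset;
}
```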

After classification is done, I determine the orientation of the found objects. The first method is Calculate_Orientation_PCA, which simply uses the cv::PCA [6] class and mathematical operations to draw lines and fill the variable responsible for orientation. The second method is my function Calculate_Orientation_Rect, made specifically for cubic objects. To get the orientation, I fit a rectangle to the object's contour using cv::RotatedRect [5] and cv::minAreaRect [4], and then perform mathematical calculations on the corner positions of the fitted rectangle to determine the object's orientation. The method is optimized for industrial robot operations: the angle is limited to values from -45 to 45 degrees, plus or minus the rotation of the local base relative to the image base.
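The angle limiting relies on a square's 90-degree symmetry: any raw angle from the fitted rectangle can be folded into [-45, 45] before the local-base rotation is added. A sketch of that folding step (the helper name is illustrative; the module's actual corner-based math may differ):

```cpp
#include <cmath>

// Fold a raw rectangle angle (degrees) into [-45, 45] using the
// 90-degree rotational symmetry of a square, then add the rotation of
// the local base relative to the image base.
double foldSquareAngle(double rawAngleDeg, double baseRotationDeg)
{
    double a = std::fmod(rawAngleDeg, 90.0);  // exploit 90° symmetry
    if (a > 45.0)  a -= 90.0;
    if (a < -45.0) a += 90.0;
    return a + baseRotationDeg;
}
```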

The data containers mentioned at the start will be shared with the position calculation module to calculate the objects' real positions in the local base (module under construction). The objects' validation information will also be placed in the communication frame so the robot can decide what to do with good and bad objects.

 

References


[1] Background_Area_Configuration information: 
http://challenge.toradex.com/projects/10165-vision-system-for-industrial-robots/updates/10195-10-software-background_area_configuration-module

[2] Camera_Image information:
http://challenge.toradex.com/projects/10165-vision-system-for-industrial-robots/updates/10191-9-software-camera_image-and-camera_configuration-modules

[3] cv::cvtColor documentation:
http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html

[4] cv::absdiff, cv::contourArea, cv::minAreaRect documentation:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html

[5] cv::Mat::convertTo, cv::Vec3b, cv::Mat::at, cv::RotatedRect documentation: 
http://docs.opencv.org/2.4.9/modules/core/doc/basic_structures.html

[6] cv::PCA documentation: 
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html

 
