Introduction to OpenCV with Python part II

This is the second part of a tutorial introducing some basics of OpenCV, with problems proposed by my digital image processing engineering class.

A histogram is a way of representing numerical data based on how it is distributed across a range.

For images, a histogram represents how the pixels are distributed by their colors, giving a probabilistic idea of how frequent certain colors are in the image. Histograms can be used in automatic image segmentation, movement detection and granulometry, for example.

In OpenCV with Python we have a built-in function to do the job for us:

cv2.calcHist([image], channels, mask, histSize, ranges[, hist[, accumulate]])

OpenCV also has a function to equalize an image, called equalizeHist, which takes only the image to be equalized as a parameter. So, by calling the two functions on an image, we can calculate the histogram of the original picture and then of the equalized one.
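
A minimal sketch of those two calls (the file name lenna.png is just an assumption):

import cv2

# Load the test image already in gray scale (file name is an assumption)
img = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)

# Histogram of the original image: one channel, no mask, 256 bins over [0, 256)
hist_original = cv2.calcHist([img], [0], None, [256], [0, 256])

# Equalize the image and compute the histogram of the result
equalized = cv2.equalizeHist(img)
hist_equalized = cv2.calcHist([equalized], [0], None, [256], [0, 256])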

Original Lenna gray-scaled

Proceeding with the code, we may want to see what the histograms look like for both pictures. The histogram calculation made by calcHist() does not return an image; instead it returns plottable data. To plot the histograms we use matplotlib.
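
With matplotlib, plotting both histograms from the previous sketch takes only a few lines:

from matplotlib import pyplot as plt

# One curve per histogram, colors matching the figure below
plt.plot(hist_original, color="blue", label="original")
plt.plot(hist_equalized, color="red", label="equalized")
plt.xlabel("pixel intensity")
plt.ylabel("number of pixels")
plt.legend()
plt.show()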

As a result, we see how the pixels get more spread out across the range.

Representation of the original histogram (blue) and the equalized histogram (red)

Let's now suppose that we have a continuous capture of a scene and we keep calculating the histogram of the frames. If the histogram changes, it means something new showed up; if the new histogram is very different, we can make assumptions or set off an alarm.

Although we are going to use the last program as a base, for this problem we will introduce webcam capture to acquire images continuously. Like all the functions used so far, OpenCV has a built-in class for this, called VideoCapture. The object we create can then read frames from a camera, usually the webcam for tests.

To keep capturing frames from the camera, we loop over the camera read and calculate the histogram of each frame. This is not a scalable way to do it, since we are processing too many frames; we could improve it by processing, say, one frame in four. To keep the problem simple, it is done in gray scale, converting the camera image with the OpenCV function cvtColor.

The numerical difference adopted was 2000, based on the OpenCV example: not a drastic change, but enough to indicate that something clearly changed. So, if the difference gets bigger than 2000, we act! To make the change visible, the frame's border is set to all white. While the difference stays above the chosen maximum, the frames are shown with white borders.

We need to release the camera object once we are done capturing, so the camera closes properly and the execution does not crash.
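
Putting the pieces together, the whole capture-and-compare loop could be sketched as below; the way the histogram difference is measured here (summed absolute bin differences) is an assumption, as is the white border width:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # 0 is usually the default webcam
previous_hist = None
THRESHOLD = 2000                   # difference adopted in the text

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Work in gray scale to keep the problem simple
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])

    if previous_hist is not None:
        # Summed absolute bin difference as the change measure (an assumption;
        # any histogram distance would do)
        diff = np.sum(np.abs(hist - previous_hist))
        if diff > THRESHOLD:
            # Something changed: paint a white border as a visible alarm
            gray = cv2.copyMakeBorder(gray[10:-10, 10:-10], 10, 10, 10, 10,
                                      cv2.BORDER_CONSTANT, value=255)

    previous_hist = hist
    cv2.imshow("camera", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()                      # close the camera properly
cv2.destroyAllWindows()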

Let's now introduce an important concept for processing digital images: digital convolution. Digital convolution can be defined as

(I ∗ K)(x, y) = Σᵢ Σⱼ I(x − i, y − j) · K(i, j),

where I(x, y) and K(x, y) are the image and the kernel, respectively. The kernel is a matrix, usually of odd, symmetrical dimensions (3x3, 5x5) with integer values. The values of the matrix determine how the image will look after the convolution operation.

The image above describes well how the convolution works: the chosen kernel is placed over each pixel of the image, and the sum of the products between the two matrices becomes a single pixel value in the new image. The mask then keeps moving and calculating new pixels until a whole new picture has been filled.
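
To make the sum-of-products idea concrete, here is a naive pure-NumPy sketch of the operation (border handling left out; the function name is just illustrative):

import numpy as np

def apply_kernel(image, kernel):
    # Naive sum-of-products filtering, ignoring the borders for simplicity.
    # Note: like cv2.filter2D, this slides the kernel without flipping it,
    # which is strictly a correlation rather than a convolution.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # One output pixel = sum of the element-wise products between
            # the kernel and the image patch currently under it
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out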

This process of convolution can be understood as filtering, with the kernels acting as filters. There are many established filters for different goals, such as edge detection, noise reduction, blur, and so on.

Now we are going to implement a Python program to filter an image, given some options chosen by the user.

The filters are defined as shown below, except for the "Gaussian Laplacian", which is going to be implemented by applying one filter to an already filtered image, as we will show further ahead. Also, the values need to be defined as type float, since the operations will involve floating point numbers.
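
The exact kernels from the original code are not reproduced here, but typical float-valued versions of the masks mentioned in this post look like this:

import numpy as np

# All masks as float32, since the filtering works with floating point values
mean = np.ones((3, 3), np.float32)                      # scaled by 1/9 below

vertical = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], np.float32)           # vertical edges (Sobel-like)

horizontal = np.array([[-1, -2, -1],
                       [ 0,  0,  0],
                       [ 1,  2,  1]], np.float32)       # horizontal edges

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], np.float32)          # Laplacian

gauss = (1 / 16) * np.array([[1, 2, 1],
                             [2, 4, 2],
                             [1, 2, 1]], np.float32)    # 3x3 Gaussian blur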

Some masks need some standard operations applied to them, while others don't. Here we show the mean mask being generated from a scale-add: the mean filter multiplied by 1/9 and summed with an all-zero matrix. The details of each mask's generation won't be treated in this tutorial, only its usage.
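
A minimal sketch of that generation, using cv2.scaleAdd (which computes src1 * alpha + src2):

import cv2
import numpy as np

mean_filter = np.ones((3, 3), np.float32)
zeros = np.zeros((3, 3), np.float32)

# mean_mask = mean_filter * (1/9) + zeros
mean_mask = cv2.scaleAdd(mean_filter, 1.0 / 9.0, zeros)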

Now that we have a mask, let's start manipulating images. We are going to define a boolean variable for the absolute-value option, present the menu to the user and load a gray-scaled image for tests. The image has to be converted to the same type as the mask for the operation to be possible.

Inside an infinite loop we can iterate over the loaded picture and choose which mask to pass over the image. To get started, we have already calculated the mean mask and set the absolute flag to True, so let's apply that and see the result.
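
Assuming the same lenna.png as before, a sketch of that setup and first pass could look like this:

import cv2
import numpy as np

absolute = True                                        # apply abs() to the result
img = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)
img = np.float32(img)                                  # same type as the mask

mask = cv2.scaleAdd(np.ones((3, 3), np.float32), 1.0 / 9.0,
                    np.zeros((3, 3), np.float32))      # start with the mean mask

while True:
    filtered = cv2.filter2D(img, -1, mask)             # -1 keeps the input depth
    if absolute:
        filtered = np.abs(filtered)

    # Back to 8-bit integers so the result can be shown as an image
    cv2.imshow("filtered", np.uint8(filtered))

    key = cv2.waitKey(30)
    if key == 27:                                      # Esc quits
        break

cv2.destroyAllWindows()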

Converting the matrix values from float back to int to generate an image, we get the following result:

Lenna in B&W with no filter
Lenna after mean mask and absolute

Now we handle the options chosen by the user. The waitKey function no longer just waits for any key, as it has so far in these tutorials; it compares the key pressed against specific values. Then, for each key, we select a new mask to apply with the filter2D function. Using Lenna's picture as usual, we find some interesting results.
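
Inside the loop of the previous sketch, the key handling could look like the fragment below; "v" and "a" follow the captions underneath, while the other key bindings are assumptions:

# ...inside the while loop of the previous sketch, after cv2.imshow():
    key = cv2.waitKey(30)
    if key == ord("a"):                # toggle the absolute value option
        absolute = not absolute
    elif key == ord("v"):              # vertical edge mask
        mask = vertical
    elif key == ord("h"):              # horizontal edge mask
        mask = horizontal
    elif key == ord("l"):              # Laplacian mask
        mask = laplacian
    elif key == ord("g"):              # Gaussian mask
        mask = gauss
    elif key == ord("m"):              # back to the mean mask
        mask = mean_mask
    elif key == 27:                    # Esc quits
        break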

Vertical mask with abs (originally True)
Vertical mask with no abs (Pressed “v” and “a”)
Horizontal with abs
Laplacian with abs
Laplacian with no abs

Now, to implement a combination of filters called the Gaussian Laplacian, we first apply the Gaussian filter through filter2D and then set the mask variable to the Laplacian filter, so the next filter2D call will apply it. This way we make sure the Laplacian is applied after the Gaussian.
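
Stripped of the menu logic, the chaining itself reduces to two filter2D calls; a self-contained sketch (file name assumed):

import cv2
import numpy as np

img = np.float32(cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE))

gauss = (1 / 16) * np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32)
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], np.float32)

# Blur first, then take the Laplacian of the blurred image
blurred = cv2.filter2D(img, -1, gauss)
log = cv2.filter2D(blurred, -1, laplacian)

# convertScaleAbs takes the absolute value and saturates to 8-bit;
# skipping the abs step gives the "no abs" variant shown in the figure
cv2.imshow("gaussian laplacian", cv2.convertScaleAbs(log))
cv2.waitKey(0)
cv2.destroyAllWindows()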

Gaussian Laplacian with no abs

This example used the Lenna picture, but it could have been done through the webcam, as mentioned before. Here is a quick adaptation of how it would look, making things more interesting by applying the masks in real time.

We would not need to load the Lenna picture before the loop, only the camera object. Inside the loop we then need to read the frames and convert them to gray scale in float32 format. That's it! Just don't forget to release the camera.
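
A minimal sketch of that adaptation, reduced to the mean mask for brevity:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
mask = cv2.scaleAdd(np.ones((3, 3), np.float32), 1.0 / 9.0,
                    np.zeros((3, 3), np.float32))      # mean mask as a default

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Gray scale in float32, the same format used for the still image
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    filtered = cv2.filter2D(gray, -1, mask)

    cv2.imshow("filtered camera", np.uint8(filtered))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()                                          # don't forget this!
cv2.destroyAllWindows()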
