Since Viola and Jones introduced their game-changing face detection algorithm in 2001, face detection has been advancing extremely fast. Similarly, face recognition has improved by leaps and bounds. Some of the standard methods found in the OpenCV API now lag behind the state of the art in terms of accuracy.
To make some of the more recent techniques easier to access for high-level developers, facetools abstracts away the details of the detection and recognition methods found in dlib. It wraps two face detection algorithms from dlib (including a state-of-the-art deep learning method), along with dlib's state-of-the-art deep learning face recogniser.
A similar face-searching tool called ‘facegrep’ (inspired by the Unix grep tool) is implemented on top of the facetools framework, for conveniently finding friends or family in your photo albums.
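At its core, a tool like facegrep presumably reduces to comparing face encodings. Here is a minimal sketch of that matching step in plain Python; the function names are mine, not facetools' API, and the ~0.6 threshold is the convention commonly used with dlib's 128-dimensional face encodings:

```python
from math import sqrt

def euclidean(a, b):
    """Euclidean distance between two equal-length face encodings."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_person(enc_a, enc_b, threshold=0.6):
    # dlib's deep face recogniser maps a face image to a 128-dimensional
    # vector; two faces are commonly treated as the same person when the
    # Euclidean distance between their encodings falls below roughly 0.6.
    return euclidean(enc_a, enc_b) < threshold

# toy 3-dimensional encodings, purely for illustration
assert is_same_person([0.1, 0.2, 0.3], [0.1, 0.25, 0.3])
assert not is_same_person([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

Searching a photo album then amounts to encoding every detected face and keeping the photos whose encodings fall within the threshold of a reference face.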
In the last post, we saw how convolutions could be used to sharpen an image by looking for areas of contrast. Using a similar intuition, we now consider ways to detect edges in an image. Detecting edges can be an important building block for image processing tasks like feature detection and extraction.
While many different methods and models exist for detecting edges, one simple way is to look for areas where pixel values change rapidly, i.e., to look at derivatives of the image. The idea is that an edge is more likely to occur where there are large, sudden changes in contrast.
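The derivative idea can be sketched in one dimension. The snippet below (my own illustration, not code from the post) approximates the derivative along a row of pixel intensities by differencing each pixel's two neighbours; large magnitudes mark likely edges:

```python
def horizontal_gradient(row):
    # Central-difference approximation of the derivative along a row of
    # pixel intensities: the difference of each pixel's two neighbours.
    # A large magnitude signals a rapid intensity change, i.e. a likely edge.
    return [row[i + 1] - row[i - 1] for i in range(1, len(row) - 1)]

row = [10, 10, 10, 200, 200, 200]   # a sharp step in brightness
print(horizontal_gradient(row))     # [0, 190, 190, 0]
```

The gradient is zero in the flat regions and spikes exactly around the step, which is what an edge detector looks for.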
Continue reading “Some common convolutions for image processing: edge detection”
This is the next post in our series of convolution filter examples. We examine one method of image sharpening using the unsharp mask (unsharp filter). You can think of sharpening as the opposite of blurring. The previous post showed how blurring was like taking an average: the process reduces the differences between neighbouring pixel values, causing a blurring effect.
Sharpening, on the other hand, emphasises differences between neighbouring pixel values, increasing the contrast between pixels.
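The unsharp mask idea can be written as sharpened = original + amount × (original − blurred): subtracting a blurred copy isolates the fine detail, and adding that detail back exaggerates local differences. A 1-D sketch under my own simplifying assumptions (real implementations work in 2-D and clamp results to the valid pixel range):

```python
def box_blur(row):
    # 3-tap moving average; border pixels left unchanged for brevity
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out

def unsharp_mask(row, amount=1.0):
    # sharpened = original + amount * (original - blurred):
    # adding back the lost detail exaggerates local differences
    blurred = box_blur(row)
    return [p + amount * (p - b) for p, b in zip(row, blurred)]

row = [10, 10, 200, 200]
sharp = unsharp_mask(row)
# the dark side of the step gets darker and the bright side brighter,
# so the contrast across the edge increases
```

In practice the overshoot is clipped to [0, 255] and a Gaussian blur is usually preferred over the box blur used here.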
Continue reading “Some common convolutions for image processing: sharpening”
In a previous post, we defined a special case of a convolution commonly seen in image processing applications, and gave some code to apply these filters to images.
Now that we’ve seen what a linear filter is, we will look at some examples of commonly used kernels in image processing for things like blurring and edge detection. For each, we will briefly discuss the rationale behind picking those particular kernels.
A combination of these different convolutions makes it possible to accomplish a surprising variety of tasks, including many in computer vision applications.
While some of these operations can be achieved using far more sophisticated approaches, applying a simple convolution often achieves pretty good results.
This post focuses on blurring in particular.
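The simplest blur is the box blur: each pixel is replaced by the average of its neighbourhood, which is a convolution with a kernel whose entries are all equal. A minimal 2-D sketch (my own illustration; border handling is deliberately naive):

```python
def box_blur_3x3(img):
    # Replace each interior pixel with the mean of its 3x3 neighbourhood,
    # i.e. convolve with a kernel whose nine entries are all 1/9.
    # Border pixels are left unchanged for brevity.
    h, w = len(img), len(img[0])
    out = [list(row) for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
# the single bright pixel is spread across its neighbourhood:
print(box_blur_3x3(img)[1][1])  # 1.0
```

Averaging shrinks the differences between neighbours, which is exactly the smoothing effect described above.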
Continue reading “Some common convolutions for image processing: blurring”
Many techniques in image processing involve transformations that make use of neighbouring information. One simple class of transformations just takes a linear combination of some fixed neighbours. This class of transformation can be used to perform operations like blurring, sharpening, and edge detection. A surprisingly large amount of image manipulation can be done effectively using a combination of these transformations. They are sometimes referred to as linear filters.
This post aims to lay out some of the theory behind linear filters used in image processing. The aim is not to be as general or abstract as possible with the ideas; rather, the focus is on implementation. Some Python code will be presented to illustrate how one can apply these filters.
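As a taste of what such code might look like (a minimal sketch of the idea, not the post's actual implementation), here is a generic linear filter: every output pixel is a fixed linear combination of the input pixel's neighbours, with the weights given by a small square kernel:

```python
def apply_kernel(img, kernel):
    # Linear filter: each output pixel is a linear combination of the
    # input pixel's neighbours, weighted by the kernel entries.
    # Border pixels are left unchanged for simplicity.
    k = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [list(row) for row in img]
    for y in range(k, h - k):
        for x in range(k, w - k):
            out[y][x] = sum(kernel[dy + k][dx + k] * img[y + dy][x + dx]
                            for dy in range(-k, k + 1)
                            for dx in range(-k, k + 1))
    return out

identity = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
assert apply_kernel(img, identity) == img  # identity kernel changes nothing
```

Swapping in different kernels turns this one function into a blur, a sharpener, or an edge detector. (Strictly speaking this computes a cross-correlation; for the symmetric kernels used in the later posts the distinction makes no difference.)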
Continue reading “An introduction to convolutions for image processing”