In the last post, we saw how convolutions could be used to sharpen an image by looking for areas of contrast. Using a similar intuition, we now consider ways to detect edges in an image. Edge detection is an important building block for image processing tasks like feature detection and extraction.

While many different methods and models exist for detecting edges, one simple approach is to look for areas of rapid change in pixel intensity, i.e., to look at derivatives of the image. The idea is that an edge is more likely to occur where there is a large, sudden change in contrast.

# First derivative

The intensity changes fastest where the magnitude of the first derivative is at a maximum. Since an image is a discrete grid of pixels rather than a continuous function, we cannot calculate a precise derivative at each point. Instead, we approximate it by taking a finite difference between neighbouring pixels, i.e.,

$$\frac{\partial f}{\partial x} \approx f(x+1, y) - f(x, y)$$
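As a quick sketch of this idea, the forward difference of a 1D scanline peaks exactly at a step edge. The pixel values below are invented for illustration:

```python
import numpy as np

# A 1D "scanline" with a step edge between indices 3 and 5.
signal = np.array([10, 10, 10, 10, 50, 90, 90, 90, 90], dtype=float)

# Forward finite difference: f'(x) ~ f(x+1) - f(x)
diff = signal[1:] - signal[:-1]

# The edge location is where |f'| peaks; flat regions give zero.
edge_index = int(np.argmax(np.abs(diff)))
```

Here `diff` is zero in the flat regions and large only across the step, which is exactly the behaviour the edge detector exploits.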

Since we are working with a 2D image, direction matters: a derivative can be taken along any direction. In practice, only the derivatives along the x and y axes are computed.

To emphasize edges in the vertical direction, we compute differences along the x axis, since intensity changes horizontally across a vertical edge. Similarly, to find edges along the horizontal direction, we compute differences along the y axis.

An example 3×3 kernel with uniform weights (*Prewitt kernel*) for the x direction might look like:

$$K_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}$$

To give further emphasis to the immediate neighbourhood of the pixel being considered, one could apply a larger weight to the difference in that pixel's own row (or column) than to the differences in the neighbouring ones.

For example (*Sobel kernel*):

$$K_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$$

In practice, the Sobel kernel often gives cleaner edge highlights than the Prewitt kernel.

**Original**

**3×3 Sobel kernel along x axis (vertical lines)**

**3×3 Sobel kernel along y axis (horizontal lines)**
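As an illustrative sketch (not a production implementation), the Sobel kernels can be applied with a small hand-rolled filtering helper in NumPy. The `filter2d` name and the toy image below are invented for this example; it computes a cross-correlation rather than a true convolution, which for these kernels only flips the sign of the response:

```python
import numpy as np

def filter2d(img, kernel):
    """Cross-correlate a 2D image with a kernel, using edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 100.0

gx = filter2d(img, sobel_x)   # responds to vertical edges
gy = filter2d(img, sobel_y)   # responds to horizontal edges
magnitude = np.hypot(gx, gy)  # combined edge strength
```

On this image, `gx` fires only on the two columns straddling the vertical edge, while `gy` is zero everywhere, since no row-to-row change exists.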

# Second derivative

A similar game can be played with the second derivative. The first derivative is at an extremum where the second derivative crosses zero, so edges correspond to zero-crossings of the second derivative. One mathematical operator that captures this in 2D is the Laplacian:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

This can be approximated using a finite difference method as:

$$\nabla^2 f \approx f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)$$

Translating this to a convolution kernel:

$$\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$

To account for diagonal directions as well, one could also use:

$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

**3×3 Laplacian kernel (x and y directions)**
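A minimal sketch of the zero-crossing behaviour, applying the x/y Laplacian stencil directly with array slicing (valid pixels only, no padding); the toy image is invented for illustration:

```python
import numpy as np

# Toy image with a single vertical step edge.
img = np.zeros((5, 6))
img[:, 3:] = 100.0

# Laplacian stencil applied at interior pixels:
# up + down + left + right - 4 * centre
out = (img[:-2, 1:-1] + img[2:, 1:-1]
       + img[1:-1, :-2] + img[1:-1, 2:]
       - 4 * img[1:-1, 1:-1])

# The response is positive on the dark side of the edge and negative
# on the bright side; the sign change (zero-crossing) marks the edge.
```

Unlike the first-derivative filters, the Laplacian produces a signed response on both sides of an edge, so an edge map is built by locating where the sign flips rather than by thresholding a peak.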

Which edge detector will perform better depends on the edge profile of your image. Derivative-based filters are sensitive to noise, so a de-noising step like a blur is often desirable somewhere in your processing pipeline.