# Explain Edge detection in detail. What are the most commonly used edge detection methods?

Medium Last updated on May 7, 2022, 1:33 a.m.

Edge detection is an image-processing technique that aims to identify sharp changes in the brightness of an image. These discontinuities often mark the start or end of a pattern, and such sudden changes in pixel intensity give rise to edges. The motive for detecting and measuring these discontinuities is to recognize interest points, which can then be used to solve a larger objective. These discontinuities can take various forms, as shown in figure 1 below.

The mathematical way of detecting these sharp changes in a function is calculus: using derivatives, we can easily locate the maxima and minima of a function.

Before we dive into the details, we need to understand how to measure the change of intensity in an image with the help of derivatives. An image consists of pixel intensity values; let's take a grayscale image as an example, so the image is a 2-D function $f(x,y)$. Therefore,

• The gradient of the image is $\nabla f = [\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}]$

• The gradient direction is $\theta = \tan^{-1}\left(\frac{\partial f}{\partial y} \big/ \frac{\partial f}{\partial x}\right)$

• The edge strength is given by the gradient magnitude $\sqrt{(\frac{\partial f}{\partial x})^2+(\frac{\partial f}{\partial y})^2}$

We can compute $\frac{\partial f}{\partial x}$ as:

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$
For discrete data, we can approximate this with finite differences, $\frac{\partial f}{\partial x} \approx \frac{f(x+1)-f(x)}{1}$, which is simply a subtraction of neighboring intensities and can be implemented efficiently with a $[-1, 1]$ filter.
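As a quick illustration (a minimal NumPy sketch, not from the original text), the $[-1, 1]$ filter applied to a 1-D row of pixels is just a forward difference:

```python
import numpy as np

# A tiny grayscale "row" with a step edge between indices 2 and 3.
f = np.array([10, 10, 10, 200, 200, 200], dtype=float)

# np.convolve flips its kernel, so convolving with [1, -1] correlates
# with [-1, 1] and yields the forward difference f(x+1) - f(x).
df_dx = np.convolve(f, [1, -1], mode="valid")
print(df_dx)  # a single large response marks the edge
```

The flat regions produce zero response, while the step produces one large value, which is exactly the behavior an edge detector needs.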

But before we use derivatives to identify the minima and maxima of an image, we need to denoise it to avoid responding to spurious intensity changes. A standard choice for denoising is a Gaussian kernel $g$, so our derivative becomes $\frac{\partial (f*g)}{\partial x}$.

Here, differentiation commutes with convolution (differentiation can itself be written as a convolution, and convolution is associative), so the operation reduces to:

$$\frac{\partial (f*g)}{\partial x} = f * \frac{\partial g}{\partial x}$$

As a result, we can differentiate the denoising filter (the Gaussian kernel) once, and then convolve the image with the resulting kernel to obtain its derivatives.

Based on this idea, a number of classic edge detection techniques were developed, such as:
1. Prewitt Filter
2. Sobel Filter
3. Canny Edge Detector, and
4. Laplacian Filter

1. Prewitt Filter: The Prewitt operator is a differentiation operator; it works by approximating the gradient of the image intensity function. Prewitt remains popular in modern applications because it handles complex images reasonably well while requiring very little computation.
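The standard 3×3 Prewitt kernels can be written out and applied directly; the minimal NumPy correlation helper below is my own illustration, not part of the original text:

```python
import numpy as np

# Standard Prewitt kernels: a [-1, 0, 1] difference in one direction,
# averaged uniformly in the other.
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
prewitt_y = prewitt_x.T

def correlate2d(img, k):
    """Minimal 'valid'-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

# A vertical step edge: prewitt_x responds strongly, prewitt_y not at all.
img = np.zeros((5, 6)); img[:, 3:] = 1.0
gx = correlate2d(img, prewitt_x)
gy = correlate2d(img, prewitt_y)
print(np.abs(gx).max(), np.abs(gy).max())
```

The directional selectivity is the point: each kernel responds only to intensity changes across its own axis.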

2. Sobel Edge Detection: Sobel edge detection uses a filter that puts extra emphasis on the center of each horizontal or vertical edge it detects. This makes the algorithm effective at reducing noise while still distinguishing edges. In the case of Sobel, we calculate two derivatives: one for the horizontal changes and the other for the vertical changes. If the image in consideration is $I$, the two derivatives are calculated as follows:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I$$

By combining both of the above results, we can approximate the gradient magnitude as $G = |G_x| + |G_y|$, a cheaper alternative to the exact $\sqrt{G_x^2 + G_y^2}$.
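Putting the pieces together, here is a small NumPy sketch (my own illustration) that applies the standard Sobel kernels to a toy image and combines the responses with $G = |G_x| + |G_y|$:

```python
import numpy as np

# Standard Sobel kernels: like Prewitt, but the center row/column of the
# differencing direction is weighted by 2.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def grad(img, k):
    # 'valid'-mode 2-D correlation, NumPy only
    h, w = img.shape
    return np.array([[np.sum(img[i:i+3, j:j+3] * k)
                      for j in range(w - 2)] for i in range(h - 2)])

# A bright square on a dark background; |Gx| + |Gy| highlights its border.
img = np.zeros((7, 7)); img[2:5, 2:5] = 1.0
G = np.abs(grad(img, sobel_x)) + np.abs(grad(img, sobel_y))
print(G)
```

The response is zero in the flat interior and positive all along the square's boundary, which is the edge map we are after.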

3. Canny Edge Detector: Canny edge detection is also a widely used edge-detection methodology that has applications across multiple domains, due to its robustness and flexibility.

There are four main steps to processing edges with Canny:

• Noise Reduction: Denoise the image using a blur filter.

• Calculating Intensity Gradient: After the image has been blurred (smoothed), Canny edge detection applies a Sobel kernel in both the horizontal and the vertical direction. The results of these filtering operations are then used to compute the intensity gradient magnitude $G = \sqrt{G_x^2 + G_y^2}$ and direction $\theta = \tan^{-1}\left(\frac{G_y}{G_x}\right)$ for every pixel in the image.

• Suppressing False Edges using non-maximum suppression: The aim here is to filter out pixels that do not actually constitute an edge. To accomplish this, every pixel is compared with its neighbors in both the positive and the negative gradient direction. If the gradient magnitude of the pixel under examination is less than that of either neighbor, it is set to zero; if it is the local maximum, it remains unchanged.

• Hysteresis Thresholding: In hysteresis thresholding, we use two threshold values and compare the gradient magnitudes against them. Pixels with a gradient magnitude above the higher threshold are marked as strong edges. Pixels below the lower threshold are excluded from the final result. Pixels that fall between the two thresholds are kept only if they are connected to a strong-edge pixel.
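The double-threshold step above can be sketched as follows; the function name and the toy gradient map are my own, and a real implementation would run this after non-maximum suppression:

```python
import numpy as np

def hysteresis(G, low, high):
    """Classify pixels as strong/weak/suppressed, then keep weak pixels
    only if they connect (8-neighborhood) to a strong pixel."""
    strong = G >= high
    weak = (G >= low) & (G < high)
    keep = strong.copy()
    changed = True
    while changed:                          # grow strong edges into weak pixels
        changed = False
        padded = np.pad(keep, 1)            # pads with False
        dilated = np.zeros_like(keep)
        for di in (0, 1, 2):                # union of all 8 shifts + center
            for dj in (0, 1, 2):
                dilated |= padded[di:di+keep.shape[0], dj:dj+keep.shape[1]]
        new_keep = keep | (weak & dilated)
        if (new_keep != keep).any():
            keep, changed = new_keep, True
    return keep

# Toy gradient map: one strong pixel (0.9) with weak pixels (0.6) below it.
G = np.array([[0.0, 0.2, 0.9, 0.2, 0.0],
              [0.0, 0.0, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.0, 0.0]])
kept = hysteresis(G, low=0.5, high=0.8)
print(kept.astype(int))
```

The weak pixels survive only because they chain back to the strong one; the isolated 0.2 responses fall below the low threshold and are discarded.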

4. Laplacian Edge Detection: The Laplacian method is a second-order derivative filter that measures the rate at which the first derivatives change. It is particularly good at finding the fine details of an image, and any feature with a sharp discontinuity will be enhanced by a Laplacian operator.

Like the first-derivative filters above, a second-derivative measurement on the image is very sensitive to noise, so we again denoise with a Gaussian kernel. The combined operator is therefore the second derivative of a Gaussian kernel, commonly known as the Laplacian of Gaussian (LoG). Intuitively, the outcome of LoG over an image is:

• a zero pixel value far from any edge,
• a positive pixel value on one side of an edge,
• a negative pixel value on the other side of the edge.
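These sign properties can be observed directly with a 1-D sketch (the kernel sampling below is my own illustration): the LoG response to a step edge is positive on one side, negative on the other, and crosses zero at the edge itself.

```python
import numpy as np

# Sample the analytic second derivative of a Gaussian (1-D LoG).
sigma = 1.5
x = np.arange(-6, 7, dtype=float)
g = np.exp(-x**2 / (2 * sigma**2))
log_kernel = (x**2 / sigma**4 - 1 / sigma**2) * g

# A step edge at index 20: the LoG response changes sign across it.
step = np.concatenate([np.zeros(20), np.ones(20)])
r = np.convolve(step, log_kernel, mode="same")
print(r[15], r[24])  # opposite signs on either side of the edge
```

In 2-D, edge locations are recovered by finding exactly these zero crossings of the LoG response.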