Mean Filters

University of Texas at San Antonio
College of Engineering
EE 4623 Digital Filtering
Project #3: Develop a program that will implement non-linear filters
Adriana Juarez
December 3, 2002

Abstract: The purpose of this project is to develop a program that implements non-linear filters. For this project we will research the mean filter and the median filter.

Introduction: The idea of this project is to generate an image, corrupt it with different types of noise, and then run the noisy image through a non-linear filter to see how the filter affects the output image. First we must obtain an image, add the noise, and run the image through a non-linear filter to remove the noise corruption as completely as possible. We will compare two filters, the mean filter and the median filter, for a few simple cases. The purpose of the filtering operation is assumed to be an effective elimination or attenuation of the noise that is corrupting the desired images. In this report we consider only the two-dimensional case (images), since the effects are most easily visualized with images.
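As a starting point, the noise generation step can be sketched as follows in Python with NumPy (the language, function names and noise parameters are assumptions for illustration; the report does not fix them). The sketch assumes a grayscale image stored as a floating-point array with values in the range 0-255:

```python
import numpy as np

def add_gaussian_noise(image, sigma=20.0, seed=0):
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0, 255)

def add_salt_and_pepper(image, amount=0.05, seed=0):
    """Set a fraction `amount` of the pixels to 0 (pepper) or 255 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape) < amount
    noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return noisy
```

Noisy images produced in this way are then passed through the mean and median filters discussed below.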

Background on non-linear filters: Non-linear filtering was already being considered in the 1950s, and since then the field has seen a rapid increase of interest. Median filters and multistage median filters, which are of particular interest here, were studied rather extensively from the theoretical point of view in the Soviet Union in the early 1970s. These filters were independently reinvented and put into wide practical use about 15 years later by western researchers.

Non-linear FIR filters cannot be expressed as a linear combination of their inputs, but only as some other (non-linear) function of the inputs. A simple example of a useful non-linear filter is a 5th-order median filter, represented by:

y(n) = median{ x(n-2), x(n-1), x(n), x(n+1), x(n+2) }

This type of filter is extremely useful for data with non-Gaussian noise, removing outliers very efficiently. A significant amount of research effort has gone into the development of appropriate filters for various purposes.
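As an illustration, here is a minimal sketch of such a 5th-order median filter in Python with NumPy (the report does not specify an implementation language, so the language and the edge-replication choice are assumptions):

```python
import numpy as np

def median_filter_5(x):
    """5th-order (window length 5) median filter for a 1-D signal.
    Each output sample is the median of the sample itself and its two
    neighbours on either side; the border samples are replicated."""
    x = np.asarray(x, dtype=float)
    padded = np.pad(x, 2, mode="edge")  # replicate edge samples
    windows = np.lib.stride_tricks.sliding_window_view(padded, 5)
    return np.median(windows, axis=1)

# A ramp corrupted by one impulse: the outlier is removed completely.
signal = np.array([1, 2, 3, 100, 5, 6, 7], dtype=float)
print(median_filter_5(signal))  # -> [1. 2. 3. 5. 6. 7. 7.]
```

Note how the median discards the impulse at position 3 entirely, whereas a 5-point mean filter would smear it across the neighbouring samples.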

Statistics has taken a different tack to the problem. Early approaches were similar to moving-average filters, but rather than using a simple moving average, the early work recognised that linear regression could be applied around the point being estimated; in other words, rather than simply averaging the five values around a point, a least-squares linear fit of those points could be used to give a better-looking result. Furthermore, it was realised that:

- not only linear regression but also other shapes, in particular splines, could be fitted; and
- the weights given to the instances used in the regression could be changed.

Filtering and smoothing each have their advantages. Filter design allows the use of domain knowledge to overcome domain-specific problems, while smoothing is flexible enough to be used more independently of the domain.
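To make the local-regression idea concrete, here is a minimal sketch in Python with NumPy (the language, function name and window size are assumptions): each output sample is taken from a least-squares straight line fitted to the samples around it, instead of from a plain average.

```python
import numpy as np

def local_linear_smooth(y, half_window=2):
    """Smooth a 1-D signal by fitting a least-squares straight line to
    the 2*half_window + 1 samples around each point and evaluating the
    fitted line at that point (instead of taking a plain average)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_window)        # window is truncated
        hi = min(n, i + half_window + 1)    # at the signal borders
        t = np.arange(lo, hi)
        slope, intercept = np.polyfit(t, y[lo:hi], deg=1)
        out[i] = slope * i + intercept
    return out
```

Weighting the samples inside the window (for example, giving nearby samples more influence) yields the weighted-regression variant mentioned above.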

The main reason that smoothing is useful is that it allows the subsequent feature-extraction functions to be simpler. Rather than devoting a lot of effort to making the feature-extraction functions themselves robust to noise, the smoothing stage removes much of the noise up front and so simplifies their implementation.

Convolution: Convolution is a simple mathematical operation which is fundamental to many common image-processing operators. Convolution provides a way of multiplying together two arrays of numbers, generally of different sizes but of the same dimensionality, to produce a third array of numbers of the same dimensionality. This can be used in image processing to implement operators whose output pixel values are simple linear combinations of certain input pixel values.

In an image-processing context, one of the input arrays is normally just a gray-level image. The second array is usually much smaller, also two-dimensional, and is known as the kernel. The figure below shows an example image and kernel that we will use to illustrate convolution.

(Figure: the example image and the convolution kernel.)

The convolution is performed by sliding the kernel over the image, generally starting at the top left corner, so as to move the kernel through all the positions where the kernel fits entirely within the boundaries of the image. Each kernel position corresponds to a single output pixel, the value of which is calculated by multiplying together the kernel value and the underlying image pixel value for each of the cells in the kernel, and then adding all these numbers together.

So in our example, the value of the bottom right pixel in the output image is given by the corresponding sum of kernel-weighted pixel values. If the image has M rows and N columns, and the kernel has m rows and n columns, then the output image will have M-m+1 rows and N-n+1 columns.

Mathematically, we can write the convolution as

O(i, j) = Σ_{k=1}^{m} Σ_{l=1}^{n} I(i + k - 1, j + l - 1) · K(k, l)

where i runs from 1 to M-m+1 and j runs from 1 to N-n+1, I is the input image, K is the kernel and O is the output image.

Note that many implementations of convolution produce a larger output image than this, because they relax the constraint that the kernel may only be moved to positions where it fits entirely within the image. Instead, these implementations typically slide the kernel to all positions where just the top left corner of the kernel is within the image, so the kernel overlaps the image on the bottom and right edges. One advantage of this approach is that the output image is the same size as the input image. Unfortunately, in order to calculate the output pixel values for the bottom and right edges, it is necessary to invent input pixel values for the places where the kernel extends off the end of the image. Typically, pixel values of zero are chosen for the regions outside the true image, but this can distort the output image near those edges. Therefore, in general, either the output is smaller than the input (when the kernel is kept entirely inside the image) or border values must be invented, which distorts the output near the edges. In either case the implementation proceeds in the same way: slide the kernel over the image (padding the borders with zeros if a full-size output is required), multiply each kernel coefficient by the pixel value beneath it, and sum the products to obtain the corresponding output pixel.
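The following is a minimal sketch of this "valid" convolution in Python with NumPy (an assumed language choice; as in the formula above, the kernel is not flipped, so strictly speaking this computes a correlation, which is identical for symmetric kernels such as the mean kernel):

```python
import numpy as np

def convolve_valid(image, kernel):
    """Slide the kernel over every position where it fits entirely
    inside the image, multiply the kernel values by the underlying
    pixel values, and sum the products.  For an M x N image and an
    m x n kernel the output is (M - m + 1) x (N - n + 1)."""
    M, N = image.shape
    m, n = kernel.shape
    out = np.zeros((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * kernel)
    return out

# A 3x3 mean-filter kernel: every output pixel is the average of the
# 3x3 neighbourhood of the corresponding input pixel.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(6, 6)).astype(float)
mean_kernel = np.ones((3, 3)) / 9.0
print(convolve_valid(img, mean_kernel))
```

Applying a mean filter is exactly this convolution with a constant kernel whose coefficients sum to one; the median filter, by contrast, cannot be written as a convolution, because the median is not a linear combination of the inputs.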
