
1. Introduction

There are various methods of human verification in use throughout the world, and verification is of great importance to organizations and institutions of many kinds. Today the most important biometric modalities are recognition by DNA, face, fingerprint, signature, speech, and iris.

Among these, one of the most recent, reliable, and technologically advanced methods is iris recognition, which is already practiced by some organizations today and will doubtless see wide use in the future. The iris is a unique structure of colored muscle tissue patterned with finely shaped lines. These lines are the main reason every iris is unique: even the two irises of a single person differ completely from one another, and the irises of identical twins are likewise completely different. Each person's iris is characterized by very fine lines, ridges, and vessels, and the more of this detail that is used, the higher the accuracy of identification via the iris. It has been shown that iris patterns remain essentially unchanged from about the time a child is one year old throughout the rest of life.

Over the past few years there has been considerable interest in the development of neural-network-based pattern recognition systems because of their ability to classify data. The network used in this work is the Learning Vector Quantization (LVQ) network, a competitive network well suited to pattern classification. The iris images in the database are stored as PNG (Portable Network Graphics) files and must first be preprocessed: the boundary of the iris is located and its features are extracted. Edge detection is performed with the Canny method, and for further diversity in the extracted features the DCT is then applied.

2. Feature Extraction

To increase the precision of the iris verification system, we must extract features that capture the essential content of the images for comparison and identification. The extracted features should introduce as little error as possible into the system's output; in the ideal case the output error would be zero. The useful features are obtained by edge detection in a first step, followed by the DCT in a second step.

2.1 Edge Detection

The first step locates the outer boundary of the iris, i.e., the border between the iris and the sclera. This is done by performing edge detection on the grayscale iris image. In this work, the edges of the irises are detected with the Canny method, which finds edges at local maxima of the gradient; the gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes weak edges in the output only if they are connected to strong edges. It is robust to additive noise and able to detect true weak edges.

Although some of the literature considers the detection of ideal step edges, the edges obtained from natural images are usually far from ideal steps. Instead they are typically affected by one or more of the following effects: focal blur caused by a finite depth of field and finite point spread function, penumbral blur caused by shadows cast by light sources of non-zero radius, shading at a smooth object edge, and local specularities or interreflections in the vicinity of object edges.

2.1.1 Canny Method

The Canny edge detection algorithm is known to many as the optimal edge detector. Canny's aim was to improve on the many edge detectors already available when he began his work, and his ideas and methods can be found in his paper "A Computational Approach to Edge Detection." In that paper he formulated a list of criteria that a better edge detector should satisfy. The first and most obvious is a low error rate: edges present in the image should not be missed, and there should be no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the pixels marked by the detector and the actual edge should be minimal. The third criterion is a single response to each edge; it was added because the first two criteria alone are not sufficient to rule out multiple responses to one edge.

The Canny operator works in a multi-stage process. First, the image is smoothed by Gaussian convolution. A simple 2-D first-derivative operator (somewhat like the Roberts cross) is then applied to the smoothed image to highlight regions with large spatial derivatives. Edges give rise to ridges in the gradient-magnitude image. The algorithm then tracks along the tops of these ridges and sets to zero all pixels that are not actually on a ridge top, so as to give a thin line in the output, a process known as non-maximal suppression. The tracking process exhibits hysteresis controlled by two thresholds, T1 and T2, with T1 > T2. Tracking can only begin at a point on a ridge higher than T1, and then continues in both directions from that point until the height of the ridge falls below T2. This hysteresis helps ensure that noisy edges are not broken up into multiple edge fragments.
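The hysteresis stage can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the thresholds and the tiny "gradient image" are invented for the example, and the np.roll shifts wrap around at the borders, which is harmless on this all-zero-bordered array:

```python
import numpy as np

def hysteresis(mag: np.ndarray, t1: float, t2: float) -> np.ndarray:
    """Keep pixels above t2 only if they connect (8-neighborhood) to a pixel above t1."""
    assert t1 > t2
    strong = mag > t1
    weak = mag > t2
    out = strong.copy()
    changed = True
    while changed:
        grown = out.copy()
        # Grow the strong set into its 8-neighborhood by shifting in every direction.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                grown |= np.roll(np.roll(out, dr, axis=0), dc, axis=1)
        grown &= weak              # growth may only pass through weak pixels
        changed = not np.array_equal(grown, out)
        out = grown
    return out

# A ridge that dips below t1 (the 0.4 values) but stays above t2, plus an
# isolated weak pixel (0.5) that is not connected to any strong pixel.
mag = np.zeros((3, 7))
mag[1] = [0.0, 0.9, 0.4, 0.4, 0.9, 0.0, 0.5]
edges = hysteresis(mag, t1=0.8, t2=0.3)
print(edges[1].astype(int))   # the dip survives; the isolated pixel is dropped
```

The dip between the two strong pixels is kept as one continuous edge, which is exactly the fragmentation-avoiding behavior described above.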

2.2 Discrete Cosine Transform

Like any Fourier-related transform, discrete cosine transforms (DCTs) express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions than the DFT or other related transforms.

The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f(x) as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f(x) was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function.

A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, cosine functions turn out to be much more efficient (as explained below, fewer are needed to approximate a typical signal), whereas for differential equations the cosines express a particular choice of boundary conditions.
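The cosine-sum definition can be written down directly. The following naive, unnormalized DCT-II and its inverse (a scaled DCT-III) are a sketch for exposition, not an efficient implementation:

```python
import numpy as np

def dct2(x: np.ndarray) -> np.ndarray:
    """Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi*(n+0.5)*k/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.cos(np.pi * (n + 0.5) * k / N)).sum(axis=1)

def idct2(X: np.ndarray) -> np.ndarray:
    """Inverse via a scaled DCT-III:
    x[n] = (2/N) * (X[0]/2 + sum_{k>=1} X[k] * cos(pi*(n+0.5)*k/N))."""
    N = len(X)
    n = np.arange(N).reshape(-1, 1)
    k = np.arange(N)
    basis = np.cos(np.pi * (n + 0.5) * k / N)
    return (X[0] / 2 + (X[1:] * basis[:, 1:]).sum(axis=1)) * 2 / N

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(idct2(dct2(x)), x)   # round trip recovers the signal
```

Both functions cost O(N^2); practical codecs use fast O(N log N) algorithms built on the FFT.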

In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
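This equivalence can be checked numerically: appending the reversed signal produces the even-symmetric extension of length 2N, and a half-sample phase shift of its DFT recovers the DCT-II. The sketch below is illustrative, not the paper's code:

```python
import numpy as np

def dct2_naive(x: np.ndarray) -> np.ndarray:
    """Direct unnormalized DCT-II from the cosine sum."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.cos(np.pi * (n + 0.5) * k / N)).sum(axis=1)

def dct2_via_fft(x: np.ndarray) -> np.ndarray:
    """DCT-II from a length-2N DFT of the even-symmetric extension [x, reversed(x)]."""
    N = len(x)
    y = np.concatenate([x, x[::-1]])               # even extension, length 2N
    Y = np.fft.fft(y)[:N]
    phase = np.exp(-1j * np.pi * np.arange(N) / (2 * N))
    return 0.5 * (phase * Y).real                  # undo the half-sample shift

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
assert np.allclose(dct2_naive(x), dct2_via_fft(x))
```

The half-sample phase factor is exactly the shift mentioned above that distinguishes the DCT-II among the eight variants.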

The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT"; its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy data compression, because it has a strong "energy compaction" property: most of the signal information tends to be concentrated in a few low-frequency components of the DCT.
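Energy compaction is easy to demonstrate. For a smooth test signal (an arbitrary example chosen here, not data from the paper), nearly all of the energy of an orthonormal DCT-II lands in the first few coefficients:

```python
import numpy as np

# A smooth, slowly varying signal, loosely resembling a typical image row.
N = 64
t = np.arange(N)
x = np.sin(2 * np.pi * t / N) + 0.5 * t / N

# Orthonormal DCT-II matrix, so that energy is preserved (Parseval).
n = np.arange(N)
k = n.reshape(-1, 1)
C = np.cos(np.pi * (n + 0.5) * k / N) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2.0)     # the k = 0 row gets an extra 1/sqrt(2)
X = C @ x

energy = X**2
low = energy[:8].sum() / energy.sum()
print(f"energy in the first 8 of {N} coefficients: {low:.4f}")
```

Discarding the remaining 56 coefficients would therefore change the signal very little, which is the basis of DCT compression schemes such as JPEG.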

3. Neural Network

In this work a single neural network structure is used: the Learning Vector Quantization network. A brief overview of this network is given below.

3.1 Learning Vector Quantization

Learning Vector Quantization (LVQ) is a supervised version of vector quantization, similar to Self-Organizing Maps (SOM), based on the work of Linde et al., Gray, and Kohonen. It can be applied to pattern recognition, multi-class classification, and data compression tasks, e.g., speech recognition, image processing, or customer classification. As a supervised method, LVQ uses a known target output classification for each input pattern.

LVQ algorithms do not approximate density functions of class samples, as Vector Quantization or Probabilistic Neural Networks do, but directly define class boundaries based on prototypes, a nearest-neighbor rule, and a winner-takes-all paradigm. The main idea is to cover the input space of samples with 'codebook vectors' (CVs), each representing a region labeled with a class. A CV can be seen as a prototype of a class member, located in the center of a class or decision region in the input space. A class can be represented by an arbitrary number of CVs, but each CV represents one class only.

In terms of neural networks, an LVQ is a feedforward net with one hidden layer of neurons, fully connected to the input layer. A CV can be seen as a hidden neuron ('Kohonen neuron') or, equivalently, as the vector of weights between all input neurons and that Kohonen neuron.

Learning means modifying the weights according to adaptation rules and, therefore, changing the position of a CV in the input space. Since class boundaries are built piecewise-linearly as segments of the mid-planes between CVs of neighboring classes, the class boundaries are adjusted during the learning process. The tessellation induced by the set of CVs is optimal if all data within one cell indeed belong to the same class. Classification after learning is based on a presented sample's vicinity to the CVs: the classifier assigns each sample the class label of the cell it falls into, i.e., the label of that cell's CV.
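The basic LVQ1 adaptation rule, moving the winning CV toward a correctly classified sample and away from a misclassified one, can be illustrated on toy data. The data set, the one-CV-per-class setup, and the learning-rate schedule below are choices made for this example, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class 0 clusters near (0, 0), class 1 near (3, 3).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# One codebook vector (CV) per class, initialized on a sample of that class.
cvs = np.array([X[0], X[50]])
labels = np.array([0, 1])

def lvq1_step(x, t, cvs, labels, lr):
    """LVQ1: move the winning CV toward the sample if its class matches, else away."""
    w = np.argmin(((cvs - x) ** 2).sum(axis=1))   # winner-takes-all
    sign = 1.0 if labels[w] == t else -1.0
    cvs[w] += sign * lr * (x - cvs[w])

for epoch in range(20):
    lr = 0.1 * (1 - epoch / 20)                   # decaying learning rate
    for i in rng.permutation(len(X)):
        lvq1_step(X[i], y[i], cvs, labels, lr)

# Classify by nearest CV: the boundary is the mid-plane between the prototypes.
pred = labels[np.argmin(((X[:, None] - cvs[None]) ** 2).sum(-1), axis=1)]
print((pred == y).mean())   # training accuracy
```

After training, each CV sits near the center of its cluster, and the induced nearest-neighbor tessellation separates the two classes, matching the description above.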