# I. INTRODUCTION

Artificial Neural Networks (ANNs) are non-linear mapping structures inspired by the function of the human brain. They are powerful tools for modelling, especially when the underlying data relationship is unknown. ANNs can identify and learn correlated patterns between input data sets and the corresponding target values. Because they imitate the learning process of the brain, they can handle non-linear and complex problems even when the data are imprecise and noisy. An ANN is a computational structure inspired by the processes observed in natural networks of biological neurons in the brain. It consists of simple computational units, called neurons, which are highly interconnected. ANNs have attracted much attention largely because of their wide range of applicability and the ease with which they treat complicated problems.

ANNs are parallel computational models comprising densely interconnected adaptive processing units [1]. These networks are fine-grained parallel implementations of nonlinear static or dynamic systems. A very important feature of these networks is their adaptive nature, where "learning by example" replaces "programming" in solving problems. This feature makes such computational models very appealing in application domains where one has little or incomplete understanding of the problem to be solved but where training data are readily available. ANNs are now increasingly used for classification and prediction, where regression models and other related statistical techniques have traditionally been employed.

# II. RADIAL BASIS FUNCTION NETWORKS

A radial basis function (RBF) neural network consists of three layers: an input layer, a hidden layer and an output layer. It has a feed-forward structure with a single hidden layer of J locally tuned units that are fully connected to an output layer of L linear units. All hidden units simultaneously receive the n-dimensional real-valued input vector X. The hidden-unit outputs are not calculated using the weighted-sum/sigmoid activation mechanism; rather, each hidden-unit output is determined by the closeness of the input X to an n-dimensional parameter vector Cj (the centre) associated with the jth hidden unit [13]. In the present work a Gaussian basis function is used for the hidden units, so the response characteristic (activation function) of the jth hidden unit (j = 1, 2, ..., J) is taken as

$$
\phi_j(X) = \exp\!\left(-\frac{\lVert X - C_j \rVert^{2}}{2\sigma_j^{2}}\right),
$$

where the parameter σj is the width of the receptive field of unit j in the input space. This implies that φj(X) has an appreciable value only when the distance ||X − Cj|| is smaller than the width σj. RBF networks are best suited for approximating continuous or piecewise continuous real-valued mappings where the input dimension n is sufficiently small; such approximation problems include classification problems as a special case. RBF networks have been successfully applied to a large variety of applications, including interpolation, chaotic time-series modelling, system identification, control engineering, electronic device parameter modelling, channel equalization, speech recognition, image restoration, shape-from-shading, 3-D object modelling, motion estimation and moving object segmentation [7].

# III. TRAINING OF RBF NEURAL NETWORKS

By means of training, the neural network models the underlying function of a certain mapping. In order to model such a mapping we have to find the network weights and topology. There are two categories of training algorithms: supervised and unsupervised. In supervised learning, the model describes the effect one set of observations, called inputs, has on another set of observations, called outputs; for the RBF network described above this can be written out explicitly, as shown below.
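As an illustration (the output-weight symbols $w_{kj}$ and the training-set notation $\{(X_p, d_p)\}$ are introduced here for convenience and are not defined in the paper), the mapping realised by the RBF network and the quantity that supervised training drives down can be written as

$$
y_k(X) = \sum_{j=1}^{J} w_{kj}\,\phi_j(X), \qquad k = 1,\ldots,L,
\qquad\qquad
E = \sum_{p=1}^{P}\sum_{k=1}^{L}\bigl(d_{pk} - y_k(X_p)\bigr)^{2},
$$

where $\phi_j$ is the Gaussian activation defined above and $d_{pk}$ is the desired output of unit $k$ for training pattern $X_p$. Training adjusts the weights $w_{kj}$ (and, in some variants, the centres $C_j$ and widths $\sigma_j$) so as to minimise $E$.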
In other words, the inputs are assumed to be at the beginning and the outputs at the end of the causal chain, and the model may include mediating variables between them. In unsupervised learning, all the observations are assumed to be caused by latent variables; that is, the observations are assumed to lie at the end of the causal chain. In practice, models for supervised learning often leave the probability distribution of the inputs undefined [1]. RBF networks are used mainly in supervised applications, where we are provided with a set of data samples, called the training set, for which the corresponding network outputs are known. An RBF network is trained by

i. deciding how many hidden units there should be,
ii. deciding on their centres and the sharpnesses (standard deviations) of their Gaussians, and
iii. training the output layer.

In the training phase, a set of training instances is given. Each training instance is typically described by a feature vector and is associated with the desired outcome, which is itself represented by a feature vector called the output vector. Starting from some random weight setting, the network adapts itself by changing the weights according to the learning algorithm. When the training phase is complete, the weights are fixed. The network then propagates information from the input layer towards the output layer, and when propagation stops the output units carry the result of the inference.

We can understand how the network behaves by following an input vector p through the network to the output a. When an input vector is presented to such a network, each neuron in the radial basis layer outputs a value according to how close the input vector is to that neuron's weight vector. Radial basis neurons whose weight vectors are quite different from the input vector p have outputs near zero, and these small outputs have only a negligible effect on the linear output neurons. In contrast, a radial basis neuron with a weight vector close to the input vector p produces a value near 1. If a neuron has an output of 1, its output weights pass their values to the linear neurons in the second layer. In fact, if only one radial basis neuron had an output of 1 and all the others had outputs of 0 (or very close to 0), the output of the linear layer would simply be the active neuron's output weights. This would, however, be an extreme case; typically several neurons are firing to varying degrees. If the output does not match the target, the weights are adjusted according to the training algorithm and updated until the desired value matches the target value.

![Fig. 1: Schematic representation of a neural network](image-2.png)

![Fig. 2: Feed-forward neural network](image-3.png)

![Fig. 3: Block diagram](image-4.png)

The learning law describes the weight vector of the ith processing unit at time instant (t + 1) in terms of the weight vector at time instant t:

$$
w_i(t+1) = w_i(t) + \Delta w_i(t),
$$

where Δwi(t) is the change in the weight vector. The network adapts by changing each weight by an amount proportional to the difference between the desired output and the actual output:

$$
\Delta W_i = \eta\,(D - Y)\,X_i .
$$
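The paper does not reproduce its VHDL source, but the weight update above maps naturally onto a small synchronous datapath. The sketch below is an illustration only, not the authors' code: it assumes signed fixed-point operands, reads the "weight vector = 8" setting quoted later in the Implementation section as an 8-bit word width, and approximates the learning rate η = 0.8 by the shift-friendly constant 13/16.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch of one weight-update step w_i(t+1) = w_i(t) + eta*(D - Y)*x_i.
-- Number-format details (binary-point position, saturation) are omitted.
entity weight_update is
  generic ( W : integer := 8 );            -- word width of weights and inputs (assumed 8 bits)
  port (
    clk : in  std_logic;
    en  : in  std_logic;                   -- apply one update step when high
    x_i : in  signed(W-1 downto 0);        -- i-th input component
    d   : in  signed(W-1 downto 0);        -- desired output D
    y   : in  signed(W-1 downto 0);        -- actual output Y
    w_o : out signed(W-1 downto 0)         -- current weight w_i(t)
  );
end entity weight_update;

architecture rtl of weight_update is
  signal w_reg : signed(W-1 downto 0) := (others => '0');
begin
  process (clk)
    variable err   : signed(W downto 0);
    variable prod  : signed(2*W+1 downto 0);
    variable delta : signed(W-1 downto 0);
  begin
    if rising_edge(clk) then
      if en = '1' then
        err   := resize(d, W+1) - resize(y, W+1);        -- E = D - Y
        prod  := err * resize(x_i, W+1);                 -- E * x_i
        -- scale by eta: 13/16 = 0.8125, close to the eta = 0.8 used in the implementation
        delta := resize(shift_right(prod * 13, 4), W);
        w_reg <= w_reg + delta;                          -- w_i(t+1) = w_i(t) + delta
      end if;
    end if;
  end process;
  w_o <= w_reg;
end architecture rtl;
```

In a full design, one such update would be applied to each weight in turn by the training controller, repeating until the output Y matches the target D as described above.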
Here E = D − Y. The perceptron learning rule can therefore be written more succinctly in terms of the error E and the change to be made to the weight vector Wi:

CASE 1: If E = 0, then make no change, ΔWi = 0.
CASE 2: If E = +1, then wi(t + 1) = wi(t) + ηXi.
CASE 3: If E = −1, then wi(t + 1) = wi(t) − ηXi.

Here η is the learning rate, D is the desired output, Y is the actual output, and Xi is the ith input. The weights in an ANN, like the coefficients in a regression model, are adjusted to solve the problem presented to the network; learning, or training, is the term used to describe the process of finding the values of these weights. The two types of learning with ANNs are supervised and unsupervised learning. An important issue in supervised learning is error convergence, that is, the minimisation of the error between the desired and the computed unit values; the aim is to determine a set of weights that minimises this error.

# IV. IMPLEMENTATION

The implementation uses the parameter η = 0.8 and a weight vector of 8.

![Fig. 5: Radial basis function](image-5.png)

# V. SIMULATION RESULTS

The proposed design was coded in VHDL. It was functionally verified by writing a test bench and simulating it with the ISE simulator, and it was synthesised for a Spartan-3A device using Xilinx ISE 9.2i.

![Fig. 6: Simulation results](image-6.png)

# REFERENCES

1. Adrian G. Bors, "Introduction of the Radial Basis Function (RBF) Networks", Department of Computer Science, University of York, York YO10 5DD, UK.
2. D. S. Broomhead, R. Jones, J. G. McWhirter and T. J. Shepherd, "Systolic array for nonlinear multidimensional interpolation using radial basis functions", Electronics Letters, vol. 26, no. 1, 1990.
3. A. G. Bors and I. Pitas, "Median radial basis functions neural network", 1996.
4. M. Casdagli, "Nonlinear prediction of chaotic time series", Physica D, vol. 35, 1989.
5. S. Chen, C. F. N. Cowan and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks", IEEE Trans. on Neural Networks, vol. 2, 1991.
6. Douglas L. Perry, "VHDL Programming by Example", McGraw-Hill, 2006.
7. F. Belloir, A. Fache and A. Billat, "A New Construction Algorithm of Efficient Radial Basis Function Neural Net Classifier and its Application to Codes Identification", 1998.
8. B. Igelnik and Y.-H. Pao, "Stochastic choice of radial basis functions in adaptive function approximation and the functional-link net", IEEE Trans. on Neural Networks, vol. 6, no. 6, 1995.
9. N. B. Karayiannis, "Reformulated radial basis neural networks trained by gradient descent", IEEE Trans. on Neural Networks, vol. 10, no. 3, 1999.
10. T. Kohonen, "Self-Organization and Associative Memory", Springer-Verlag, Berlin, 1989.
11. Karen Parnell and Nick Mehta, "Programmable Logic Design Quick Start Handbook", Xilinx, 2002.
12. M. T. Musavi, W. Ahmed, K. H. Chan, K. B. Faris and D. M. Hummels, "On the training of radial basis function classifiers", Neural Networks, vol. 5, 1992.
13. P. Venkatesan and S. Anitha, "Application of a radial basis function neural network for diagnosis of diabetes mellitus", Tuberculosis Research Centre, ICMR, Chennai 600 31.
14. Satish Kumar, "Neural Networks: A Classroom Approach", TMH.
15. Yuehui Chen, Lizhi Peng and Ajith Abraham, "Hierarchical Radial Basis Function Neural Networks for Classification Problems", International Journal of Neural Systems, vol. 14, no. 2, 2004.