# I. INTRODUCTION

The compression of images is very useful in many important areas such as data storage, communication and computation, and it is a natural application for neural networks, which are well-developed tools in soft computing. Noise suppression, transform extraction, parallelism and optimized approximation are some of the main properties that make artificial neural networks useful for image compression. Neural-network-based image compression has been implemented with the Multi-Layer Perceptron (MLP) [2][3][4][5][6][7][8][9][10][11][12][13], Learning Vector Quantization (LVQ) [14] and the Self-Organizing Map (SOM) [15,16]. Among these methods, the back-propagation neural network (BPNN) is the standard way to train an MLP.

The back-propagation algorithm was applied to image compression with artificial neural networks (ANNs) in [3], where a three-layer BPNN was used. The image to be compressed is divided into blocks that are fed to the input neurons, the compressed representation is taken from the outputs of the hidden layer, and the de-compressed image is reconstructed at the output layer. This scheme was implemented on the NCUBE parallel computer, and the simulations produced rather poor image quality at a 4:1 compression ratio [3]. Since the results produced by a single, simple BPNN are poor, researchers have tried to improve the performance of neural-network-based compression. One approach is to compress/decompress (CODEC) different image blocks with different networks according to the complexity of the blocks, which gives good compression results. In such adaptive schemes, the image blocks are clustered into a few basic classes based on a complexity measure called activity, and four BPNNs with different compression rates, one per class, are used; this yields a clear improvement over the basic BPNN. An adaptive approach that combines the complexity measure with block orientation and uses six BPNNs has given better visual quality [11]. BPNNs have also been used to compress image blocks after the mean value of each block is subtracted from its pixels; a Best-SNR scheme then selects, for every block, the network that gives the best SNR. Overlapping of adjacent image blocks is used to reduce the chess-board effect in the de-compressed image. In terms of PSNR, the Best-SNR method produces reconstructed images whose visual quality is comparable to standard JPEG coding.

This paper is organized as follows. Section II discusses multi-layer neural networks for image compression. Section III describes the Modified Levenberg-Marquardt method used in this paper. Section IV presents and discusses the experimental results, and Section V concludes the paper with a summary.

# II. IMAGE COMPRESSION WITH MULTI-LAYER NEURAL NETWORKS

Image compression can be performed with a multi-layer neural network trained by the back-propagation algorithm. The basic structure, shown in Fig. 1, is a network with three layers: input, hidden and output. The input and output layers have the same number of neurons, N, and are fully connected to the hidden layer; the compression is achieved by the values of the neurons at the hidden layer, which is narrower than the input and output layers.

![Fig. 1 - Basic image compression structure using neural network](image-2.png)
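As a rough illustration of this structure, the sketch below assumes a 64-16-64 fully connected network (an 8×8 block compressed to 16 hidden values) with sigmoid activations and randomly initialized weights; these choices, and the names `compress` and `decompress`, are illustrative only and not taken from the paper. In practice the weights V and W are learned with back-propagation as described next.

```python
import numpy as np

# Illustrative sketch only: a 64-16-64 network (8x8 block -> 16 hidden values).
# V and W are randomly initialized here; in practice they are learned with
# the back-propagation rule described in the training phase below.
N, K = 64, 16                            # input/output and hidden layer sizes
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(N, K))   # compressor weights  V_ij
W = rng.normal(scale=0.1, size=(K, N))   # de-compressor weights W_ji

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compress(block):
    """Flatten an 8x8 block (gray levels normalized to [0, 1]) and
    map it to the K hidden-layer values."""
    x = block.reshape(-1)                # one neuron per pixel
    return sigmoid(x @ V)                # hidden-layer outputs

def decompress(h):
    """Map the K hidden values back to an approximate 8x8 block."""
    x_hat = sigmoid(h @ W)
    return x_hat.reshape(8, 8)

block = rng.random((8, 8))               # a dummy normalized image block
h = compress(block)                      # compressed representation (16 values)
reconstruction = decompress(h)           # reconstructed 8x8 block
```

With 16 hidden values standing in for 64 pixels, this corresponds to the 4:1 compression ratio discussed in the application phase below.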
In this compression method, the input image is divided into blocks of, for example, 8×8, 4×4 or 16×16 pixels; the block size determines the number of neurons in the input/output layers. Each block is converted into a column vector and fed to the input layer of the network, one neuron per pixel. With this basic MLP neural network, compression is conducted in training and application phases as follows.

# 1) Training

For image compression, sample images are used to train the network with the back-propagation learning rule so that the output layer reproduces the input pattern after passing it through the narrow hidden-layer channel. The training blocks are converted into vectors whose gray levels are normalized. Compression and de-compression are then described by the following equations:

$$h_j = f\left(\sum_{i=1}^{N} V_{ij}\, x_i\right), \qquad j = 1, \ldots, K \qquad (1)$$

$$\hat{x}_i = g\left(\sum_{j=1}^{K} W_{ji}\, h_j\right), \qquad i = 1, \ldots, N \qquad (2)$$

Here $x_i$ are the pixels of the input block, $h_j$ are the outputs of the hidden layer, $\hat{x}_i$ are the reconstructed pixels, f and g are the activation functions, which can be linear or nonlinear, and $V_{ij}$ and $W_{ji}$ represent the weights of the compressor and de-compressor, respectively. For a linear neural network, the extracted N×K transform matrix of the compressor and the K×N transform matrix of the de-compressor correspond to the PCA transform, which minimizes the mean square error between the original and reconstructed images; the new space is decorrelated, which leads to better compression. By using linear or nonlinear activation functions, this network therefore realizes a data-dependent transform corresponding to linear or nonlinear PCA, respectively.

The training of the neural network structure in Fig. 1 is iterative and stops when the weights converge to their final values. In real applications, training is stopped when the error of Eq. (3) reaches a threshold or when a maximum number of iterations limits the iterative process:

$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{i=1}^{N} \left( x_i^{(p)} - \hat{x}_i^{(p)} \right)^2 \qquad (3)$$

where P is the number of training blocks.

# 2) Application

When the training process is completed and the coupling weights are fixed, the test image is fed into the network and the compressed image is obtained at the outputs of the hidden layer. These outputs must be coded with a proper number of bits. If the same number of bits is used to represent each input and each hidden neuron, the Compression Ratio (CR) is simply the ratio of the number of input neurons to hidden neurons. For example, to compress an 8×8 image block, 64 input and output neurons are required; if the number of hidden neurons is 16 (i.e. a block image of size 4×4), the compression ratio is 64:16 = 4:1. But for the same network, if the 8-bit input pixels are coded with 32-bit floating point values in the compressed image, the compression ratio becomes 1:1, which indicates that no compression has occurred. In general, the compression ratio of the basic network illustrated in Fig. 1, for an image with n blocks, is computed as

$$CR = \frac{n \cdot N \cdot b_x}{n \cdot K \cdot b_h} \qquad (4)$$

where $b_x$ and $b_h$ are the numbers of bits used to code each input pixel and each hidden-layer output, respectively.

# III. THE MODIFIED LEVENBERG-MARQUARDT METHOD

When training with the Levenberg-Marquardt (LM) method, the increment of weights Δw is obtained as

$$\Delta w = \left[ J^{T} J + \mu I \right]^{-1} J^{T} e \qquad (5)$$

where J is the Jacobian matrix, w = [w1, w2, …, wN] consists of all weights of the network, e is the error vector comprising the errors for all the training examples, and μ is the learning parameter, which is updated depending on the outcome of each iteration. In particular, μ is multiplied by a decay rate β (0 < β < 1) whenever the sum of squared errors F(w) decreases, and divided by β otherwise. Training with the LM method proceeds as follows:

1. Present all inputs to the network and compute the corresponding outputs.
2. Compute the sum of squared errors F(w) over all inputs.
3. Solve Eq. (5) to obtain the increment of weights Δw.
4. Recompute the sum of squared errors F(w) using w + Δw as the trial w, and judge: IF the trial F(w) < F(w) in step 2 THEN w = w + Δw, μ = μ · β (β = 0.1) and go back to step 2; ELSE increase μ (μ = μ / β) and go back to step 3.

Since $J^{T}J$ is positive semi-definite, its eigenvalues $\lambda_i$ are non-negative; adding μI makes the eigenvalues $\lambda_i + \mu > 0$ for all i, therefore the matrix $[J^{T}J + \mu I]$ is invertible, which leads to the Levenberg-Marquardt algorithm.

# 1) Modification of the LM Method

The learning parameter μ determines the size of the steps with which the actual output moves toward the desired output. In the standard LM method, μ is a constant. This paper modifies the LM method by choosing μ as

$$\mu = 0.01\, e^{T} e$$

where e is a k×1 error vector, so $e^{T}e$ is a 1×1 scalar and $[J^{T}J + \mu I]$ remains invertible. When the actual output approaches the desired output, the error becomes small, μ becomes small as well, and the actual output approaches the desired output with soft (small) steps. Therefore the error oscillation is reduced.
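To make the modified update rule concrete, here is a minimal sketch of one weight increment using Eq. (5) with μ = 0.01 eᵀe. The names `error_fn` and `jacobian_fn` are hypothetical placeholders, the Jacobian computation for a real network is omitted, and the acceptance test on F(w) from the steps above is left out; sign conventions for e and J vary between texts, so the convention assumed here is stated in the comments.

```python
import numpy as np

def modified_lm_step(w, error_fn, jacobian_fn):
    """One weight increment with the modified LM rule (illustrative sketch).

    w           : current weight vector, shape (n,)
    error_fn    : returns e(w) = desired output - actual output, length k
    jacobian_fn : returns the k x n Jacobian of the actual outputs w.r.t. w
                  (with this convention, Eq. (5) needs no extra minus sign)
    """
    e = np.asarray(error_fn(w), dtype=float).ravel()   # errors over all examples
    J = np.asarray(jacobian_fn(w), dtype=float)
    mu = 0.01 * float(e @ e)                           # modified rule: mu = 0.01 e^T e
    n = w.shape[0]
    # Eq. (5): delta_w = [J^T J + mu I]^(-1) J^T e
    delta_w = np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
    return w + delta_w

# Toy usage: one step on the linear model y(w) = A w with desired output b.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
w0 = np.zeros(2)
w1 = modified_lm_step(w0,
                      error_fn=lambda w: b - A @ w,   # desired - actual
                      jacobian_fn=lambda w: A)        # d(actual)/dw
```

Because μ shrinks together with the error, the updates turn into softer steps as the actual output approaches the desired output, which is the behaviour the modification aims for.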
# IV. RESULTS AND DISCUSSION

The tables below compare the Levenberg-Marquardt (LM) method and the proposed modified LM method in terms of PSNR, MSE and training time (in seconds). The first table gives the results for the BIRD image of size 256×256, and the second table gives the results for the test images.

| METHOD | PSNR | MSE | TIME (seconds) |
|---|---|---|---|
| LEVENBERG-MARQUARDT | 21.4117 | 152.5243 | 765.969 |
| MODIFIED LEVENBERG-MARQUARDT | 22.8879 | 108.57 | 550.265 |

| IMAGE | PSNR (LM) | MSE (LM) | TIME (s, LM) | PSNR (Modified LM) | MSE (Modified LM) | TIME (s, Modified LM) |
|---|---|---|---|---|---|---|
| LENA | 21.7006 | 161.4895 | 3303.3698 | 22.3675 | 148.7677 | 2169.579 |
| PEPPER | 15.0934 | 188.6425 | 3411.375 | 15.3527 | 172.4312 | 2151.516 |
| BABOON | 13.9517 | 195.6905 | 3614.437 | 16.4312 | 123.0598 | 2376.734 |
| CROWD | 14.3570 | 301.0073 | 4065.172 | 15.7204 | 322.2830 | 2208.141 |
| BIRD | 25.8375 | 54.5497 | 3112.781 | 26.0312 | 55.6387 | 2056.3698 |

# V. CONCLUSION

A picture can say more than a thousand words. However, storing an image can cost more than a million words. This is not always a problem, because computers are now capable of handling large amounts of data. However, it is often desirable to use limited resources more efficiently. For instance, digital cameras often have a totally unsatisfactory amount of memory, and the internet can be very slow. In these cases, the importance of image compression is greatly felt. The rapid increase in the range and use of electronic imaging justifies attention to the systematic design of image compression systems and to providing the image quality needed in different applications. Many techniques exist for image compression, and the literature shows that image compression using neural networks is efficient. In this work, the use of multi-layer neural networks trained with a modified Levenberg-Marquardt method for image compression has been studied. The results show that the proposed modification can be accepted to obtain better reconstructed image quality; comparing the results with the basic BPNN algorithm shows better performance for the proposed method, both in PSNR measure and in visual quality.
# REFERENCES

* R. C. Gonzales and R. E. Woods, "Digital Image Processing", Prentice-Hall, Second Edition, 2002.
* H. Veisi and M. Jamzad, "Image Compression Using Neural Networks", Image Processing and Machine Vision Conference (MVIP), Tehran, Iran, 2005.
* N. Sonehara, M. Kawato, S. Miyake and K. Nakane, "Image compression using a neural network model", International Joint Conference on Neural Networks, Washington DC, 1989.
* L. Marsi, "Artificial neural network for image compression", Electronics Letters, vol. 26, 1990.
* S. Marsi, G. Ramponi and G. L. Sicuranza, "Improved neural structure for image compression", Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Toronto, 1991.
* S. Carrato and G. Ramponi, "Improved structures based on neural networks for image compression", IEEE Workshop on Neural Networks for Signal Processing, September 1991.
* D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing, vol. 1, MIT Press, Cambridge, MA.
* D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning representations by back-propagating errors", Nature, vol. 323, 1986.
* P. J. Werbos, "Back-propagation: Past and future", Proceedings of the International Conference on Neural Networks, San Diego, CA, vol. 1, 1988.
* M. T. Hagan and M. B. Menhaj, "Training feedforward networks with the Marquardt algorithm", IEEE Transactions on Neural Networks, vol. 5, no. 6, 1994.
* M. G. Bello, "Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks", IEEE Transactions on Neural Networks, vol. 3, 1992.