# Introduction

Image compression is an important issue in digital image processing and finds extensive applications in many fields. It is a basic operation performed frequently by any digital photography device when capturing an image. For longer use of a portable photography device, the device should consume less power so that battery life is extended. Conventional techniques of image compression using the DCT have already been reported, and sufficient literature is available on them. JPEG is a lossy compression scheme which employs the DCT as a tool and is used mainly in digital cameras for compression of images. In the recent past the demand for low-power image compression has been growing, and as a result various researchers are actively engaged in evolving efficient methods of image compression using the latest digital techniques, aiming at better quality of image reproduction with low power consumption. Keeping these objectives in mind, the research work in the present paper has been undertaken; in the sequel, the following problems are investigated.

Image processing is a very significant necessity in medical applications. Images keep the records of different tests conducted on the body of the patient, and medical records of patients are commonly stored in the form of images. The storage time should be minimal, and the accessing time should be minimal as well. During image transmission and reception, the storage space and storage time are desired to be minimal, but this must be achieved while keeping the information quality of the data high. To reduce the storage time, the data needs to be compressed. Over time, many different compression methods, algorithms, and file formats have been developed. In still-image compression there are many different approaches, each producing many compression methods; however, every technique proves useful only in a limited usage area. Image compression methods are, of course, also much desired or even necessary in medicine.

Data and information are two different things: information is the content, while data is the representation of that information. Compression of the data should not affect its information content; reducing the accessing time and storage time by means of data compression should not cause loss of the information content.

Compression is generally divided into compression and decompression. Compression is the technique of compacting the data to reduce the storage space and time; decompression, on the other hand, is the reconstruction of the original image from the compressed image. Two types of compression can be distinguished: lossless and lossy. In lossless compression methods, the data set reconstructed during decompression is identical to the original data set. In lossy methods, the compression is irreversible: the reconstructed data set is only an approximation of the original image. At the cost of lower conformity between the reconstructed and original data, better compression ratios are obtained; often the loss of information caused by compression and decompression is invisible to an observer, although image analysis, e.g. after noise elimination, may reveal that the compression actually was not lossless. There are many ways to measure the effectiveness of compression. The factor most often used for this purpose is the compression ratio (CR), which expresses the ability of the compression method to reduce the amount of disk space needed to store the data. Compression of digital and analog images is broadly of the following types.
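To make the CR concrete, here is a minimal Python sketch (zlib's lossless DEFLATE coder standing in for any lossless method, and a synthetic gradient image standing in for real data): it compresses the raw pixels, verifies that decompression is bit-exact, and reports the ratio of original to compressed size.

```python
import zlib
import numpy as np

# Toy 8-bit "image": a horizontal gradient, which lossless coders handle well.
image = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
raw = image.tobytes()

compressed = zlib.compress(raw, 9)   # lossless dictionary + entropy coding
restored = zlib.decompress(compressed)

assert restored == raw               # lossless: reconstruction is bit-exact
cr = len(raw) / len(compressed)      # compression ratio: original / compressed size
print(f"{len(raw)} B -> {len(compressed)} B, CR = {cr:.1f}:1")
```

A lossy coder would score a much higher CR on the same input, but the assertion would fail: the reconstruction would only approximate the original.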
The types are: (1) lossless compression, (2) lossy compression, and (3) fractal compression.

# a) Lossless compression

A lossless compression method comprises two phases: modeling and coding; creating a method boils down to specifying how those two phases are realized. The modeling phase builds a model for the data to be encoded which best describes the information contained in that data. The coding phase is based on a statistical analysis and strives for the shortest binary code for the sequence of symbols obtained from the modeling phase. Three groups are distinguished among lossless compression methods:

* entropy coding,
* dictionary-based methods,
* prediction methods.

Entropy coding includes Shannon-Fano coding, Huffman coding, Golomb coding, unary coding, truncated binary coding, and Elias coding. The dictionary-based methods include Lempel-Ziv-Welch (LZW) coding, LZ77 and LZ78, and the Lempel-Ziv-Oberhumer algorithm. The prediction methods include the JPEG-LS and lossless JPEG 2000 algorithms.

# b) Lossy compression

Lossy compression methods reduce the information in the image to be encoded down to some level that is acceptable for a particular application field. In lossy compression algorithms, two obligatory phases can be distinguished: quantization and lossless compression. This means that quantization is the key issue for lossy methods. Before the quantization, one more phase may be present, decomposition, which is optional but very frequently used because it allows one to create more effective quantization algorithms. The goal of the decomposition is to build a representation of the original data that enables more effective quantization and encoding phases.

# c) Fractal Compression

Fractal compression is another type of lossy compression. Thus compression may be lossy or lossless. The principles of image compression algorithms are (i) reducing the redundancy in the image data and/or (ii) producing a reconstructed image from the original image with the introduction of error that is insignificant for the intended application. The aim is to obtain an acceptable representation of the digital image while preserving the essential information contained in that particular data set. First, the original digital image is usually transformed into another domain, where it is highly decorrelated by some transform; this decorrelation concentrates the important image information into a more compact form. The compressor then removes the redundancy in the transformed image and stores it in a compressed file or data stream. In the second stage, the quantization block reduces the accuracy of the transformed output in accordance with some pre-established fidelity criterion; this stage also reduces the psycho-visual redundancy of the input image. Quantization is an irreversible operation and thus must be omitted when error-free, lossless compression is required. In the final stage of the data compression model, the symbol coder creates a fixed- or variable-length code to represent the quantizer output and maps the output in accordance with that code. Generally a variable-length code is used to represent the mapped and quantized data set: it assigns the shortest code words to the most frequently occurring output values and thus reduces coding redundancy. This operation is in fact reversible. Decompression reverses the compression process to produce the recovered image, as shown in the figure.
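Why quantization is the single irreversible step can be seen in a few lines. The following minimal Python sketch (the coefficient values and step size are arbitrary illustrations) applies uniform scalar quantization and shows that "inverse" quantization only approximates the input:

```python
import numpy as np

coeffs = np.array([12.7, -3.2, 0.4, 8.9, -0.1])  # e.g. transform coefficients
q = 2.0                                          # quantization step size

indices = np.round(coeffs / q)   # quantization: a many-to-one map, information is lost
dequantized = indices * q        # dequantization can only land on multiples of q

print(dequantized)                           # [12. -4.  0.  8. -0.]
print(np.abs(coeffs - dequantized).max())    # 0.9: the error cannot be undone
```

Every value inside the same quantization bin maps to the same index, which is exactly why this step must be skipped when lossless behavior is required.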
The recovered image may have lost some information due to compression and may show error or distortion compared to the original image, but the compressed image should still be reconstructable back to its original form. In this paper we propose two algorithms for the image reconstruction problem: (1) the Repetitive Loss-Thresholding Algorithm (RLTA) and (2) the Modified RLTA (MRLTA).

# II. Literature Survey

In 1994, S. Martucci, in the paper "Symmetric convolution and the discrete sine and cosine transforms," addressed the problem of reducing the amount of data required to represent a digital image. Compression is achieved by the removal of one or more of three basic data redundancies: coding redundancy, which is present when less than optimal code words are used; inter-pixel redundancy, which results from correlations between the pixels of an image; and psycho-visual redundancy, which is due to data that is ignored by the human visual system. Huffman codes contain the smallest possible number of code symbols per source symbol, subject to the constraint that the source symbols are coded one at a time. Huffman coding, when combined with the technique of reducing image redundancies using the Discrete Cosine Transform, therefore helps compress image data to a very good extent. The Discrete Cosine Transform is an example of transform coding, and the current JPEG standard uses the DCT as its basis. The DCT relocates the highest energies to the upper-left corner of the image, while the lesser energy or information is relocated into other areas. The DCT is fast: it can be quickly calculated and is best for images with smooth edges, like photos with human subjects. The DCT coefficients are all real numbers, unlike those of the Fourier Transform, and the Inverse Discrete Cosine Transform can be used to retrieve the image from its transform representation.

In 1974, N. Ahmed, T. Natarajan, and K. R. Rao, in the paper "Discrete Cosine Transform," defined the discrete cosine transform (DCT) and developed an algorithm to compute it using the fast Fourier transform. They showed that the discrete cosine transform can be used in the area of digital processing for the purposes of pattern recognition and Wiener filtering. Its performance is compared with that of a class of orthogonal transforms and is found to compare closely to that of the Karhunen-Loève transform, which is known to be optimal. The performances of the Karhunen-Loève and discrete cosine transforms are also found to compare closely with respect to the rate-distortion criterion.

In 2008, in the paper "Context based medical image compression with application to ultrasound images," the authors Ansari, M. A., and Anand, R. S., discussed context-based compression of medical images. Compression is essential for medical images, which need reduced transmission and storage time and cost. In that paper a context-based coding is performed in which the rate of compression is better than that of other JPEG compression methods: the input image is encoded with a low compression rate and the background with a high compression rate. The results showed that a very high compression rate with better quality is obtained compared to previous results.

In October 2013, Bhavani, S., and Thanushkodi, K. G., in their paper "Comparison of fractal coding methods for medical image compression," developed a novel quasi-lossless fractal coding scheme.
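The energy-compaction behavior described above is easy to verify. Here is a minimal Python sketch (assuming SciPy is available; the smooth synthetic block stands in for a natural image patch): take the 2-D DCT, keep only the low-frequency corner, and reconstruct with the inverse DCT.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
# Smooth 32x32 test block with a little noise, mimicking a natural image patch.
u = np.linspace(0, np.pi, 32)
block = np.outer(np.sin(u), np.cos(u)) + 0.01 * rng.standard_normal((32, 32))

C = dctn(block, norm="ortho")     # 2-D DCT: energy piles up in the upper-left corner
kept = np.zeros_like(C)
kept[:8, :8] = C[:8, :8]          # keep only the 8x8 low-frequency corner (~6% of coeffs)

recon = idctn(kept, norm="ortho") # inverse DCT from the retained coefficients
energy = (kept ** 2).sum() / (C ** 2).sum()
print(f"energy kept: {energy:.4f}, max reconstruction error: {np.abs(block - recon).max():.4f}")
```

For a smooth block almost all the energy survives the truncation, which is precisely the property JPEG-style coders exploit before quantization and entropy coding.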
They used the fractal compression scheme for compression of medical images. In their work they considered a good-quality portion of the picture as the domain part, from which the remaining parts of the image are obtained; thus they mostly gave importance to the reduction of the time required for encoding. The experimental results showed a better compression rate and also a reduced encoding time.

In 2014, in the paper "Medical Image Compression Using Ripplet Transform," the author Dhaarani, C., discussed a new compression approach, the ripplet transform, for representing medical images, so that the obtained compression ratio is far better and the error is reduced compared to previous systems. The experimental results showed that the SNR and compression ratio obtained are better than those of the existing systems.

# III. Existing Model

Many image compression techniques have been developed previously, such as the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Discrete Kekre Transform (DKT), which play a very important role in the image compression process, especially for medical images. In the DCT technique, the image is divided into different parts according to their rates, and then all the parts are quantized in order to compress them. In this technique, however, the image parts that have spatial correlation are given more importance for compression, while the other neighboring pixels are neglected. In comparison with the DCT, the DWT gives a better and higher compression ratio. The DWT is a better choice when a high compression ratio is needed, but it is a very slow process: the input image is filtered to obtain sub-band codes, and each code is compressed separately. The EBCOT algorithm uses the same technique for compressing images; the image is divided into a number of sub-bands, which are distributed into many code blocks, and each of these code blocks is compressed separately. Another algorithm, the quad-tree, checks all the minimum and maximum pixels and performs compression accordingly. When techniques such as the DCT, DWT, and DKT are used together, the combined technique is called the Hybrid Wavelet Transform (HWT). Thus, for better compression, both a good compression ratio and a good compression speed need to be obtained, and with high image quality. Reconstruction of a losslessly compressed image is easier than reconstruction of a compressed image that has suffered loss; reconstruction of the compressed image is studied in this paper.

# Figure : Lossless compression (source image → predictor → entropy encoder → compressed data, with table specifications)

# IV. Proposed Model

Decompression leads to reconstruction of the compressed image into its original form, or at least approximately into its original format. Once the compressed image has been transferred and reaches the destination, it needs to be decompressed. Here we develop an image reconstruction model for compressed medical images that removes redundant data from biomedical images or signals without losing critical information. In order to solve the problem of reconstruction of the compressed medical image, we propose two splitting algorithms. The reconstruction problem is defined as follows:

$$\min_{x \in \mathbb{R}^n} F(x) = f(x) + \sum_{i=1}^{m} g_i(T_i x) \tag{1}$$

where $f$ is the loss function, the $g_i$ are convex functions, and the $T_i$ are orthogonal matrices.
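Equation (1) leaves $f$ and the $g_i$ unspecified. As an illustration only, the Python sketch below instantiates it with a hypothetical least-squares loss $f(x) = \tfrac{1}{2}\|Ax - b\|^2$ and $m = 2$ L1 terms $g_i = \lambda\|\cdot\|_1$ with orthogonal $T_i$ (the identity and a permutation, both of which preserve the L1 norm, so the regularizer collapses to $2\lambda\|x\|_1$), and minimizes it with proximal-gradient iterations of the kind formalized as updates (4) and (6)-(8) below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_rows = 80, 50
A = rng.standard_normal((m_rows, n))          # hypothetical smooth-loss operator
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
b = A @ x_true
lam = 0.05

def grad_f(x):                                # gradient of f(x) = 0.5 * ||A x - b||^2
    return A.T @ (A @ x - b)

def prox(x, t):                               # prox of t * 2*lam * ||.||_1: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - 2.0 * lam * t, 0.0)

def F(x):                                     # composite objective of equation (1)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + 2.0 * lam * np.abs(x).sum()

mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L_f with L_f = ||A||_2^2

# Plain proximal-gradient iteration, cf. the RLTA update (4) below.
x = np.zeros(n)
for _ in range(300):
    x = prox(x - mu * grad_f(x), mu)

# Momentum variant, cf. the MRLTA updates (6)-(8) below.
xm, y, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(300):
    x_new = prox(y - mu * grad_f(y), mu)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + ((t - 1.0) / t_new) * (x_new - xm)
    xm, t = x_new, t_new

print(f"plain: F = {F(x):.4f}   momentum: F = {F(xm):.4f}")
```

On this toy instance the momentum variant typically reaches a lower objective value after the same number of iterations, mirroring the $O(1/k^2)$ versus $O(1/k)$ convergence bounds discussed next.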
The two algorithms used for solving this problem are (1) the Repetitive Loss-Thresholding Algorithm (RLTA) and (2) the Modified RLTA (MRLTA). The modified version has a complexity bound of $O(1/k^2)$, where $x^*$ represents an optimal solution (see (9) below). Both algorithms are based on variable as well as operator splitting. First, the big compressed image is split into $m$ smaller sub-images by:

a) splitting the function $g(x)$ into $m$ smaller sub-functions $g_i(x)$;
b) splitting the variable $x$ into $m$ smaller variables $\{x_i\}$, $i = 1, 2, \ldots, m$;
c) splitting each operator so that each term $g_i(T_i x)$ can be handled independently;
d) finally, solving to find the value of $x$.

First the original compressed image is taken and divided into a large number of smaller images. Then, for each split smaller part, the reconstruction algorithms are applied separately. Once the reconstruction of each smaller part of the compressed image is done, the parts are combined to form the decompressed original image. When these algorithms are applied to the reconstruction of the compressed image, they need to solve a problem of the form

$$\min \{F(x) \equiv f(x) + g(x)\}, \quad x \in \mathbb{R}^n \tag{2}$$

where $g : \mathbb{R}^n \to \mathbb{R}$ is a non-smooth function and $f$ is a smooth function. The proximal map of $g(x)$ is

$$\mathrm{prox}_{\mu}(g)(x) \triangleq \arg\min_{u} \left\{ g(u) + \frac{1}{2\mu} \|u - x\|^2 \right\} \tag{3}$$

The algorithms used for the reconstruction technique are discussed below.

(1) RLTA

Input: $\mu = 1/L_f$, where $L_f$ is the Lipschitz constant of $\nabla f$. In mathematical analysis, Lipschitz continuity is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: there exists a definite real number such that, for every pair of points on the graph of the function, the absolute value of the slope of the line connecting them is not greater than this real number; this bound is called the function's "Lipschitz constant".

Repeat
For $k = 1$ to $K$ do
$$x_k = \mathrm{prox}_{\mu}(g)\big(x_{k-1} - \mu \nabla f(x_{k-1})\big) \tag{4}$$
End for
Until stop

Thus the compressed image, which constitutes the problem at hand, is taken, and splitting of the image is carried out, resulting in minimization of the problem.

(2) MRLTA

Repeat
For $k = 1$ to $K$ do
$$x_k = \mathrm{prox}_{\mu}(g)\big(y_k - \mu \nabla f(y_k)\big) \tag{6}$$
$$t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2} \tag{7}$$
$$y_{k+1} = x_k + \frac{t_k - 1}{t_{k+1}} (x_k - x_{k-1}) \tag{8}$$
End for
Until stop

The MRLTA algorithm results in a reduction of the objective function, which satisfies

$$F(x_k) - F(x^*) \le \frac{2 L_f \|x_0 - x^*\|^2}{(k+1)^2}, \quad \forall x^* \in X^* \tag{9}$$

Thus MRLTA mostly depends on the reduction of the auxiliary variable $y_k$. Once both reconstruction algorithms have been applied to the split smaller images, the compression problems of the split smaller images are reconstructed, and these reconstructed smaller images are then added up to form the original decompressed image.

# V. Experimental Result

We implemented the method practically using MATLAB 2013b, running the experiment with different MRI images as input. Both reconstruction processes produced the same output; the only difference was that the second algorithm was faster and consumed less iteration time. The results for different images are shown in Figures 1-9; for the MR chest image the measured values were:

RLTA: Iter_time = 2.25 sec, Iteration Num = 50, k = 12152, rec_err = 1.073534e-01, samp. ratio = 0.251074, func. value = 58621.871883
MRLTA: Iter_time = 1.92 sec, Iteration Num = 50, k = 12152, rec_err = 1.024063e-01, samp. ratio = 0.251074, func. value = 48455.171021
PSNR of the reconstructed image = 53.1627 dB

# VI. Conclusion

The reconstruction of a compressed image, so that it can be processed by the destination user, is very important. Here we have presented a theoretical and computational investigation of compressing an image and getting back the original image. A dequantization technique was applied to the decompressed image, and in the first part we obtained a fairly good compression result. In this paper we presented two algorithms to address the complexity of reconstructing the compressed image to its original format. Initially the compressed image is treated as one big problem, which is split into smaller problems by the splitting algorithm RLTA, and then each sub-problem is averaged over different iterations to obtain the original image by the MRLTA. The proposed splitting algorithms were applied to the reconstruction of compressed medical images and to low-rank tensor completion, and the experimental results showed better performance. Further on, the decompression algorithm can also be used to enlarge any grayscale image, and decompression could then be carried out as a future course of action.
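For reference alongside the result figures, the PSNR quoted above is, we assume, the standard peak signal-to-noise ratio for 8-bit images. A minimal Python sketch of its computation, with a dummy image pair in place of the real reconstructions:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)                  # mean squared reconstruction error
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Dummy 8-bit image pair: the "reconstruction" differs only by light noise.
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rec = np.clip(img + rng.normal(0.0, 0.5, img.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(img, rec):.2f} dB")
```

Higher PSNR means the reconstruction is closer to the original; values above roughly 40 dB are generally considered visually indistinguishable for natural images.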
Figure 1: Original heart image
Figure 2: Heart image after compression
Figure 3: Heart image during reconstruction process
Figure 4: Heart image after reconstruction
Figure 5: Heart image after reconstruction process
Figure 6: Original MR chest image
Figure 7: MR chest image after compression
Figure 8: MR chest image during reconstruction process
Figure 9: MR chest image after reconstruction

# References

1. M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: the application of compressed sensing for MR imaging," Magnetic Resonance in Medicine, 2007.
2. S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, "An efficient algorithm for compressed MR imaging using total variation and wavelets," in Proceedings of CVPR, 2008.
3. J. Yang, Y. Zhang, and W. Yin, "A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data," IEEE J. Sel. Top. Signal Process., vol. 4, no. 2, 2010.
4. D. Gabay, "Applications of the method of multipliers to variational inequalities," in Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems, North-Holland, Amsterdam, 1983.
5. P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," SIAM Multiscale Model. Simul., 2008.
6. Z. Baharav, D. Malah, and E. Karnin, "Hierarchical interpretation of fractal image coding and its applications to fast decoding," in Intl. Conf. on Digital Signal Processing, 1993.
7. P. Suapang, K. Dejhan, and S. Yimmun, "A web-based DICOM-format image archive, medical image compression and DICOM viewer system for teleradiology application," in Proceedings of SICE Annual Conference 2010, Aug. 2010, p. 3011.
8. M. A. Ansari and R. S. Anand, "Context based medical image compression with application to ultrasound images," in India Conference (INDICON), Dec. 2008, doi: 10.1109/INDCON.2008.4768796.
9. A. M. Rufai, G. Anbarjafari, and H. Demirel, "Lossy medical image compression using Huffman coding and singular value decomposition," in 21st Signal Processing and Communications Applications Conference (SIU), April 2013.
10. S. Bhavani and K. G. Thanushkodi, "Comparison of fractal coding methods for medical image compression," IET Image Processing, vol. 7, no. 7, p. 686, October 2013.
11. C. Dhaarani, D. Venugopal, and A. S. Raja, "Medical Image Compression Using Ripplet Transform," in International Conference on Intelligent Computing Applications (ICICA), 6-7 March 2014, pp. 233-238.
12. S. Chandra and W. W. Hsu, "Lossless Medical Image Compression in a Block-Based Storage System," in Data Compression Conference (DCC), March 2014, p. 400.
13. M. F. Ukrit and G. R. Suresh, "Effective lossless compression for medical image sequences using composite algorithm," in International Conference on Circuits, Power and Computing Technologies (ICCPCT), 20-21 March 2013, pp. 1122-1126.
14. R. Pizzolante, B. Carpentieri, and A. Castiglione, "A Secure Low Complexity Approach for Compression and Transmission of 3-D Medical Images," in Eighth International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), 28-30 Oct. 2013, pp. 387-392.
15. P. V. Kumari and K. Thanushkodi, "A secure fast 2D-Discrete fractional fourier transform based medical image compression using hybrid encoding technique," in International Conference on Current Trends in Engineering and Technology (ICCTET), July 2013.
16. R. Shah, P. Sharma, and R. Shah, "Performance Analysis of Region of Interest Based Compression Method for Medical Images," in Fourth International Conference on Advanced Computing & Communication Technologies (ACCT), 8-9 Feb. 2014, pp. 53-58.
17. R. Khan, M. Talha, A. S. Khattak, and M. Qasim, "Realization of Balanced Contrast Limited Adaptive Histogram Equalization (B-CLAHE) for Adaptive Dynamic Range Compression of real time medical images," in 10th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Jan. 2013, p. 121.
18. N. Nagaraj, "A very low-complexity multi-resolution prediction-based wavelet transform method for medical image compression," in TENCON 2003: Conference on Convergent Technologies for the Asia-Pacific Region, Oct. 2003.
19. S. Martucci, "Symmetric convolution and the discrete sine and cosine transforms," IEEE.