# Introduction

Wavelet-based image coding such as SPIHT, offered by Said and Pearlman in 1996, is more widely used in the field of image compression than other methods because of its high compression performance and many other features [1]. These algorithms have embedded coding, enabling simple bit-rate control with progressive transmission of information for a wavelet-transformed image. SPIHT is a fully embedded codec: it offers good image quality and high PSNR, is optimized for progressive image transmission, combines efficiently with error protection, and delivers information on demand, so the need for powerful error correction diminishes from beginning to end. A further advantage is that only a small part of the file need be downloaded to obtain usable results, and it produces a very compact output bit stream with large bit variability, with no additional entropy coding required. In addition, it is capable of progressive image transmission [2]. Though SPIHT has many advantages over other compression methods, it still requires many modifications, because it also has several disadvantages: a single bit error introduces significant image distortion, depending upon its location. It has a bit-synchronization property, since the loss of a single bit can result in complete misinterpretation at the decoder side. It only implicitly locates the positions of significant coefficients, so it is hard to perform operations on the compressed data, which is wholly dependent on the positions of the significant transform values. Various improvements to SPIHT have therefore been made in previous years with respect to storage requirement, redundancy, quality, error resilience, complexity, speed, and compression ratio. This paper surveys the developments made to SPIHT since its introduction in 1996 in terms of the above factors. The paper is organized as follows: Section II reviews the principles behind compression and the original SPIHT algorithm, Section V surveys the various modifications made to SPIHT, and Section VI concludes the paper.

# II. Principles Behind Compression

A feature of the majority of images is that the neighboring pixels are correlated and consequently carry redundant information. The foremost task, then, is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at eliminating duplication in the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three sorts of redundancy can be identified:

* Spatial redundancy, or correlation between neighboring pixels.
* Spectral redundancy, or correlation between different color planes or spectral bands.
* Temporal redundancy, or correlation between adjacent frames in a sequence of images (in video applications).

Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible. In numerous areas, digitized images are replacing conventional analogue images such as photographs or X-rays. The quantity of data required to describe such images greatly slows transmission and makes storage prohibitively expensive. The information contained in images must therefore be compressed by extracting only the perceptible elements, which are subsequently encoded; the quantity of data involved is thus reduced substantially. The fundamental goal of image compression is to lower the bit rate for transmission or storage while maintaining an acceptable fidelity or image quality.
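Spatial redundancy is easy to verify numerically. The snippet below is a small, self-contained illustration (plain Python, with a synthetic smooth image standing in for a natural one, so the exact numbers are illustrative only): it estimates the correlation between horizontally adjacent pixels, which for smooth content comes out close to 1.

```python
import random

# Synthetic "image": a smooth horizontal gradient plus mild noise,
# a stand-in for the smooth regions that dominate natural images.
W = 64
img = [[x + random.gauss(0, 2) for x in range(W)] for _ in range(W)]

# Collect all horizontally adjacent pixel pairs and compute their
# Pearson correlation coefficient.
pairs = [(img[r][c], img[r][c + 1]) for r in range(W) for c in range(W - 1)]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
vx = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
vy = sum((y - my) ** 2 for _, y in pairs) / len(pairs)
print("adjacent-pixel correlation:", cov / (vx * vy) ** 0.5)  # ~0.99
```

A decorrelating transform such as the wavelet transform, discussed next, is what turns this redundancy into compressible structure.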
One of the most successful applications of wavelet techniques is transform-based image compression (also called coding). While the nature of the wavelet decomposition results in superior energy compaction and perceptual quality of the decompressed image, the overlapping feature of the wavelet transform also alleviates blocking artifacts. What is more, the multiresolution transform domain means that wavelet compression methods degrade much more gracefully than block-DCT methods as the compression ratio increases. Because a wavelet basis consists of functions with both short support (for high frequencies) and long support (for low frequencies), large smooth areas of an image may be represented with very few bits, and detail added where it is needed [27]. Wavelet-based coding [27] provides substantial improvements in image quality at higher compression ratios. In the last several years, a variety of powerful and innovative wavelet-based techniques for image compression, as discussed later, have been developed and implemented. On account of their numerous advantages, wavelet-based compression algorithms were the suitable candidates for the new JPEG 2000 standard [34]. The loss of information is introduced by the quantization stage, which purposefully discards less important parts of the image information. Because of their excellent energy compaction properties and correspondence with the human visual system, wavelet compression techniques have produced superior objective and subjective results [4]. With wavelets, a compression ratio as high as 1:300 is achievable [22]. Wavelet compression allows the integration of various compression techniques into one algorithm. With lossless compression, the original image is recovered exactly after decompression. Unfortunately, with images of natural scenes it is rarely possible to obtain error-free compression at a rate beyond 2:1 [22]. Much higher compression ratios can be obtained if some error, which is usually difficult to perceive, is allowed between the decompressed image and the original image. This is lossy compression. In many circumstances, it is neither necessary nor desirable that there be error-free reproduction of the original image; in such cases, the small amount of error introduced by lossy compression may be acceptable. Lossy compression is also acceptable for rapid transmission of still images over the Web [22]. Over the past few years, various novel and sophisticated wavelet-based image coding schemes have been developed. These include the Embedded Zerotree Wavelet (EZW) coder [13], Set Partitioning in Hierarchical Trees (SPIHT) [1], and the Set Partitioned Embedded bloCK coder (SPECK). The SPIHT coder [1], [2] is a highly refined variant of the EZW algorithm; it is a powerful image compression algorithm that generates an embedded bit stream from which the best reconstructed images in the mean-square-error sense can be extracted at various bit rates. Some of the best results (highest PSNR values for given compression ratios) for a wide variety of images have been obtained with SPIHT. Hence, it has become the benchmark state-of-the-art algorithm for image compression [22].

# Set partitioning sorting algorithm

One of the main features of the SPIHT algorithm is that the ordering information is not explicitly transmitted. Instead, it relies on the fact that the execution path of any algorithm is determined by the outcomes of the comparisons at its branching points.
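This execution-path idea can be illustrated with a deliberately tiny sketch (plain Python, toy data, not the actual SPIHT set-partitioning logic): the encoder's only output is the outcome of its own branching tests, so a decoder that replays the same loop recovers exactly which positions were found significant, without any indices ever being transmitted.

```python
coeffs = [34, 3, -20, 7]   # toy wavelet coefficients (encoder side only)
threshold = 16             # current bit-plane threshold 2**n

# Encoder: emit one bit per branching test, in a fixed scan order.
bitstream = [1 if abs(c) >= threshold else 0 for c in coeffs]

# Decoder: replays the same loop, consuming one bit per test, and thereby
# learns the positions of the significant coefficients from the path alone.
significant = [i for i, b in enumerate(bitstream) if b == 1]
print(bitstream, significant)  # [1, 0, 1, 0] [0, 2]
```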
Thus, if the encoder and decoder use the same sorting algorithm, the decoder can duplicate the encoder's execution path when it receives the outcomes of the magnitude comparisons, and the ordering information can be recovered from the execution path. One important fact in the design of the sorting algorithm is that there is no need to sort all coefficients. In fact, an algorithm only needs to select, for each bit plane $n$, the coefficients satisfying $2^n \le |c_{i,j}| < 2^{n+1}$; this is done by testing subsets $T_m$ of coefficients with the question: is $\max_{(i,j) \in T_m} |c_{i,j}| \ge 2^n$? If the decoder receives a "no" as the answer, that is, the subset is insignificant, then it knows that all coefficients in $T_m$ are insignificant. If the answer is "yes", that is, the subset is significant, then a fixed rule shared by the encoder and decoder is used to partition $T_m$ into new subsets, and the significance test is then applied to those new subsets. This set-partitioning process continues until the magnitude test has been applied to all single-coordinate significant subsets, thereby identifying each significant coefficient. To reduce the number of magnitude comparisons, a set-partitioning rule that exploits the expected ordering in the structure defined by the subband pyramid is used. The objective is to create new partitions such that subsets expected to be insignificant contain a large number of elements, while subsets expected to be significant contain only one element. The relationship between magnitude comparisons and message bits is given by the significance function

$$S_n(T) = \begin{cases} 1, & \max_{(i,j) \in T} |c_{i,j}| \ge 2^n, \\ 0, & \text{otherwise.} \end{cases}$$

# Spatial orientation trees

Typically, most of an image's energy is concentrated in the low-frequency components. Consequently, the variance decreases as we move from the highest to the lowest levels of the subband pyramid. There is spatial self-similarity between subbands, and the coefficients are expected to be better magnitude-ordered as we move downward in the pyramid along the same spatial orientation. A tree structure, called a spatial orientation tree, naturally defines the spatial relationship in the hierarchical pyramid. The figure shows how a spatial orientation tree is defined in a pyramid constructed with recursive four-band splitting. Each node of the tree corresponds to a pixel and is identified by the pixel coordinate. Its direct descendants (offspring) correspond to the pixels of the same spatial orientation in the next finer level of the pyramid. The tree is defined in such a way that each node has either no offspring or four offspring, which always form a group of 2x2 adjacent pixels. The pixels in the highest level of the pyramid are the tree roots and are also grouped in 2x2 adjacent pixels. However, their offspring branching is different, and in each group one pixel (indicated by the star in the figure) has no descendants. Parts of the spatial orientation trees serve as the partitioning subsets in the sorting algorithm. With this algorithm the rate can be precisely controlled, because the transmitted information is formed of single bits; the encoder can stop at a desired distortion value and estimate the progressive distortion reduction. In the algorithm, all branching decisions based on the significance data $S_n$, which can only be computed with knowledge of $c_{i,j}$, are output by the encoder. Thus, to obtain the desired decoder algorithm, which duplicates the encoder's execution path as it sorts the significant coefficients, the words "output" and "input" in the algorithm simply have to be exchanged. The ordering information is recovered when the coordinates of the significant coefficients are added to the end of the LSP; that is, the coefficients pointed to by the coordinates in the LSP are sorted.
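The significance function $S_n(T)$ defined above translates directly into code. A minimal sketch (plain Python, toy data) follows, with `coeffs` holding the wavelet coefficients and `T` a set of coordinates:

```python
def significance(coeffs, T, n):
    """S_n(T): 1 if the largest |c_ij| over the coordinate set T
    reaches the current bit-plane threshold 2**n, else 0."""
    return 1 if max(abs(coeffs[i][j]) for (i, j) in T) >= 2 ** n else 0

c = [[26, 6], [-13, 10]]
print(significance(c, {(0, 0), (0, 1), (1, 0), (1, 1)}, n=4))  # 1: 26 >= 16
print(significance(c, {(0, 1), (1, 1)}, n=4))                  # 0: max is 10
```

A "0" answer disposes of a whole set at the cost of a single bit, which is where the savings over coefficient-by-coefficient testing come from.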
Whenever the decoder inputs data, its three control lists (LIS, LIP, and LSP) are identical to those used by the encoder at the moment it output that data, which means that the decoder indeed recovers the ordering from the execution path. It is easy to see that with this scheme encoding and decoding have the same computational complexity. An additional task performed by the decoder is to update the reconstructed image. Since the value of $n$ is known each time a coordinate is moved to the LSP, the decoder uses that information, plus the sign bit that is input just after the insertion into the LSP, to set $\hat{c}_{i,j} = \pm 1.5 \times 2^n$. Similarly, during the refinement pass the decoder adds or subtracts $2^{n-1}$ to or from $\hat{c}_{i,j}$ when it inputs the bits of the binary representation of $|c_{i,j}|$. In this manner the distortion gradually decreases through both the sorting and refinement passes. The SPIHT method is not a simple extension of traditional methods for image compression; what makes it truly excellent is that it yields all of these qualities simultaneously. Table 1 gives the compression ratio and PSNR results for the SPIHT algorithm. It can be observed that the compression ratio increases when the number of decomposition levels is increased. This is because, when the level of decomposition is increased, coefficients with higher magnitudes concentrate mostly in the coarser levels, while the majority of the coefficients have small magnitudes and require fewer bits to be transmitted. Hence the compression ratio improves when the decomposition level is raised; however, the resolution of the reconstructed image decreases for higher decomposition levels. The perceptual image quality, moreover, is not guaranteed to be optimal, as seen from Fig. 2, because the coder is not designed to explicitly account for human visual system (HVS) characteristics. Substantial HVS research has shown that there are three perceptually important region types within an image: smooth, edge, and textured or detailed regions [20]. By integrating the sensitivity of the HVS to these regions into image compression techniques such as SPIHT, the perceptual quality of the images could be improved at all bit rates. The efficiency of the algorithm can also be enhanced by entropy-coding its output, but at the expense of a longer coding/decoding time: the significance values are not equally probable, and there is a statistical dependence between $S_n(i,j)$ and $S_n(D(i,j))$, and also between the significance of adjacent pixels.

# SPIHT in Image Compression

SPIHT is a very powerful image compression technique released in 1996. It is a fully embedded wavelet coding algorithm that progressively refines the most significant coefficients in order of decreasing energy levels. It is an advanced version of the Embedded Zerotree Wavelet (EZW) coder, based on the construction of coefficient trees and successive approximations that can be implemented as bit-plane processing. Due to its successive-approximation character, it is SNR scalable, though at the expense of sacrificing spatial scalability. SPIHT combines two concepts: transmitting the most significant bits first, and ordering the coefficients by magnitude. It maintains three lists: LIP, LIS, and LSP [1]. The LIP is the list of insignificant pixels, storing those pixels that are insignificant with respect to a certain threshold. The LIS is the list of insignificant sets, holding those sets whose every pixel is below the given threshold. The LSP is the list of significant pixels, containing those pixels that are significant with respect to the threshold; a pixel is significant if its value is greater than or equal to the threshold. Another variant of SPIHT, no-list SPIHT (NLS), was introduced, suitable for fast and simple hardware implementation. Instead of lists, a state table with a nibble per coefficient keeps track of the encoded information and set partitions.
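Before continuing with NLS, the following deliberately simplified bit-plane coder may make the roles of the lists concrete. It is a sketch only (plain Python; the function name is hypothetical, and the LIS set-partitioning machinery of real SPIHT is omitted, so only the LIP and LSP appear):

```python
def simplified_bitplane_coder(coeffs, n_planes):
    """Toy coder: LIP/LSP bookkeeping only, no LIS set partitioning."""
    lip = list(coeffs)          # list of insignificant pixels (so far)
    lsp = []                    # list of significant pixels, discovery order
    bits = []
    for n in range(n_planes - 1, -1, -1):
        T = 2 ** n
        # Sorting pass: test every LIP entry against the current threshold.
        still_insignificant = []
        for pos in lip:
            if abs(coeffs[pos]) >= T:
                bits += [1, 0 if coeffs[pos] >= 0 else 1]  # significance + sign
                lsp.append((pos, n))
            else:
                bits.append(0)
                still_insignificant.append(pos)
        lip = still_insignificant
        # Refinement pass: one more magnitude bit for previously found pixels.
        for pos, found_at in lsp:
            if found_at > n:
                bits.append((abs(coeffs[pos]) >> n) & 1)
    return bits

coeffs = {(0, 0): 34, (0, 1): -20, (1, 0): 7, (1, 1): 3}
print(simplified_bitplane_coder(coeffs, n_planes=6))
```

Because the output is produced strictly in threshold order, truncating `bits` anywhere still yields the best available approximation, which is the embedded property discussed above.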
NLS sparsely marks selected descendant nodes of insignificant trees in such a manner that large groups of predictably insignificant pixels are easily identified and skipped during the coding process. The image data is stored in a one-dimensional recursive zigzag array for algorithmic simplicity and computational efficiency. The performance of NLS is virtually identical to SPIHT. The original SPIHT has several drawbacks, and numerous improvements have therefore been made to it.

# V. Contemporary Affirmation of Recent Modifications in SPIHT

The SPIHT method has many advantages: it is a fully embedded codec, offers good image quality and high PSNR, is optimized for progressive image transmission, combines effectively with error protection, and delivers information on demand, so the need for strong error correction diminishes from start to finish. It has the further advantage that downloading just a small section of the file yields considerably more usable results, and it produces a very compact output bit stream with large bit variability and no added entropy coding required. In addition, it is capable of progressive image transmission. Still, it has several disadvantages: significant image distortion is introduced by a single bit error, depending on its location. It has a bit-synchronization property, as the loss of a single bit can result in complete misinterpretation at the decoder side. It implicitly locates the positions of significant coefficients, so it is hard to perform operations on the compressed data, which is wholly dependent on the positions of the significant transform values. Therefore various improvements to SPIHT have been made in previous years with respect to quality, redundancy, speed, error resilience, complexity, storage requirement, and compression ratio.

An advancement in speed was made for multispectral images by an algorithm released by Minghe and Cuixiang [3]. The original algorithm involves a great deal of unnecessary searching, which greatly reduces encoding speed and increases the demand for time and space. To solve this speed problem, the authors introduced a fast lookup algorithm that reads the wavelet coefficient matrix only once to determine the significance of all the sets D(i,j) and L(i,j) needed during the execution of SPIHT. Another algorithm, Block-Based Pass-Parallel SPIHT [4], was introduced; it decomposes a wavelet-transformed image into 4x4 blocks and simultaneously encodes all the bits in a bit plane of a 4x4 block. The pre-calculation of the stream length of every pass enables the concurrent and pipelined execution of the three passes by not only an encoder but also a decoder. The change of the processing order somewhat degrades the compression performance, and hence PSNR decreases, but the speed increases. With regard to decreasing redundancy, an embedded image compression scheme using differential coding and optimization [5] has been proposed. To reduce the redundancy among the coefficients during coding in the wavelet domain, a differential scheme is applied to the standard quantization of the wavelet coefficients: the probability of significance in each subband is modeled from the statistical characteristics of the coefficients' distribution, the sorting pass is altered, and the differential process is optimized so as to minimize the coding redundancy in each subband. The image coding results, computed at given thresholds, demonstrate that through differential optimization the compression rate improves, and the quality of the reconstructed image is also raised substantially.
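Several of the speed-oriented ideas above, in particular the single-scan lookup of [3], amount to precomputing set significance so that each later test becomes a table lookup. The following is a sketch of that precomputation only, not the published algorithm (plain Python; it assumes the standard SPIHT offspring map and treats the pyramid roots as a special case, as the real algorithm does):

```python
def descendant_max(c):
    """One bottom-up scan recording, for every node (i, j), the largest
    |coefficient| among its descendants under the usual SPIHT offspring
    map (i, j) -> {(2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1)}."""
    N = len(c)
    dmax = [[0] * N for _ in range(N)]
    for i in range(N // 2 - 1, -1, -1):
        for j in range(N // 2 - 1, -1, -1):
            if i == 0 and j == 0:
                continue  # pyramid roots follow a special offspring rule
            kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
                    (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
            dmax[i][j] = max(max(abs(c[x][y]), dmax[x][y]) for x, y in kids)
    return dmax
```

With `dmax` in hand, the significance test for a descendant set D(i,j) at threshold 2**n reduces to the O(1) comparison `dmax[i][j] >= 2 ** n`, instead of a fresh search over the subtree on every pass.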
Another set of improvements proceeds in five steps. First, a new kind of tree with a virtual root was introduced to hold more wavelet coefficients. Second, an extra matrix was used to speed up the judgment of the significance of trees. Third, pre-processing is performed to smooth the coefficients before SPIHT coding. Fourth, some predictable bits are omitted from the encoder output by rearranging the coding procedure. Finally, the quantization starts from the midpoint, in line with the statistics. Experiments demonstrate that these improvements raise PSNR by up to 5 dB at very low rates, and the typical improvement at 0.21 bpp is about 0.5 dB for the standard test images used. Computational complexity is also decreased. Yet another scheme, designed to enhance the robustness of a SPIHT-based color image coder for transmission over noisy channels, has been suggested [7]. In this scheme, the SPIHT bit streams are rearranged according to their spatial (square block) representation without loss in coding efficiency. A group of blocks, called a slice, is transmitted independently. The first erroneous block within the slice is detected by examining error-checking conditions while decoding the image. When a transmission error is detected, a series of bit-skipping and repeated decoding steps is performed on that part of the bit stream until a viable solution is found. The simulation results demonstrate that a vital quality improvement is achieved through this scheme. Another effort to improve image quality was put forward by Tung and Chen [8] to enhance the progressive image transmission (PIT) of SPIHT. The unique point about this progressive image transmission is that SPIHT regards the bit streams acquired from every truncation as one transmission stage; in every truncation, SPIHT not only refines the significant coefficients from that truncation but additionally re-refines the significant coefficients extracted from the previous phases. The approach introduced in that work is that, in certain transmission phases, the refined and re-refined bit streams are not sent immediately and are replaced by the bit streams corresponding to the significant coefficients of the truncation; the refined bit streams are sent later. According to the results, this approach achieves better image quality in every PIT phase than the original SPIHT. An effective source and channel coding scheme for progressive image transmission over noisy channels [9] has also been proposed. It was shown that, with a small number of additional redundancies, error detection can be incorporated into arithmetic coding. This system is utilized to provide error detection and also to improve the channel decoder performance by means of a combined source-channel decoding structure. Moreover, an Unequal Error Protection (UEP) scheme, which applies error-detection coding only to a subset of bits depending on the error distribution, is introduced. Yet another algorithm for error resilience, commonly referred to as Color SPIHT (CSPIHT), was created [10] to encode and quantize wavelet coefficients; it has excellent rate-distortion characteristics in noise-free environments. However, in the presence of noise it is very sensitive to bit errors. It was observed that different bits have different degrees of vulnerability to errors: an error in some of the bits (critical bits) causes severe degradation, while errors in other bits (non-critical bits) have minimal effect on the reconstructed images.
In that work, an unequal error protection (UEP) scheme is proposed in which, within each bit plane, the bits are reorganized according to their vulnerability to errors, and the critical bits are then protected asymmetrically using RCPC codes. The protection is decreased for higher bit planes by changing the puncturing rate. The simulation results demonstrate an improvement of 5-15 dB in the quality of the reconstructed images, in comparison with equal error protection (EEP) and the unprotected CSPIHT bit stream. Recently, another technique for error resilience was proposed by Hu, Pearlman, and Li [11], in which a novel data representation known as the progressive significance map is proposed for error-resilient significance-map coding. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). This kind of structured representation of the sig-map improves its error-resilience property at the cost of only a minimal compromise in compression efficiency. Simulation results have shown that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. A variant of SPIHT, called no-list SPIHT (NLS) [12], which operates without linked lists and has fixed memory requirements, has been proposed for reducing the complexity. Here, instead of lists, a state table with a nibble per coefficient keeps track of the encoded information and set partitions. Image data is stored in a one-dimensional recursive zigzag array for computational efficiency and simplicity. Yet another algorithm, suggested by Oliver and Malumbres [13], changes the sorting process of the wavelet coefficients, replaces the original tree structure with a one-dimensional array, and alters the significance-judgment basis of the wavelet coefficients. The results of their studies suggest that memory space is saved and the complexity of the algorithm is reduced. Another algorithm has been suggested for wavelet-based image compression using the zerotree concept in the wavelet-decomposed image [14]. The algorithm has a large advantage over previously developed wavelet-based image compression algorithms, as it exploits intra-band and inter-band correlation simultaneously. Besides the improvement in coding performance, the algorithm also uses significantly less storage for calculations and coding, thus reducing the complexity. A further striking feature is its independent coding structure, which makes it suitable for use with error protection schemes and less susceptible to data loss over a noisy communication channel. The algorithm codes each of the color bands independently, thereby enabling differential coding of the color information. A paper by Sun, Zhang, and Hu [15] deals with the implementation of the SPIHT algorithm on a DSP processor. In order to ease the implementation and improve the codec's performance, several related issues are discussed, such as the optimization of the program structure to speed up the wavelet decomposition. SPIHT's large memory requirement is a major drawback for hardware implementation, so in that paper the original SPIHT algorithm is altered by introducing two new concepts: number of error bits and absolute zerotree. As a result, the memory cost is dramatically reduced, and a new technique is introduced to control the coding process by the number of error bits.
Experimental results reveal that the implementation meets the common demands of real-time video coding and is demonstrated to be a useful and efficient DSP solution. Regarding the storage requirement, a listless block-tree based partitioning algorithm was proposed [16], in which a listless implementation of the wavelet-based block-tree coding (WBTC) algorithm with varying root-block sizes is realized. The WBTC algorithm improves the image compression efficiency of SPIHT at lower rates by efficiently coding both inter- and intra-scale correlation using block trees. Although WBTC reduces the storage requirement compared to SPIHT by using block trees, it still makes use of three ordered auxiliary lists. This characteristic makes WBTC undesirable for hardware implementation, since it requires a great deal of memory management as the list nodes grow exponentially on every pass. The proposed listless implementation of the WBTC algorithm uses special markers instead of lists. The proposed algorithm is combined with the DCT and the discrete wavelet transform (DWT) to show its superiority over DCT- and DWT-based set-partitioning coders, including JPEG 2000, at lower rates. The performance on most of the standard test images is practically identical to WBTC, and it outperforms SPIHT by a wide margin, particularly at lower bit rates. In medical imaging, Wang [17] suggested an algorithm to achieve a high compression ratio and hence high PSNR. First, in the higher bit planes, this algorithm quantizes only the wavelet coefficients in the lowest-frequency subband; then it quantizes the other ones by a standard scalar quantizer. Test results demonstrate that the proposed scheme improves the performance of wavelet image coders; in particular, it improves the coding gain in low-bit-rate image coding. Another algorithm, suggested by Zhu and Lawson [18], introduced two techniques. One is to use a new type of tree, to hold as many wavelet coefficients as possible during initialization. The other is to omit the predictable coding symbols for the significance indication of coefficient sets or individual coefficients. While the second favors fairly high-bit-rate image coding, the first improvement raises the compression ratio of low-bit-rate image coding. The implementation of SPIHT was redesigned to include both improvements, and the computational complexity is not increased by using them. Simulation results reveal a substantial performance increase from these improvements. The SPIHT algorithm has drawn great attention recently as a method for image coding: not only does it offer good objective and subjective performance, it is also simple and efficient. An enhanced lossless image compression scheme that relies on SPIHT has been released [19]. The most important modification in this algorithm is the addition of a simple modification to the sets of type A, using a new comparison against the threshold. The tests show that this improvement raises the performance of lossless image coding for all the standard test images. Yet another algorithm, for high-performance applications like medical and satellite imaging, is offered by [20], in which a new lossless hybrid algorithm based on a simple selective scan order with a bit-plane slicing technique is presented for lossless compression of limited-bits/pixel images, such as medical images, satellite images, and other common still images. Effective coding is achieved by modified Huffman and run-length coding.
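Bit-plane slicing itself, the core building block of the hybrid scheme in [20], is simple to sketch (plain Python, toy 8-bit data, hypothetical function name): an 8-bit image splits into eight binary planes, and the higher planes of natural images are typically smooth, which is exactly what run-length and Huffman coding exploit.

```python
def bit_planes(img, depth=8):
    """Split an integer-valued image into `depth` binary planes,
    plane b holding bit b of every pixel (b = depth-1 is the MSB)."""
    return [[[(px >> b) & 1 for px in row] for row in img]
            for b in range(depth)]

img = [[12, 13, 200], [14, 15, 201]]
planes = bit_planes(img)
print(planes[7])  # most significant plane: [[0, 0, 1], [0, 0, 1]]
```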
In [20], this strategy is combined with an efficient selective scan order covering the entire image in a single pass. The new hybrid algorithm achieves good compression speed compared with existing coding techniques on various test images.

# VI. Conclusion

This paper has presented the various changes made to the original SPIHT algorithm introduced by Said and Pearlman in 1996. It is observed that SPIHT is a very efficient and widely employed technique, since it offers many advantages: it is a simple and fully embedded codec with progressive image transmission and powerful error correction. It can also be coupled with the DCT and DWT for higher compression efficiency. Despite these advantages, there is still room for improvement in speed (i.e., lower execution time), redundancy, quality (higher peak signal-to-noise ratio), error resilience, complexity, storage requirement, and compression ratio. SPIHT can even be applied to lossless image compression for greater image quality, i.e., high PSNR, without much loss in compression ratio. Numerous enhancements to SPIHT have therefore been made according to these requirements; major improvements have recently been completed in the areas of error resilience, speed, and memory requirement.

![Spatial orientation trees: in each 2x2 group of tree roots, the starred pixel has no descendants; parts of the trees serve as the partitioning subsets in the sorting algorithm.](image-2.png "")

![](image-3.png "")

![Fig. 3: Features of SPIHT.](image-4.png "")

Table 1: Compression ratio & PSNR results using SPIHT for Lena 256x256

| Image   | Level | Bitplanes Discarded | CR    | PSNR (dB) |
|---------|-------|---------------------|-------|-----------|
| 256x256 | 3     | 3                   | 6.57  | 31.28     |
| 256x256 | 3     | 5                   | 13.03 | 26.81     |
| 256x256 | 4     | 3                   | 9.44  | 29.00     |
| 256x256 | 4     | 5                   | 28.38 | 25.87     |
| 256x256 | 5     | 3                   | 10.38 | 26.76     |
| 256x256 | 5     | 5                   | 38.94 | 24.66     |

# References

* A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Trans. Circuits Syst. Video Technology, vol. 6, no. 3, Jun. 1996.
* E. S. Gopi, Algorithm Collections for Digital Signal Processing Using MATLAB.
* H. Minghe and Z. Cuixiang, "Application of Improved SPIHT for Multispectral Image Compression," 5th Int. Conf. on Computer Science & Education, Aug. 24-27, 2010.
* Y. Jin and H. Lee, "A Block-Based Pass-Parallel SPIHT Algorithm," IEEE Trans. Circuits Syst. Video Technology, vol. 22, no. 7, Jul. 2012.
* L. Zhu and Y. Yang, "Embedded Image Compression Using Differential Coding and Optimization Method," 7th Int. Conf. on Wireless Communications, Networking and Mobile Computing, 2011.
* E. Khan and M. Ghanbari, "Error Detection and Correction of Transmission Errors in SPIHT Coded Images," Signal Processing and Communication Technology, 2009.
* C. Tung, T. Chen, W. Wang, and S. Yeh, "A New Improvement of SPIHT Progressive Image Transmission," IEEE 5th Int. Symposium on Multimedia Software Engineering, 2003.
* J. Zhu and S. Lawson, "Improvements to SPIHT for Lossy Image Coding," School of Engineering, University of Warwick, 2001.
* M. Khan and E. Khan, "Error Resilient Technique for SPIHT Coded Color Images," Signal Processing and Communication Technology, 2009.
* Y. Hu, W. A. Pearlman, and X. Li, "Progressive Significance Map and Its Application to Error-Resilient Image Transmission," IEEE Trans. on Image Processing, vol. 21, Jul. 2012.
* F. W. Wheeler and W. A. Pearlman, "SPIHT Image Compression Without Lists," Proc. IEEE ICASSP, vol. 4, Jun. 2000.
* J. Oliver and M. P. Malumbres, "Fast and Efficient Spatial Scalable Image Compression Using Wavelet Lower Trees," Generalitat Valenciana research project CTIDIB, 2002.
* P. Singh, M. N. S. Swamy, and R. Agarwal, "Block Tree Partitioning for Wavelet Based Color Image Compression," Proc. IEEE ICASSP, 2006.
* Y. Sun, H. Zhang, and G. Hu, "Real Time Implementation of a New Low Memory SPIHT Image Coding Algorithm Using DSP Chip," IEEE Trans. on Image Processing, vol. 11, no. 9, Sep. 2002.
* R. K. Senapati and U. C. Pati, "Listless Block-Tree Set Partitioning Algorithm for Very Low Bit Rate Embedded Image Compression," Int. Journal of Electronics and Communications, May 2012.
* W. Wang, G. Wang, T. Zhang, and G. Zeng, "Embedded Medical Image Coding Using Quantization Improvement of SPIHT," Proc. IEEE, 2009.
* J. Zhu and S. Lawson, "Improvements to SPIHT for Lossy Image Coding," School of Engineering, University of Warwick, 2001.
* T. Brahimi and A. Melit, "Improvements to SPIHT for Lossless Image Coding," Proc. IEEE, 2006.
* P. Pandiam and S. Sivanandam, "Hybrid Algorithm for Lossless Image Compression Using Simple Selective Scan Order with Bit Plane Slicing," Journal of Computer Science, vol. 8, Aug. 2012.