# I. INTRODUCTION

The fact that Earth is an aquatic planet, with more than 80% of its surface covered by water, has attracted many earth observers to study what lies below the water using sonar techniques. SONAR (SOund NAvigation and Ranging), initially used in submarines during World War II, is increasingly being used in Earth observation along with various civilian applications such as sea-bed imaging, depth sounding and fish echolocation. Sonar information collected while searching for, or identifying, underwater surfaces is often presented to the operator in the form of a two-dimensional image. Sonar images are created using a fan-shaped sonar beam that scans a given area by moving through the water to generate points, which together form a high-resolution sonar image of the area.

The sonar images thus acquired are often disturbed by various factors such as the limited range of light transmission, poor lighting, low contrast, blurring, colour diminishing during capture, and noise. These disturbances affect image quality, often lead to incorrect analysis, and have to be handled carefully. Sonar image quality can be assessed in terms of quality parameters such as contrast, distortion, blur and noise. These parameters are influenced by factors such as lighting, movement of the beam and the sensitivity of the imaging devices, all of which can produce images that are difficult to interpret. Sonar image enhancement techniques are used to improve these quality parameters through methods such as histogram equalization, image smoothing, image sharpening, contrast adjustment, edge or boundary enhancement and denoising. Among these, image denoising has become a mandatory task before many processes like segmentation, feature extraction and target classification.

Sonar images are often degraded by a special kind of noise called 'speckle noise'. Speckle is a random, deterministic, interference pattern in an image formed with coherent radiation of a medium containing many sub-resolution scatterers. Speckle noise removal can be performed either during the data acquisition stage (multi-look processing) or after the data is stored (spatial filtering). In both cases, the main aim is to reduce or remove the speckle noise while preserving significant image and edge features along with spatial resolution. Several solutions have been suggested to achieve this goal (Pardo, 2011; Guo et al., 2011), including traditional filters such as Frost, SRAD (Speckle Reducing Anisotropic Diffusion), wavelets and Non-Local Means techniques. Of these, the use of wavelets is widespread (Kaur and Singh, 2010; Delakis et al., 2007) and is the approach considered in this paper. Traditional wavelet-based algorithms exploiting parametric models first perform a wavelet decomposition of the noisy image. A Bayesian estimator, built on a suitable prior probability density function (pdf) such as the alpha-stable model, is then used to estimate the noise-free wavelet coefficients.
The major concern with these models is that their efficiency depends on the correct estimation of the prior pdf used for modelling the wavelet coefficients. To address this estimation problem, Tian and Chen (2011) proposed a maximum a posteriori (MAP) estimation-based image despeckling algorithm that incorporates a non-parametric statistical model into a Bayesian inference framework. This model, referred to as the Tian model, formulates the marginal distribution of the wavelet coefficients. The Tian model uses a two-level decomposition with a Daubechies wavelet and a novel wavelet shrinkage method called AntShrink (Tian et al., 2010) that exploits the intra-scale dependency of the wavelet coefficients to estimate the signal variance using only the homogeneous local neighbouring coefficients. This is in contrast to conventional shrinkage approaches, where all local neighbouring coefficients are used. The homogeneous local neighbouring coefficients are determined with an Ant Colony Optimization (ACO) technique, which is also used to classify the wavelet coefficients.

In this work, the Tian model is enhanced in three ways. First, the traditional wavelet transform is replaced by a more efficient transform, the stationary wavelet transform (undecimated wavelet transform). Second, the AntShrink algorithm in the Tian model uses only the intra-scale dependency of the wavelet coefficients; this is enhanced by a method that combines both intra-scale and inter-scale dependencies. Finally, shrinkage is applied only to the magnitude of the wavelet coefficients at non-edge points; for this, a simple classification algorithm based on the coefficients' statistical features is used.

The rest of the paper is organized as follows. Section II describes the proposed despeckling algorithm. Section III analyzes the efficiency of the proposed method through various experiments and compares the results with traditional despeckling algorithms and the Tian model. Section IV concludes the work with future research directions.

# II. PROPOSED DENOISING MODELS

One common idea is to perform a logarithmic transformation to convert the multiplicative speckle noise into additive noise (Arsenault and April, 1976; Xie et al., 2002), followed by a wavelet decomposition of the noisy image to pack the energy of the image into a few large coefficients, and then to modify the noisy wavelet coefficients using certain shrinkage functions. Finally, the denoised image is reconstructed by performing an inverse wavelet transform, followed by an exponential transformation. The proposed denoising algorithm consists of six steps, as listed below.

Step 1: Apply a log transformation to the noisy image.
Step 2: Apply the stationary wavelet transform to the log-transformed image.
Step 3: Identify edge and non-edge coefficients.
Step 4: Identify the homogeneous neighbours of the non-edge coefficients.
Step 5: Estimate each noise-free coefficient using hybrid intra-scale and inter-scale dependencies, excluding the LL subband coefficients.
Step 6: Perform the inverse stationary wavelet transform.

Step 1: Log transformation of the noisy image

Given an image in the spatial domain, a noisy pixel $g_i$ is given by Equation (1),

$$ g_i = f_i \, \varepsilon_i \qquad (1) $$

where $f_i$ is the noise-free pixel, $\varepsilon_i$ is the speckle noise and $i$ is the pixel index.
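To make Equation (1) and the log transformation of this step concrete, the following minimal NumPy sketch simulates multiplicative speckle on a synthetic image and converts it into an additive disturbance; the 4-look gamma speckle model, the image size and the intensity range are assumptions made purely for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a clean sonar image f (synthetic data, for illustration only).
f = rng.uniform(50.0, 200.0, size=(256, 256))

# Unit-mean multiplicative speckle (assumed 4-look gamma model).
eps = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=f.shape)

# Equation (1): observed noisy pixel g_i = f_i * eps_i.
g = f * eps

# Step 1: log transform with a unit shifting coefficient to avoid log(0);
# the multiplicative noise now becomes (approximately) additive, ln f + ln eps.
g_log = np.log(np.abs(g) + 1.0)
```

At the end of the pipeline (Step 6), applying `np.exp(...) - 1.0` to the reconstructed image undoes this transformation.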
The multiplicative noise is converted into an additive one by applying the log transformation to both sides of Equation (1),

$$ g_i' = f_i' + \varepsilon_i' \qquad (2) $$

where $g_i'$, $f_i'$ and $\varepsilon_i'$ are the log-transformed versions of $g_i$, $f_i$ and $\varepsilon_i$ respectively. The logarithmic transform compresses the dynamic range of images with large variations in pixel values: it maps a narrow range of low grey-level values in the input image into a wider range of output levels, while the opposite holds for higher input levels. It is therefore used to expand the values of dark pixels in an image while compressing the higher-level values. However, the histogram of the transformed data is usually compact and uninformative.

The log transformation is carried out in two steps. The first step creates a matrix that preserves the phase of the transformed image; this is used later to restore the phase of the transform coefficients. In the second step, the logarithm of the modulus of the coefficients is taken according to

$$ X(i,j) = \ln\big(|X(i,j)| + \sigma\big) \qquad (3) $$

where $\sigma$ is a shifting coefficient, usually set to 1. After the log transformation, the stationary wavelet transform is performed to obtain the wavelet coefficients of the noisy image. The wavelet coefficients of the log-transformed image corrupted by speckle noise are expressed as

$$ y_i = x_i + n_i \qquad (4) $$

where $y_i$, $x_i$ and $n_i$ represent the wavelet coefficients of $g_i'$, $f_i'$ and $\varepsilon_i'$ respectively.

Step 2: Stationary wavelet transformation

The wide usage of the traditional Discrete Wavelet Transform (DWT) is due to the advantages it offers for image denoising, such as its multi-scale filtering property and its sparse representation, which compresses the signal energy into a small number of wavelet coefficients and leaves the majority of coefficients with values close to zero. However, it has a serious flaw: because of the subsampling performed, the DWT is not translation invariant, that is, a translated version of a signal X is not transformed into a translated version of its coefficients. To preserve translation invariance, this paper uses the Stationary Wavelet Transform (SWT) (Nason and Silverman, 1995). Introduced by Holschneider et al. (1989), the SWT is similar to the DWT in that high-pass and low-pass filters are applied to the input signal at each level. In the SWT, however, the output is never subsampled (not decimated); instead, the filters are upsampled at each level. The SWT achieves translation invariance by removing the downsamplers and upsamplers of the DWT and upsampling the filter coefficients by a factor of $2^{(j-1)}$ at the $j$-th level of the algorithm (Shensa, 1992). The SWT is an inherently redundant scheme, as the output of each level contains the same number of samples as the input; thus, for a decomposition of N levels there are N redundant sets of wavelet coefficients. The algorithm is more famously known as the "algorithme à trous" (the French word "trous" means holes in English) and refers to inserting zeros into the filters. The block diagram of the SWT, with the filters at each level being upsampled versions of those of the previous level, is shown in Figure 1 (http://en.wikipedia.org/wiki/Stationary_wavelet_transform). An overview of the different names, with explanations, is provided by Fowler (2005).
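As a hedged sketch of this step (not the authors' implementation), the code below uses the PyWavelets package to compute a two-level SWT of the log-transformed image from the previous sketch and to invert it. The 'db4' filter and the two-level depth are borrowed from the two-level Daubechies decomposition mentioned for the Tian model, and the 256 x 256 size is assumed so that each dimension is divisible by 2^level, as `swt2` requires.

```python
import numpy as np
import pywt  # PyWavelets

# Log-transformed noisy image from the previous sketch; any 2-D array whose
# dimensions are divisible by 2**level works (re-created here for completeness).
rng = np.random.default_rng(0)
g_log = np.log(rng.uniform(50.0, 200.0, size=(256, 256)) + 1.0)

level = 2
# coeffs is a list with one entry per level, each holding
# (approximation, (horizontal, vertical, diagonal)) detail arrays.
coeffs = pywt.swt2(g_log, wavelet='db4', level=level)

# Because the SWT is undecimated, every subband keeps the full image size.
for cA, (cH, cV, cD) in coeffs:
    assert cH.shape == g_log.shape

# Steps 3-5 would modify the detail coefficients here; the inverse SWT and
# the exponential transform (Step 6) then return to the spatial domain.
recon = pywt.iswt2(coeffs, wavelet='db4')
denoised = np.exp(recon) - 1.0
```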
Step 3: Identify edge and non-edge coefficients

The local statistics of the wavelet coefficients are used to classify a coefficient as edge or non-edge. This step is performed in order to preserve the edge and line features of the image. It is well known that the coefficient of variation of edge regions is higher than that of smooth regions (Schulze, 1997). Accordingly, the coefficient of variation is calculated for the wavelet coefficients. The process of identifying edge and non-edge regions begins by dividing the image into 3 x 3 square windows. For each square window, four sub-images (Figure 2) are constructed in each subband that carries edge details. Since the HL subband contains vertical edge information, its sub-image is created in a vertical fashion; since the LH subband contains horizontal edge information, its sub-image is created in a horizontal fashion; and since the HH subband contains edge details in the 45° directions, two diagonal sub-windows are used. The coefficient of variation is then calculated using Equation (5),

$$ C = \frac{\sigma}{\mu} \qquad (5) $$

where $\sigma$ and $\mu$ are the standard deviation and the mean of the coefficients in the window under consideration. Edge regions are identified under the assumption that a coefficient is an edge if its coefficient of variation measured over the sub-window is greater than that over the entire window. That is, let $C_W$ and $C_{SW}$ be the coefficients of variation of a window and of its sub-window; if $C_{SW} > C_W$, the wavelet coefficient under consideration is taken as an edge coefficient, otherwise it is taken as a non-edge coefficient.

Step 4: Identify homogeneous wavelet coefficients

The main task of this step is to classify the wavelet coefficients in order to find the homogeneous neighbours among the non-edge coefficients using Ant Colony Optimization (Figure 3).

Figure-3: ACO-based Image Classification

Step 5: Estimation of wavelet coefficient dependencies

Although a wavelet transform decorrelates images efficiently, strong dependencies still exist between wavelet coefficients. Exploiting such dependency information with proper statistical models can further improve the performance of coding and denoising algorithms. Statistical modelling techniques that consider the dependencies between wavelet coefficients can be grouped into three categories: the first exploits inter-scale dependencies, the second exploits intra-scale dependencies, and the third exploits both inter-scale and intra-scale dependencies. The AntShrink algorithm belongs to the second category, and this paper enhances it by extending it to the third category. Regarding inter-scale dependency, if a coefficient is large at a given scale, its correspondent at the next scale (having the same spatial coordinates) will also be large. Statistical models of wavelet coefficients that exploit the dependence between coefficients give better results than those using an independence assumption (Crouse et al., 1998; Fan and Xia, 2001). That is, coefficients in high-frequency subbands are estimated based on those in lower-frequency subbands; in other words, the inter-scale approach uses the dependency at edges. In simple terms, inter-scale dependencies capture the correlations between coefficients and their parents, while intra-scale dependencies capture the correlations between a coefficient and its siblings (its neighbourhood in the same subband). The steps of the proposed hybrid algorithm are listed and explained below.

* Using the non-edge homogeneous wavelet coefficient neighbours, find the parent-child relationship using the inter-scale dependencies of the wavelet coefficients.
* Estimate the local noise variance and the marginal variance, and perform the denoising.

Considering each cluster obtained from the ACO separately, construct a centred window and estimate the local noise variance $\hat{\sigma}_n^2$ using Equation (6), in which the coefficient $y_i$ belongs to the HH band and $c(y_i)$ is defined as the set of coefficients that lie within a local square window centred at $y_i$ and belong to the same category as $y_i$. Next, calculate the marginal variance of the noisy observations for each wavelet coefficient using Equation (7),

$$ \hat{\sigma}_y^2(k) = \frac{1}{M} \sum_{y_j \in N(k)} y_j^2 \qquad (7) $$

where $M$ is the size of the neighbourhood $N(k)$, and $N(k)$ is defined as all coefficients within a square-shaped window centred at the $k$-th coefficient, as illustrated in Figure 4. The signal variance $\hat{\sigma}_x^2$ can then be estimated using Equation (8),

$$ \hat{\sigma}_x^2 = \big(\hat{\sigma}_y^2 - \hat{\sigma}_n^2\big)_+ \qquad (8) $$

where $(\cdot)_+$ is defined as in Equation (9),

$$ (x)_+ = \max(x, 0) \qquad (9) $$

Compute the MMSE estimate for each coefficient, excluding those of the LL subband, by substituting the noise variance estimated through Equation (6) into Equation (10),

$$ \hat{x}_i = \frac{\hat{\sigma}_x^2}{\hat{\sigma}_x^2 + \hat{\sigma}_n^2}\, y_i \qquad (10) $$

Step 6: Reconstruct the image

Perform the inverse wavelet transform, followed by an exponential transformation, to obtain the denoised image.
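To illustrate how Equations (7)-(10) act on a subband, here is a hedged NumPy/SciPy sketch of the local-variance MMSE shrinkage. It is a simplification rather than the authors' implementation: the ACO clustering and the edge/non-edge mask of Steps 3-4 are omitted, every coefficient of the square window is used instead of only the homogeneous neighbours, and the local noise estimate of Equation (6) is replaced by the common median-based estimator computed on the finest HH subband (an assumption of this sketch).

```python
import numpy as np
from scipy.ndimage import uniform_filter


def mmse_shrink(detail, noise_var, win=7):
    """Wiener-like shrinkage of one detail subband, following Eqs. (7)-(10)."""
    # Eq. (7): marginal variance over the M = win*win neighbourhood N(k).
    marginal_var = uniform_filter(detail ** 2, size=win, mode='reflect')
    # Eqs. (8)-(9): signal variance, clipped at zero by the (.)_+ operator.
    signal_var = np.maximum(marginal_var - noise_var, 0.0)
    # Eq. (10): MMSE estimate of the noise-free coefficient.
    return detail * signal_var / (signal_var + noise_var + 1e-12)


def denoise_coeffs(coeffs, win=7):
    """Shrink every detail subband of an swt2 coefficient list; LL is untouched."""
    # Stand-in for Eq. (6): robust median estimator on the finest diagonal (HH)
    # subband, not the paper's ACO-guided local estimate.
    finest_hh = coeffs[-1][1][2]
    noise_var = (np.median(np.abs(finest_hh)) / 0.6745) ** 2
    return [(cA, tuple(mmse_shrink(d, noise_var, win) for d in details))
            for cA, details in coeffs]
```

In the earlier pipeline sketch, `denoise_coeffs(coeffs)` would be applied between `pywt.swt2` and `pywt.iswt2`.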
# III. EXPERIMENTAL RESULTS

Several experiments were conducted to evaluate the proposed model. The performance metrics used are (i) the Peak Signal-to-Noise Ratio (PSNR) and (ii) the denoising time. PSNR is a quality measure between the original and the denoised image; the higher the PSNR, the better the quality of the reconstructed (denoised) image. The Mean Squared Error (MSE) is calculated first and the PSNR is then obtained from Equation (11),

$$ \mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \big(I(m,n) - \hat{I}(m,n)\big)^2 \qquad (11) $$

where $I$ and $\hat{I}$ denote the input and output images, $M$ and $N$ are their numbers of rows and columns, and $m$ and $n$ index the pixels. The denoising time denotes the time taken by the algorithm to perform the despeckling procedure. The proposed method was compared with the Lee, Frost, SRAD, conventional wavelet and Tian models. Several images were used to test the proposed model; the results reported here use the four test images shown in Figure 5(a).
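A minimal sketch of the PSNR metric of Equation (11) used in these comparisons, assuming 8-bit images with a peak value of 255:

```python
import numpy as np


def psnr(reference, denoised, peak=255.0):
    """Peak Signal-to-Noise Ratio of Eq. (11), in dB."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)   # Mean Squared Error over all M x N pixels
    return 10.0 * np.log10(peak ** 2 / mse)
```

Applied to a noise-free test image and the corresponding despeckled output, higher values indicate a closer match.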
From the results, it is evident that the proposed enhanced method is an improved version of the existing systems and shows significant improvement over its base method (the Tian model). The high PSNR obtained by the proposed model indicates that it is the better choice for removing speckle noise from sonar images and that it produces a despeckled image whose visual quality is very close to that of the original noise-free image. According to Venkatesan et al. (2008), an improved denoising algorithm is recognized by a higher PSNR or a lower MSE; in agreement with this, the high PSNR of the proposed system shows that it is an improvement over existing methods. Similarly, according to Schneier and Abdel-Mottaleb (1996), a PSNR value in the range of 30-40 dB indicates that the resulting image is a very good match to the original image. In accordance with this, the proposed hybrid algorithm produces PSNR values in the range of 42-44 dB, confirming that it is an enhanced version of the conventional algorithms. Figure 6 shows the time taken by the proposed and conventional filters to perform the denoising operation. Considering computational complexity, the proposed model is comparable to the Tian model and to the selected traditional despeckling algorithms. According to Müldner et al. (2005), PSNR and speed are the two most important performance factors of any denoising algorithm. From the results, it is evident that the proposed denoising algorithm is fast and produces images of improved visual quality, and it can therefore be considered an attractive option for several advanced applications in the field of sonar imaging. A visual comparison of the denoised images produced by the proposed filters is shown in Figure 7.

# IV. CONCLUSION

Sonar images, a type of Synthetic Aperture Radar (SAR) image, are most frequently affected by speckle noise. Speckle noise is multiplicative in nature and reduces image quality. An important feature of sonar images is that they contain mostly homogeneous and textured regions, and edges are relatively rare. This paper proposed a non-parametric statistical model using hybrid intra-scale and inter-scale dependencies of wavelet coefficients for removing speckle noise from speckled sonar images. First, the multiplicative speckle noise that disturbs the sonar images is transformed into additive noise with the aid of a logarithm computation block, after which a stationary wavelet transform is applied. The inter-scale and intra-scale dependencies of the wavelet coefficients are then exploited during denoising. The experimental results show that the proposed method is efficient in terms of both speckle reduction and speed. In future work, other wavelet variants such as complex wavelets and wavelet trees are to be explored.

![Figure-1: Block Diagram of SWT](image-2.png)

![Figure-2: 3 x 3 Window and its Sub-windows Formation](image-3.png)

![Figure-4: Example of Neighbourhood N(k)](image-4.png)

![Figure-5(a): Test Images](image-6.png)

![Figure-6: Despeckling Time](image-7.png)

![Figure-7: Despeckled Images](image-8.png)

# REFERENCES

* Arsenault, H. and April, G. (1976). Properties of speckle integrated with a finite aperture and logarithmically transformed. J. Opt. Soc. Am., 66.
* Crouse, M.S., Nowak, R.D. and Baraniuk, R.G. (1998). Wavelet-based signal processing using hidden Markov models. IEEE Trans. Signal Processing, 46.
* Delakis, I., Hammad, O. and Kitney, R.I. (2007). Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI). Physics in Medicine and Biology, 52(13), 3741.
* Fan, G. and Xia, X.G. (2001). Image denoising using a local contextual hidden Markov model in the wavelet domain. IEEE Signal Processing Lett., 8.
* Fowler, J.E. (2005). The redundant discrete wavelet transform and additive noise. IEEE Signal Processing Letters, 12.
* Guo, Y., Wang, Y. and Hou, T. (2011). Speckle filtering of ultrasonic images using a modified non-local-based algorithm. Biomedical Signal Processing and Control, 6, Elsevier.
* Holschneider, M., Kronland-Martinet, R., Morlet, J. and Tchamitchian, P. (1989). A real-time algorithm for signal analysis with the help of the wavelet transform. In Wavelets, Time-Frequency Methods and Phase Space, Springer-Verlag.
* Kaur, A. and Singh, K. (2010). Speckle noise reduction by using wavelets. NCCI 2010 - National Conference on Computational Instrumentation, CSIO Chandigarh, India.
* Müldner, T., Leighton, G. and Diamond, J. (2005). Using XML compression for WWW communication. Proceedings of the IADIS WWW/Internet 2005 Conference.
* Nason, G.P. and Silverman, B.W. (1995). The stationary wavelet transform and some statistical applications. Tech. Rep., University of Bristol.
* Pardo, A. (2011). Analysis of non-local image denoising methods. Pattern Recognition Letters, Elsevier, in press. doi:10.1016/j.patrec.2011.06.022.
* Schneier, M. and Abdel-Mottaleb, M. (1996). Exploiting the JPEG compression scheme for image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 18(8).
* Schulze, M.A. (1997). An edge enhancing nonlinear filter for reducing multiplicative noise. Proc. of SPIE, 3026.
* Shensa, M.J. (1992). The discrete wavelet transform: wedding the à trous and Mallat algorithms. IEEE Transactions on Signal Processing, 40(10).
* Tian, J. and Chen, L. (2011). Image despeckling using a non-parametric statistical model of wavelet coefficients. Biomedical Signal Processing and Control, 6.
* Tian, J., Yu, W. and Ma, L. (2010). AntShrink: Ant colony optimization for image shrinkage. Pattern Recognition Letters, 31.
* Venkatesan, M., Meenakshidevi, P., Duraiswamy, K. and Thyagarajah, K. (2008). Secure authentication watermarking for binary images using pattern matching. IJCSNS International Journal of Computer Science and Network Security, 8(2).
* Xie, H., Pierce, L. and Ulaby, F. (2002). Statistical properties of logarithmically transformed speckle. IEEE Trans. Geosci. Remote Sens., 40.