Image segmentation is the process of extracting useful information from an image through features and dividing the whole image into various homogeneous groups, in which the pixels within a group are more homogeneous and the groups are heterogeneous among themselves. It is an important technology for image processing and understanding. The structural characteristics of objects and surfaces in an image can be determined by segmenting the image using image domain properties. One of the major advantages of image segmentation is denoising, the removal of unwanted noise from the image: segmentation specifically attempts to separate structure from noise on a local scale. It is one of the most important steps in computer vision and image analysis. Over the last three decades a large body of work on image segmentation methods has been reported in the literature (Lucchese L. et al (2001), Srinivas Y. and Srinivasa Rao K. (2007), Majid Fakheri et al (2010), Siddhartha Bhattacharyya (2011)). Image segmentation methods can be divided into two categories depending on the type of image, since images can be broadly categorized into gray level images and colour images. A gray level image is usually characterized by pixel intensity (Farag A.A. et al (2004), Seshashayee M. et al (2011), Srinivas Yerramalle et al (2010)). In colour images, however, colour is a perceptual phenomenon related to the human response to different wavelengths in the visible electromagnetic spectrum. In colour images the features that represent an image pixel are strongly influenced by three feature descriptors, namely intensity, colour and texture. Among these, colour is the most important feature for segmenting colour images, since intensity and texture information can also be embedded in the colour features (Fesharaki and Hellestrand (1992), Kato Z. et al (2006), Kang Feng et al (2009), Kaikuo Xu et al (2011)). A better colour space than RGB for representing the colours of human perception is the HSI space, in which the colour information is represented by the Hue and Saturation values. Thus the human perception of an image can be characterized through a bivariate random variable consisting of Hue and Saturation, which can be measured using the generic structure of a colour appearance model (Sangwine et al (1998)). Ferri and Vidal (1992), Lee E. et al (2010), Dipti P. and Mridula J. (2011) and others have reviewed colour image segmentation techniques. Among these, model based image segmentation methods are more efficient than the edge based, threshold based or region based methods (Lucchese L. et al (2001)). In model based image segmentation the whole image is divided into different image regions and each image region is characterized by a suitable probability distribution. To ascribe a probability model to the feature vector of the pixels in an image region, it is necessary to study the statistical characteristics of the feature vector. In image segmentation it is customary to consider that the whole image is characterized by a finite Gaussian mixture model, that is, the feature vector of each image region follows a Gaussian distribution (Haralick and Shapiro (1985)). Image segmentation methods based on the Gaussian mixture model work well only when the feature vector of the pixels has an infinite range and the distribution of the feature vector is symmetric and meso-kurtic.
But in many colour images the feature vector represented by Hue and Saturation takes finite (nonnegative) values and may not be meso-kurtic or symmetric. Hence, to obtain an accurate segmentation of such colour images it is necessary to develop and analyze image segmentation methods based on truncated bivariate mixture distributions. Here it is assumed that the feature vector in each image region follows a left truncated bivariate Gaussian distribution and that the feature vector of the whole image is characterized by a finite left truncated bivariate Gaussian mixture model. This assumption is made because the Hue and Saturation values of a pixel, which constitute the bivariate feature vector, can take only nonnegative values; hence the ranges of Hue and Saturation are left truncated at zero. The effect of this truncation cannot be ignored, since the probability mass cut off in the left tail of the distribution is significantly greater than zero. The left truncated nature of the bivariate feature vector therefore approximates the pixels of a colour image more closely to reality. In this method of segmentation the number of image regions is obtained from the histogram of the pixel intensities of the whole image, and the model parameters are estimated with the EM-algorithm using initial estimates obtained from the K-means algorithm. Using the estimated joint probability density functions of the feature vector of the pixels of each image, the images are retrieved. The efficiency of the developed algorithm in image retrieval is also studied by computing image quality metrics such as maximum distance, image fidelity, mean square error, signal to noise ratio and image quality index, and the results are presented. A comparative study of these quality measures with those obtained from the Gaussian mixture model with K-means reveals that this algorithm performs better.

# II. FINITE LEFT TRUNCATED BIVARIATE GAUSSIAN MIXTURE MODEL

The effect of truncation in the bivariate Gaussian distribution has been discussed by several researchers (Norman L. Johnson, Samuel Kotz and Balakrishnan (1994)). The probability density function of the left truncated bivariate Gaussian distribution (truncated at zero) is

$$g(x, y;\theta) = \frac{f(x, y)}{\int_0^{\infty}\int_0^{\infty} f(x, y)\,dx\,dy}, \qquad 0 < x < \infty,\ 0 < y < \infty \tag{1}$$

where zero is the truncation point for both Hue and Saturation and $f(x, y)$ is the probability density function of the bivariate normal distribution,

$$f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\right]\right\} \tag{2}$$

with $-\infty < x < \infty$, $-\infty < y < \infty$, $\sigma_1 > 0$, $\sigma_2 > 0$, $-1 < \rho < 1$, $-\infty < \mu_1 < \infty$, $-\infty < \mu_2 < \infty$. The value of $1 - \int_0^{\infty}\int_0^{\infty} f(x, y)\,dx\,dy$ is significant for a wide range of parameter values. This distribution includes skewed and asymmetric bivariate distributions as particular cases for limiting and specific values of the parameters. The various shapes of the frequency surfaces of the left truncated bivariate Gaussian distribution are shown in Figure 1.

Fig. 1: Shapes of left truncated bivariate Gaussian frequency surfaces

Following the heuristic arguments given by Bengt Muthen (1990), the mean value of $X$ (Hue) is obtained as

$$E(X) = \mu_1 + \sigma_1 A \tag{3}$$

where $c = (1-\rho^2)^{1/2}$ and $\phi(\cdot)$, $\Phi(\cdot)$
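To make equation (1) concrete, the following is a minimal Python sketch (not the authors' code) of the left truncated bivariate Gaussian density. It assumes the Hue and Saturation features are supplied as nonnegative reals, and it obtains the normalizing constant as the probability mass of the untruncated bivariate normal over the positive quadrant.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def left_truncated_bvn_pdf(x, y, mu1, mu2, s1, s2, rho):
    """Left truncated (at zero) bivariate Gaussian density g(x, y; theta) of eq. (1)."""
    cov = [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]
    rv = multivariate_normal(mean=[mu1, mu2], cov=cov)
    # P(X > 0, Y > 0) for the untruncated bivariate normal, by inclusion-exclusion
    p_pos = 1.0 - norm.cdf(0.0, mu1, s1) - norm.cdf(0.0, mu2, s2) + rv.cdf([0.0, 0.0])
    pts = np.column_stack([np.ravel(x), np.ravel(y)])
    dens = rv.pdf(pts) / p_pos
    # the support is the positive quadrant only
    return np.where((pts[:, 0] > 0) & (pts[:, 1] > 0), dens, 0.0)
```

In practice the Hue and Saturation planes of an image would first be extracted (for example via an RGB-to-HSI conversion); the exact scaling of these features used by the authors is not specified in the text, so the scaling here is left to the caller.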
denote the ordinate and the area (distribution function) of the standard normal distribution, and $A$ is the first-moment truncation correction term for $X$ given by Bengt Muthen (1990); it is a function of $\phi$, $\Phi$, the standardized truncation points $-\mu_1/\sigma_1$ and $-\mu_2/\sigma_2$, $\rho$ and $c$. Similarly, the mean value of $Y$ (Saturation) is

$$E(Y) = \mu_2 + \sigma_2 B \tag{4}$$

where $B$ is the corresponding correction term for $Y$, obtained from equation (3) by interchanging the roles of $(\mu_1, \sigma_1)$ and $(\mu_2, \sigma_2)$, and $c$ is as given in equation (3). The variance of $X$ is

$$V(X) = \sigma_1^2 R - 2\sigma_1 A^2 + A^2 = \sigma_1^2 R - A^2(2\sigma_1 - 1) \tag{5}$$

where $R$ is the second-moment correction term for $X$, again a function of $\phi$, $\Phi$, the standardized truncation points, $\rho$ and $c$; $c$ and $A$ are as given in equation (3). The variance of $Y$ is

$$V(Y) = \sigma_2^2 T - 2\sigma_2 B^2 + B^2 = \sigma_2^2 T - B^2(2\sigma_2 - 1) \tag{6}$$

where $T$ is the second-moment correction term for $Y$; $c$ and $B$ are as given in equations (3) and (4) respectively. The covariance of $(X, Y)$ is

$$\mathrm{Cov}(X, Y) = \sigma_1\sigma_2 U - AB(\sigma_1 + \sigma_2 - 1) \tag{7}$$

where $U$ is the cross-moment correction term; $c$, $A$ and $B$ are as given in equations (3) and (4) respectively. The explicit expressions of $A$, $B$, $R$, $T$ and $U$ in terms of $\phi$ and $\Phi$ are given by Bengt Muthen (1990).

Since the entire image is a collection of regions, each characterized by a left truncated bivariate normal distribution, the whole image can be characterized through a K-component finite left truncated bivariate Gaussian mixture whose probability density function is of the form

$$h(x, y) = \sum_{i=1}^{K} \alpha_i\, g_i(x, y;\theta_i) \tag{8}$$

where $K$ is the number of regions, $\alpha_i > 0$ are weights such that $\sum_{i=1}^{K}\alpha_i = 1$, and $\theta_i = \{\mu_{1i}, \mu_{2i}, \sigma_{1i}^2, \sigma_{2i}^2, \rho_i\}$ is the set of parameters. $g_i(x, y;\theta_i)$, given in equation (1), represents the probability density function of the $i$-th image region, and $\alpha_i$ is the probability of occurrence of the $i$-th component of the finite left truncated bivariate Gaussian mixture model (FLTBGMM), i.e. the probability that the feature vector belongs to the $i$-th image region. The mean vector representing the entire image is

$$E(W) = \left[\sum_{i=1}^{K}\alpha_i E(X_i),\ \sum_{i=1}^{K}\alpha_i E(Y_i)\right]^{T} \tag{9}$$

where $E(X_i)$ and $E(Y_i)$ are given in equations (3) and (4) for the $i$-th image region.

# III. ESTIMATION OF THE MODEL PARAMETERS BY EM-ALGORITHM

To estimate the model parameters we use the EM-algorithm, which maximizes the expected log likelihood. The likelihood function of the bivariate observations $(x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots, (x_N, y_N)$ is

$$L(\theta) = \prod_{s=1}^{N} h(x_s, y_s;\theta) = \prod_{s=1}^{N}\left(\sum_{i=1}^{K}\alpha_i\, g_i(x_s, y_s;\theta_i)\right) \tag{10}$$

where each $g_i(x_s, y_s;\theta_i)$ is the left truncated bivariate normal density of equation (1). This implies

$$\log L(\theta) = \sum_{s=1}^{N}\log\left(\sum_{i=1}^{K}\alpha_i\, g_i(x_s, y_s;\theta_i)\right) \tag{11}$$

The updated equations of the EM-algorithm for estimating the model parameters are derived by maximizing the expected value of $\log L(\theta)$ with respect to each parameter in turn.
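As a small illustration of equations (8), (10) and (11), the sketch below evaluates the K-component mixture density and its log likelihood over the observed (Hue, Saturation) pairs. It reuses `left_truncated_bvn_pdf` from the earlier sketch, and the parameter layout (one tuple per component) is an assumption, not the authors' notation.

```python
import numpy as np

def fltbgmm_pdf(x, y, alphas, thetas):
    """Mixture density h(x, y) of eq. (8); thetas[i] = (mu1, mu2, s1, s2, rho)."""
    dens = np.zeros(np.size(x), dtype=float)
    for a, th in zip(alphas, thetas):
        dens += a * left_truncated_bvn_pdf(x, y, *th)
    return dens

def log_likelihood(x, y, alphas, thetas):
    """log L(theta) of eq. (11) over the N observed (Hue, Saturation) pairs."""
    return float(np.sum(np.log(fltbgmm_pdf(x, y, alphas, thetas) + 1e-300)))
```

The small constant added inside the logarithm only guards against numerical underflow for pixels far from every component.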
For the mixing parameter $\alpha_k$, the updating equation at the $(l+1)$-th iteration is

$$\alpha_k^{(l+1)} = \frac{1}{N}\sum_{s=1}^{N} t_k(x_s, y_s;\theta^{(l)}) = \frac{1}{N}\sum_{s=1}^{N}\frac{\alpha_k^{(l)}\, g_k(x_s, y_s;\theta^{(l)})}{\sum_{i=1}^{K}\alpha_i^{(l)}\, g_i(x_s, y_s;\theta^{(l)})} \tag{12}$$

where $g_k(x_s, y_s;\theta^{(l)})$ is the left truncated bivariate normal density of equation (1) for the $k$-th region evaluated at the current parameter estimates,

$$g_k(x_s, y_s;\theta^{(l)}) = \frac{\dfrac{1}{2\pi\sigma_{1k}\sigma_{2k}\sqrt{1-\rho_k^2}}\exp\left\{-\dfrac{1}{2(1-\rho_k^2)}\left[\left(\dfrac{x_s-\mu_{1k}}{\sigma_{1k}}\right)^2 - 2\rho_k\left(\dfrac{x_s-\mu_{1k}}{\sigma_{1k}}\right)\left(\dfrac{y_s-\mu_{2k}}{\sigma_{2k}}\right) + \left(\dfrac{y_s-\mu_{2k}}{\sigma_{2k}}\right)^2\right]\right\}}{\int_0^{\infty}\int_0^{\infty} f(x, y;\theta_k)\,dx\,dy}$$

and $t_k(x_s, y_s;\theta^{(l)})$ is the responsibility of the $k$-th component for the $s$-th observation,

$$t_k(x_s, y_s;\theta^{(l)}) = \frac{\alpha_k^{(l)}\, g_k(x_s, y_s;\theta^{(l)})}{\sum_{i=1}^{K}\alpha_i^{(l)}\, g_i(x_s, y_s;\theta^{(l)})}.$$

For updating $\mu_{1k}$, equating the derivative of the expected log likelihood with respect to $\mu_{1k}$ to zero yields the implicit updating equation (13) for $\mu_{1k}^{(l+1)}$; it involves the responsibilities $t_k(x_s, y_s;\theta^{(l)})$, the observations $x_s$ and $y_s$, the current estimates of $\sigma_{1k}$, $\sigma_{2k}$ and $\rho_k$, and the truncation correction terms $A$ and $B$ of equations (3) and (4). The implicit updating equation (14) for $\mu_{2k}^{(l+1)}$ has the same form with the roles of the two coordinates interchanged. The updating equation (15) for $\sigma_{1k}^2$ at the $(l+1)$-th iteration is likewise an implicit equation involving $t_k(x_s, y_s;\theta^{(l)})$, the standardized deviations of $x_s$ and $y_s$, and two further correction terms $D$ and $E$; the updating equation (16) for $\sigma_{2k}^2$ has the same form with a correction term $G$ in place of $D$; and the updating equation (17) for $\rho_k$ involves $t_k(x_s, y_s;\theta^{(l)})$ together with $D$, $E$, $F$ and $G$. Like $A$ and $B$, the terms $D$, $E$, $F$ and $G$ are functions of $\phi$, $\Phi$, the standardized truncation points $-\mu_{1k}/\sigma_{1k}$ and $-\mu_{2k}/\sigma_{2k}$, $\rho_k$ and $c$, arising from the derivatives of the logarithm of the normalizing integral in equation (1) with respect to the corresponding parameters. Equations (13) to (17) have no closed-form solutions and are solved numerically at each iteration.
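A sketch of one E-step together with the closed-form mixing-weight update of equation (12) follows; the remaining updates (13) to (17) are implicit and would be solved with a numerical root finder, which is omitted here. Function and variable names are illustrative, not the authors'.

```python
import numpy as np

def em_step_weights(x, y, alphas, thetas):
    """E-step responsibilities t_k(x_s, y_s; theta) and the M-step of eq. (12)."""
    weighted = np.array([a * left_truncated_bvn_pdf(x, y, *th)
                         for a, th in zip(alphas, thetas)])        # shape (K, N)
    resp = weighted / (weighted.sum(axis=0, keepdims=True) + 1e-300)
    new_alphas = resp.mean(axis=1)                                  # alpha_k^(l+1), eq. (12)
    return resp, new_alphas
```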
Solving equations (12), (13), (14), (15), (16) and (17) iteratively, the refined estimates of the model parameters are obtained. The efficiency of the EM-algorithm in estimating the parameters is heavily dependent on the number of regions in the image.

# IV. INITIALIZATION OF PARAMETERS BY K-MEANS

The number of mixture components (image regions) $K$ taken for the K-means algorithm is obtained by plotting the histogram of the pixel intensities of the whole image. The mixing parameter $\alpha_k$ and the region parameters $\mu_{1k}$, $\mu_{2k}$, $\sigma_{1k}^2$, $\sigma_{2k}^2$, $\rho_k$ are unknown a priori. A commonly used method of initializing the parameters is to draw a random sample from the entire image data (McLachlan G. and Peel D. (2000)). This method performs well when the sample size is large, but the computation time increases heavily, and when the sample size is small some small regions may not be sampled at all. To overcome this problem, the K-means algorithm is used to divide the whole image into various homogeneous regions. In the K-means algorithm the centroids of the clusters are recomputed as soon as a pixel joins a cluster. The initial values of $\alpha_i$ can be taken as $\alpha_i = 1/K$, where $K$ is the number of image regions obtained from the K-means algorithm (Rose H. Turi (2001)). The K-means algorithm uses an iterative procedure that minimizes the sum of distances from each object to its cluster centroid over all clusters. The procedure consists of the following steps:

1) Randomly choose K data points from the whole dataset as initial clusters; these data points represent the initial cluster centroids.
2) Calculate the Euclidean distance of each data point from each cluster centre and assign each data point to its nearest cluster centre.
3) Calculate new cluster centres so that the squared error distance within each cluster is minimized.
4) Repeat steps 2 and 3 until the cluster centres do not change.
5) Stop the process.

In the procedure above, the cluster centres are updated only after all points have been allocated to their closest cluster centre. The advantages of K-means are that it is a very simple method and that it is based on the intuition that the within-cluster error should be as small as possible. Its disadvantage is that the number of clusters must be supplied as a parameter, so the user has to decide what the best number of clusters for the image is.

Having divided the image into $K$ regions with K-means, the initial estimates of the parameters of each image region are obtained by the method of moments given by Bengt Muthen (1990) for the truncated bivariate normal distribution, with

$\alpha_i = 1/K$ for $i = 1, 2, \ldots, K$;
$\mu_{1k} = \bar{x}_k$, the $k$-th region sample mean of the Hue angle;
$\mu_{2k} = \bar{y}_k$, the $k$-th region sample mean of the Saturation;
$\sigma_{1k} = s_{1k}$, the sample standard deviation of Hue in the $k$-th segment;
$\sigma_{2k} = s_{2k}$, the sample standard deviation of Saturation in the $k$-th segment;
$\rho_k$, the sample correlation coefficient between Hue and Saturation of the $k$-th image region.

Substituting these values as the initial estimates, the refined estimates of the parameters are obtained using the EM-algorithm.

# V. SEGMENTATION ALGORITHM

After refining the parameters, the prime step is image segmentation by allocating the pixels to the segments. This operation is performed by the segmentation algorithm, which consists of four steps:

Step 1) Plot the histogram of the whole image.
Step 2) Obtain the initial estimates of the model parameters using the K-means algorithm and the moment estimators as discussed in Section IV (a sketch of this step is given after the algorithm).
Step 3) Obtain the refined estimates of the model parameters $\mu_{1k}$, $\mu_{2k}$, $\sigma_{1k}^2$, $\sigma_{2k}^2$, $\rho_k$ and $\alpha_k$ for $k = 1, 2, \ldots, K$ using the EM-algorithm with the updated equations given in Section III.
Step 4) Assign each pixel to the corresponding $j$-th region (segment) according to the maximum likelihood of the $j$-th component $L_j$, that is,

$$L_j = \max_{k}\left\{\frac{\dfrac{1}{2\pi\sigma_{1k}\sigma_{2k}\sqrt{1-\rho_k^2}}\exp\left\{-\dfrac{1}{2(1-\rho_k^2)}\left[\left(\dfrac{x_s-\mu_{1k}}{\sigma_{1k}}\right)^2 - 2\rho_k\left(\dfrac{x_s-\mu_{1k}}{\sigma_{1k}}\right)\left(\dfrac{y_s-\mu_{2k}}{\sigma_{2k}}\right) + \left(\dfrac{y_s-\mu_{2k}}{\sigma_{2k}}\right)^2\right]\right\}}{\int_0^{\infty}\int_0^{\infty} f(x, y;\theta_k)\,dx\,dy}\right\}$$
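As an illustration of the initialization of Section IV (Step 2 above), the sketch below clusters the (Hue, Saturation) pairs with K-means and computes the per-region moment estimates. scikit-learn's KMeans is used here for convenience; this is an assumption of the sketch rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_moment_init(hue, sat, K):
    """Initial alpha_k and (mu1, mu2, s1, s2, rho) for each region from K-means clusters."""
    data = np.column_stack([np.ravel(hue), np.ravel(sat)])
    labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(data)
    alphas = np.full(K, 1.0 / K)                 # alpha_i = 1/K (Rose H. Turi (2001))
    thetas = []
    for k in range(K):
        h, s = data[labels == k, 0], data[labels == k, 1]
        rho = np.corrcoef(h, s)[0, 1] if h.size > 1 else 0.0
        thetas.append((h.mean(), s.mean(), h.std(ddof=1), s.std(ddof=1), rho))
    return alphas, thetas, labels
```

These moment estimates then seed the EM iterations of Section III.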
# VI. EXPERIMENTAL RESULTS

To demonstrate the utility of the image segmentation algorithm developed in this paper, an experiment is conducted with six images taken from the Berkeley segmentation dataset (http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html). The images, namely OSTRICH, POT, TOWER, BEARS, DEER and BIRD, are considered for image segmentation. The feature vectors (Hue, Saturation) of the whole image are taken as input for image segmentation. The feature vectors of each image are assumed to follow a mixture of left truncated bivariate Gaussian distributions; that is, the image contains K regions and the feature vector of each image region follows a left truncated bivariate Gaussian distribution with different parameters. The number of segments in each of the six images considered for experimentation is determined from the histogram of pixel intensities. The histograms of the six images are shown in Figure 2.

Figure 2: Histograms of the images

The initial estimates of the number of regions K in each image are obtained and given in Table 1.

Table 1: Initial estimates of K
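Once the parameters of each image have been refined, Step 4 of Section V assigns every pixel to the component with maximum likelihood and reshapes the labels into a segmented image. The following minimal sketch (names are illustrative, and it reuses `left_truncated_bvn_pdf` from the earlier sketch) shows this assignment.

```python
import numpy as np

def segment_image(hue_img, sat_img, thetas):
    """Label each pixel with argmax_k g_k(x_s, y_s; theta_k) (Step 4, Section V)."""
    x, y = np.ravel(hue_img), np.ravel(sat_img)
    comp = np.array([left_truncated_bvn_pdf(x, y, *th) for th in thetas])  # (K, N)
    labels = comp.argmax(axis=0)
    return labels.reshape(np.shape(hue_img))
```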
Substituting the final estimates of the model parameters, the probability density function of the feature vector of each image is estimated. Using the estimated probability density functions and the image segmentation algorithm given in Section V, the image segmentation is carried out for each of the six images under consideration. The original and segmented images are shown in Figure 3.

# VII. PERFORMANCE EVALUATION

After conducting the experiment with the image segmentation algorithm developed in this paper, its performance is studied. The comparison is based on three performance measures, namely the Probabilistic Rand Index (PRI) given by Unnikrishnan R. et al (2007), the Variation of Information (VOI) given by Meila M. (2005), and the Global Consistency Error (GCE) given by Martin D. et al (2001). These objective measures evaluate a segmentation through regional similarity measures in relation to the local neighbourhood. The performance of the developed algorithm using the finite left truncated bivariate Gaussian mixture model (FLTBGMM) is studied by computing the segmentation performance measures PRI, GCE and VOI for the six images under study. The computed values of the performance measures for the developed algorithm and for the earlier finite Gaussian mixture model (GMM) with K-means are presented in Table 3 for a comparative study.

From Table 4 it is observed that all the image quality metrics for the images meet the standard criteria. This implies that, using the proposed algorithm, the images are retrieved accurately. A comparative study of the proposed algorithm with the algorithm based on the finite Gaussian mixture model (GMM) with K-means reveals that the mean square error of the proposed model is less than that of the finite GMM. Based on all the other quality metrics as well, it is observed that the performance of the proposed model in retrieving the images is better than that of the finite Gaussian mixture model.

# VIII. CONCLUSION

In this paper a new colour image segmentation method based on a finite left truncated bivariate Gaussian mixture model is introduced. Here it is assumed that the colour image is characterized by the HSI colour space, in which the Hue and Saturation values are nonnegative, so the feature vectors are characterized by a left truncated bivariate Gaussian mixture model. The left truncated bivariate Gaussian distribution includes the bivariate Gaussian distribution as a limiting case as the truncation point recedes to minus infinity, and it includes several platykurtic, mesokurtic, leptokurtic and skewed distributions as particular cases for different values of the parameters. The model parameters are estimated using the EM-algorithm. The initialization of the parameters and the number of image segments are determined through the K-means algorithm and the moment method of estimation. The segmentation algorithm is developed using the component maximum likelihood. The experimentation with six colour images reveals that this algorithm outperforms the existing algorithms in both image segmentation and image retrieval. The image quality metrics also support the utility of the proposed algorithm. It is possible to develop image segmentation algorithms with finite mixtures of doubly truncated multivariate Gaussian distributions with more image features, which requires further investigation.
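The image quality metrics used in the retrieval study of Section VII can be computed along the following lines. This is a sketch using common definitions (cf. Eskicioglu and Fisher (1995)) and may differ in detail from the authors' exact formulas; the image quality index is omitted here because its precise form is not given in the text.

```python
import numpy as np

def quality_metrics(original, retrieved):
    """Maximum distance, image fidelity, mean square error and SNR (in dB)."""
    f = np.asarray(original, dtype=float)
    g = np.asarray(retrieved, dtype=float)
    err = f - g
    return {
        "maximum_distance": float(np.max(np.abs(err))),
        "image_fidelity": float(1.0 - np.sum(err ** 2) / np.sum(f ** 2)),
        "mean_square_error": float(np.mean(err ** 2)),
        "snr_db": float(10.0 * np.log10(np.sum(f ** 2) / np.sum(err ** 2))),
    }
```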
# REFERENCES

* Bengt Muthen (1990). "Moments of the censored and truncated bivariate normal distribution", British Journal of Mathematical and Statistical Psychology, 43.
* Dipti Patra, J. Mridula and K. Kumar (2011). "Combining GLCM Features and Markov Random Field Model for Colour Textured Image Segmentation", Int. Conf. on Devices and Communications (ICDeCom).
* A.M. Eskicioglu and P.S. Fisher (1995). "Image Quality Measures and their Performance", IEEE Transactions on Communications, 43(12).
* A.A. Farag, A. El-Baz and G. Gimelfarb (2004). "Precise Image Segmentation by Iterative EM-Based Approximation of Empirical Grey Level Distributions with Linear Combinations of Gaussians", Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'04).
* F. Ferri and E. Vidal (1992). "Colour image segmentation and labelling through multiedit-condensing", Pattern Recognition Letters, 13(8).
* M.N. Fesharaki and G.R. Hellestrand (1992). "Real-time color image segmentation", Technical Report SCS&E 9316, University of New South Wales, Australia.
* Haralick and Shapiro (1985). "Survey: Image segmentation techniques", CVGIP, 29.
* Kaikuo Xu, Hongwei Zhang, Tianyun Yan, Wei Wei, Shaomin Wen and Qiang (2011). "An MDL Approach to Color Image Segmentation", International Conference on Multimedia and Signal Processing, vol. 2.
* Kang Feng, Wang Yaming and Zhao Yun (2009). "Flame Color Image Segmentation Based on Neural Network", International Forum on Computer Science-Technology and Applications.
* Z. Kato and Ting-Chuen Pong (2006). "A Markov random field image segmentation model for color textured images", Image and Vision Computing, 24(10).
* E. Lee, W. Kang, S. Kim and J. Paik (2010). "Color shift model-based image enhancement for digital multifocusing based on a multiple color-filter aperture camera", IEEE Trans. on Consumer Electronics, 2.
* L. Lucchese and S.K. Mitra (2001). "Color image segmentation: A state-of-the-art survey", Proc. Indian National Science Academy (INSA-A), 67.
* Majid Fakheri, T. Sedghi and M.C. Amirani (2010). "EM segmentation algorithm for colour image retrieval", 6th Iranian Conference on Machine Vision and Image Processing.
* Mantas Paulinas and Andrius Usinskas (2007). "A survey of genetic algorithms applications for image enhancement and segmentation", Information Technology and Control, 36(3).
* D. Martin, C. Fowlkes, D. Tal and J. Malik (2001). "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", Proc. 8th Int. Conference on Computer Vision, vol. 2.
* G. McLachlan and T. Krishnan (1997). "The EM Algorithm and Extensions", John Wiley and Sons, New York.
* G. McLachlan and D. Peel (2000). "The EM Algorithm for Parameter Estimations", John Wiley and Sons, New York.
* M. Meila (2005). "Comparing Clusterings - An Axiomatic View", Proc. 22nd Int. Conf. on Machine Learning.
* Norman L. Johnson, Samuel Kotz and Balakrishnan (1994). "Continuous Univariate Distributions", Volume I, John Wiley and Sons, New York.
* Rahman Farnoosh, Gholamhossein Yari and Behnam Zarpak (2008). "Image Segmentation using Gaussian Mixture Models", IUST International Journal of Engineering Science, 19(1-2).
* H. Rose Turi (2001). "Cluster Based Image Segmentation", PhD Thesis, Monash University, Australia.
* S.J. Sangwine and R.E.N. Horne (1998). "The Colour Image Processing Handbook".
* M. Seshashayee, K. Srinivasa Rao, Ch. Satyanarayana and P. Srinivasa Rao (2011). "Image segmentation based on a finite generalized new symmetric mixture model with K-means", Int. J. Computer Science Issues, 8(3).
* Shital Raut, M. Raghuvanshi, R. Dharaskar and A. Raut (2009). "Image Segmentation - A State-of-Art Survey for Prediction", Advanced Computer Control, ICACC'09 International Conference.
* Siddhartha Bhattacharyya (2011). "A Brief Survey of Color Image Preprocessing and Segmentation Techniques", Journal of Pattern Recognition Research.
* Y. Srinivas and K. Srinivasa Rao (2007). "Unsupervised image segmentation using finite doubly truncated Gaussian mixture model and hierarchical clustering", Journal of Current Science, 93(4).
* Y. Srinivas, K. Srinivasa Rao and P.V.G.D. Prasad Reddy (2010). "Unsupervised Image Segmentation Based on Finite Generalized Gaussian Mixture Model with Hierarchical Clustering", International Journal for Computational Vision and Biomechanics, 3(1).
* M. Sujaritha and S. Annadurai (2010). "Color image segmentation using Adaptive Spatial Gaussian Mixture Model", International Journal of Signal Processing, 6(1).
* R. Unnikrishnan, C. Pantofaru and M. Hebert (2007). "Toward objective evaluation of image segmentation algorithms", IEEE Trans. Pattern Anal. Mach. Intell., 29(6).