# Introduction

Clustering is one of the most popular approaches to unsupervised pattern recognition. The Fuzzy C-Means (FCM) algorithm [8] is a typical clustering algorithm that has been widely utilized in engineering and scientific disciplines such as medical imaging, bioinformatics, pattern recognition, and data mining. Because the basic FCM approach employs the squared norm to measure similarity between prototypes and data points, it is effective only for 'spherical' clusters, and many algorithms have been derived from FCM to cluster more general data sets. The FCM approach is also very sensitive to noise. To avoid this effect, Krishnapuram and Keller [1] removed the membership constraint of FCM and proposed the Possibilistic C-Means (PCM) algorithm [15]. Their reasoning is that, to classify a data point, the point must be close to its cluster centroid, which is the role of the membership; the typicality, in turn, is used in centroid estimation to alleviate the undesirable effect of outliers. Pal therefore proposed the Fuzzy Possibilistic C-Means (FPCM) algorithm, which combines the characteristics of both fuzzy and possibilistic c-means [9]-[14]. To enhance FPCM, the Modified Fuzzy Possibilistic C-Means (MFPCM) approach was introduced. It provides better results than the previous algorithms by modifying the objective function used in FPCM: a new weight of each data point with respect to every cluster is added, and the exponent of the distance between a point and a class is modified. The existing approach uses the probabilistic constraint that the memberships of a training sample across clusters must sum to 1, which means the grades of a training sample are shared among clusters but are not degrees of typicality. In contrast, each component created by FPCM belongs to a dense region in the data set, and each cluster is independent of the other clusters in the FPCM strategy. Memberships and typicalities are both important for correctly capturing the data substructure in a clustering problem. If a training sample has been correctly assigned to a suitable cluster, membership is a good measure of how close the sample is to that cluster, while typicality is an important factor for overcoming the undesirable effects of outliers when computing the cluster centers. To enhance MFPCM further, penalized and compensated constraints are incorporated. Yang [16] and Yang and Su [17] added a penalized term to fuzzy c-means to construct the penalized fuzzy c-means (PFCM) algorithm, and Lin [18] embedded a compensated term into FCM to create the compensated fuzzy c-means (CFCM) algorithm. In this paper the penalized and compensated constraints are combined with MFPCM to obtain the Penalized and Compensated constraints based Modified Fuzzy Possibilistic C-Means clustering algorithm (PCMFPCM). The remainder of this paper is organized as follows. Section II discusses related work. Section III presents the proposed methodology.
Experimental studies with two data sets are given in Section IV, and Section V concludes the paper.

# II. Related Works

Clustering is a widely used approach in most data mining systems, and among clustering algorithms the fuzzy c-means family is found to be efficient. This section reviews some of the literature on fuzzy possibilistic c-means approaches to the clustering problem.

In 1997, Pal et al. proposed the Fuzzy Possibilistic C-Means (FPCM) algorithm, which generates both membership and typicality values when clustering unlabeled data. FPCM constrains the typicality values so that, for each cluster, the typicalities over all data points sum to one; for large data sets this row-sum constraint produces unrealistic typicality values. To address this, a possibilistic-fuzzy c-means (PFCM) model was presented in [1]. PFCM produces memberships and possibilities simultaneously, along with the usual point prototypes or cluster centers for each cluster. PFCM is a hybridization of fuzzy c-means (FCM) and possibilistic c-means (PCM) that often avoids various problems of PCM, FCM and FPCM: it resolves the noise-sensitivity defect of FCM, overcomes the coincident-cluster problem of PCM, and removes the row-sum constraint of FPCM. The first-order necessary conditions for extrema of the PFCM objective function are derived and used as the basis for a standard alternating-optimization approach to finding local minima of the PFCM objective functional. Numerical examples in [1] compare FCM and PCM with PFCM and illustrate that PFCM compares favorably to both of the previous models. Since PFCM prototypes are less sensitive to outliers and coincident clusters can be avoided, PFCM is a strong candidate for fuzzy rule-based system identification.

Xiao-Hong et al. [3] presented a possibilistic fuzzy c-means clustering model using kernel methods, called the kernel possibilistic fuzzy c-means model (KPFCM). KPFCM is an improvement of the possibilistic fuzzy c-means model (PFCM), which in turn is superior to the fuzzy c-means (FCM) model. Unlike PFCM and FCM, which are based on the Euclidean distance, the KPFCM model uses a non-Euclidean distance obtained through kernel methods: the input data are mapped implicitly into a high-dimensional feature space where nonlinear patterns appear linear. KPFCM can deal with noise and outliers better than PFCM, and the experimental results show better performance of KPFCM.

Ojeda-Magaña et al. [4] proposed a new technique that uses the Gustafson-Kessel (GK) algorithm within PFCM, so that the cluster distributions adapt better to the natural distribution of the data. PFCM, proposed by Pal et al. in 2005, combines the fuzzy membership degrees of FCM with the typicality values of PCM; however, it uses the Euclidean distance, which yields circular clusters. By combining the GK algorithm with the Mahalanobis distance measure, ellipsoidal cluster shapes can also be obtained, allowing a better representation of the clusters.

Chunhui et al. [6] presented a similarity-based fuzzy and possibilistic c-means algorithm called SFPCM.
It is derived from the original fuzzy-possibilistic c-means (FPCM) algorithm proposed by Bezdek. The difference between the two algorithms is that SFPCM processes relational data, while the original FPCM processes propositional data. Experiments were performed on 22 data sets from the UCI repository to compare SFPCM with FPCM. The results show that the two algorithms generate similar results on the same data sets; SFPCM performs slightly better than FPCM in terms of classification accuracy and also converges more quickly on these data sets.

Yang et al. [5] put forth an unlabeled-data clustering method using possibilistic fuzzy c-means (PFCM). PFCM is the combination of possibilistic c-means (PCM) and fuzzy c-means (FCM); it has been shown to solve the noise-sensitivity issue of FCM while avoiding the coincident-cluster problem of PCM, as demonstrated with numerical examples on low-dimensional data sets. Their paper further evaluates PFCM for high-dimensional data and presents a revised version of PFCM called Hyperspherical PFCM (HPFCM), in which the original PFCM objective function is modified so that a cosine similarity measure can be incorporated. When compared with several traditional and recent clustering algorithms for automatic document categorization, HPFCM performs better. The study shows that HPFCM is promising for handling complex high-dimensional data sets and achieves more stable performance; the remaining problems of the PFCM approach are also discussed.

A robust interval type-2 possibilistic c-means (IT2PCM) clustering algorithm is presented by Long Yu et al. [6]. It is essentially alternating cluster estimation, but the membership functions are selected as interval type-2 fuzzy sets by the users. The cluster prototypes are computed by type reduction combined with defuzzification, so they can be directly extracted to generate interval type-2 fuzzy rules that provide a first approximation to an interval type-2 fuzzy logic system (IT2FLS). The IT2PCM clustering algorithm is robust to uncertain inliers and outliers and, at the same time, provides a good initial structure of the IT2FLS for further tuning in a subsequent process.

Sreenivasarao et al. [2] presented a comparative analysis of the Fuzzy C-Means and Modified Fuzzy Possibilistic C-Means algorithms in data mining. The FCM and MFPCM clustering algorithms are studied comparatively: the performance of FCM is analyzed and compared with that of MFPCM, and the complexity of both is measured for different data sets. FCM employs fuzzy partitioning, such that a point can belong to all groups with different membership grades between 0 and 1, whereas MFPCM employs possibilistic partitioning. The authors conclude that fuzzy clustering, which constitutes the oldest component of soft computing, is suitable for handling issues related to understandability of patterns, incomplete/noisy data, mixed-media information and human interaction, and can provide approximate solutions faster. The proposed approach for clustering unlabeled data is presented in the following section.

# III. Methodology

# 1) Fuzzy Possibilistic Clustering Algorithm

The fuzzified version of the k-means algorithm is Fuzzy C-Means (FCM) [8], which is widely used in pattern recognition. It is an iterative clustering approach that produces an optimal c-partition by minimizing the weighted within-group sum of squared errors objective function J_FCM:

$$J_{FCM}(U,V,X)=\sum_{j=1}^{n}\sum_{i=1}^{c} u_{ij}^{m}\, d^{2}(x_j,v_i),\qquad 1<m<+\infty \tag{1}$$

In this equation, X = {x1, x2, ..., xn} ⊆ R^p is the data set in the p-dimensional vector space, n is the number of data items, and c is the number of clusters with 2 ≤ c ≤ n − 1. V = {v1, v2, ..., vc} is the set of c centers or prototypes of the clusters, vi is the p-dimensional center of cluster i, and d^2(xj, vi) is a distance measure between object xj and cluster center vi. U = {uij} is a fuzzy partition matrix in which uij = ui(xj) is the degree of membership of xj in the ith cluster, and xj is the jth p-dimensional data point. The fuzzy partition matrix satisfies:

$$0\le \sum_{j=1}^{n} u_{ij}\le n,\qquad \forall i\in\{1,\dots,c\} \tag{2}$$

$$\sum_{i=1}^{c} u_{ij}=1,\qquad \forall j\in\{1,\dots,n\} \tag{3}$$

Here m is a weighting exponent on each fuzzy membership that determines the amount of fuzziness of the resulting classification; it is a fixed number greater than one.
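
As a concrete illustration of Eq. (1) and constraints (2)-(3), the short NumPy sketch below evaluates the FCM objective for a given partition and checks the column-sum constraint. The array layout, variable names and toy data are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def fcm_objective(X, U, V, m=2.0):
    """J_FCM of Eq. (1): X is (n, p) data, U is (c, n) memberships, V is (c, p) centers."""
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # d2[i, j] = ||x_j - v_i||^2
    return float(np.sum((U ** m) * d2))

# Toy usage: 6 points, 2 clusters (values chosen only for illustration).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
V = np.array([[0.1, 0.1], [5.0, 5.0]])
U = np.array([[0.9, 0.9, 0.9, 0.1, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]])
assert np.allclose(U.sum(axis=0), 1.0)  # constraint (3): memberships of each point sum to 1
print(fcm_objective(X, U, V))
```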
Minimizing J_FCM under the constraint on U, specifically by taking the derivatives of J_FCM with respect to uij and vi and setting them to zero (necessary but not sufficient conditions for J_FCM to be at a local extremum), yields:

$$u_{ij}=\left[\sum_{k=1}^{c}\left(\frac{d(x_j,v_i)}{d(x_j,v_k)}\right)^{\frac{2}{m-1}}\right]^{-1},\qquad 1\le i\le c,\ 1\le j\le n \tag{4}$$

$$v_i=\frac{\sum_{j=1}^{n} u_{ij}^{m}\,x_j}{\sum_{j=1}^{n} u_{ij}^{m}},\qquad 1\le i\le c \tag{5}$$

In a noisy environment, the memberships of FCM do not always correspond well to the degree of belonging of the data and may be inaccurate, mainly because real data unavoidably contain noise. To remedy this weakness of FCM, the constrained condition (3) of the fuzzy c-partition is dropped to obtain a possibilistic type of membership function, and PCM is proposed for unsupervised clustering. Each component generated by PCM belongs to a dense region in the data set, and each cluster is independent of the other clusters in the PCM strategy. The objective function of PCM is:

$$J_{PCM}(U,V,X)=\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\, d^{2}(x_j,v_i)+\sum_{i=1}^{c}\eta_i\sum_{j=1}^{n}\left(1-u_{ij}\right)^{m} \tag{6}$$

where

$$\eta_i=\frac{\sum_{j=1}^{n} u_{ij}^{m}\,\lVert x_j-v_i\rVert^{2}}{\sum_{j=1}^{n} u_{ij}^{m}} \tag{7}$$

is the scale parameter of the ith cluster, and

$$u_{ij}=\frac{1}{1+\left(\dfrac{d^{2}(x_j,v_i)}{\eta_i}\right)^{\frac{1}{m-1}}} \tag{8}$$

Here uij is the possibilistic typicality value of training sample xj belonging to cluster i, and m ∈ [1, ∞) is a weighting factor called the possibilistic parameter. Like other clustering approaches, PCM depends on initialization. In PCM the clusters do not have much mobility, since each data point is classified into only one cluster at a time rather than into all clusters simultaneously; consequently a suitable initialization is necessary for the algorithm to converge to a nearly global minimum.

FPCM incorporates the characteristics of both the fuzzy and the possibilistic c-means approaches. Memberships and typicalities are both important for correctly capturing the data substructure in a clustering problem, so the FPCM objective function depends on both memberships and typicalities:

$$J_{FPCM}(U,T,V)=\sum_{i=1}^{c}\sum_{j=1}^{n}\left(u_{ij}^{m}+t_{ij}^{\eta}\right) d^{2}(x_j,v_i) \tag{9}$$

with the following constraints:

$$\sum_{i=1}^{c} u_{ij}=1,\qquad \forall j\in\{1,\dots,n\} \tag{3}$$

$$\sum_{j=1}^{n} t_{ij}=1,\qquad \forall i\in\{1,\dots,c\} \tag{10}$$

A solution of the objective function is obtained through an iterative process in which the degrees of membership, the typicalities and the cluster centers are updated as follows:

$$u_{ij}=\left[\sum_{k=1}^{c}\left(\frac{d(x_j,v_i)}{d(x_j,v_k)}\right)^{\frac{2}{m-1}}\right]^{-1},\qquad 1\le i\le c,\ 1\le j\le n \tag{4}$$

$$t_{ij}=\left[\sum_{k=1}^{n}\left(\frac{d(x_j,v_i)}{d(x_k,v_i)}\right)^{\frac{2}{\eta-1}}\right]^{-1},\qquad 1\le i\le c,\ 1\le j\le n \tag{11}$$

$$v_i=\frac{\sum_{j=1}^{n}\left(u_{ij}^{m}+t_{ij}^{\eta}\right)x_j}{\sum_{j=1}^{n}\left(u_{ij}^{m}+t_{ij}^{\eta}\right)},\qquad 1\le i\le c \tag{12}$$

PFCM constructs memberships and possibilities simultaneously, along with the usual point prototypes or cluster centers for each cluster. PFCM is a hybridization of possibilistic c-means (PCM) and fuzzy c-means (FCM) that often avoids various problems of PCM, FCM and FPCM: the noise-sensitivity defect of FCM is solved by PFCM, which also overcomes the coincident-cluster problem of PCM. However, the estimation of the centroids is still influenced by noisy data.
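
To make the alternating FPCM updates of Eqs. (4), (11) and (12) concrete, the sketch below iterates them on synthetic data. It assumes squared Euclidean distances, random prototype initialization and a fixed iteration count; the function name and these choices are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def fpcm(X, c, m=2.0, eta=2.0, iters=100, eps=1e-10, seed=0):
    """Alternating optimization of the FPCM updates in Eqs. (4), (11) and (12)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, size=c, replace=False)]                        # initial prototypes
    for _ in range(iters):
        d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(axis=2) + eps  # (c, n) squared distances
        U = d2 ** (-1.0 / (m - 1.0))                                   # Eq. (4) numerators
        U /= U.sum(axis=0, keepdims=True)                              # normalize over clusters
        T = d2 ** (-1.0 / (eta - 1.0))                                 # Eq. (11) numerators
        T /= T.sum(axis=1, keepdims=True)                              # normalize over points
        W = U ** m + T ** eta                                          # Eq. (12) coefficients
        V = (W @ X) / W.sum(axis=1, keepdims=True)                     # updated prototypes
    return U, T, V

# Toy usage on two synthetic Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(4.0, 0.3, (20, 2))])
U, T, V = fpcm(X, c=2)
print(np.round(V, 2))
```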
# 2) Modified Fuzzy Possibilistic C-Means Technique (MFPCM)

The objective function is essential to the quality of the clustering results. Wen-Liang Hung presented a Modified Suppressed Fuzzy C-Means (MS-FCM) algorithm, which significantly improves the performance of FCM through a prototype-driven learning of the parameter α [19]. The learning of α is based on an exponential separation strength between clusters and is updated at each iteration:

$$\alpha=\exp\left(-\frac{\min_{i\ne k}\lVert v_i-v_k\rVert^{2}}{\sigma}\right) \tag{13}$$

where σ is a normalizing term chosen as the sample variance, that is,

$$\sigma=\frac{\sum_{j=1}^{n}\lVert x_j-\bar{x}\rVert^{2}}{n},\qquad \bar{x}=\frac{\sum_{j=1}^{n} x_j}{n}$$

A remark must be made here: a common value of this parameter is used by all the data points at each iteration, which may induce errors. A new parameter is therefore introduced that suppresses this common value of α and replaces it with a weight attached to each vector; in other words, every point of the data set has a weight with respect to every cluster. This weight permits a better classification, especially in the case of noisy data, and is computed as:

$$w_{ji}=\exp\left(-\frac{\lVert x_j-v_i\rVert^{2}}{\frac{c}{n}\sum_{k=1}^{n}\lVert x_k-\bar{x}\rVert^{2}}\right) \tag{14}$$

where wji is the weight of point j with respect to class i. This weight is used to modify the fuzzy and typical partitions. The objective function is composed of two expressions: the first is the fuzzy term and uses a fuzziness weighting exponent, and the second is the possibilistic term and uses a typicality weighting exponent; in FPCM, however, the two exponents are applied only to the membership and the typicality. A slightly different relation gives a more rapid decrease of the objective function: it increases the membership and the typicality when they tend toward 1 and decreases them when they tend toward 0, by also using the weighting exponents as exponents of the distance in the two partial objective functions. The objective function of MFPCM is:

$$J_{MFPCM}=\sum_{i=1}^{c}\sum_{j=1}^{n}\left[u_{ij}^{m}\,w_{ji}^{m}\,d^{2m}(x_j,v_i)+t_{ij}^{\eta}\,w_{ji}^{\eta}\,d^{2\eta}(x_j,v_i)\right] \tag{15}$$

U = {uij} is the fuzzy partition matrix, defined as:

$$u_{ij}=\left[\sum_{k=1}^{c}\left(\frac{d(x_j,v_i)}{d(x_j,v_k)}\right)^{\frac{2m}{m-1}}\right]^{-1} \tag{16}$$

T = {tij} is the typical partition matrix, defined as:

$$t_{ij}=\left[\sum_{k=1}^{n}\left(\frac{d(x_j,v_i)}{d(x_k,v_i)}\right)^{\frac{2\eta}{\eta-1}}\right]^{-1} \tag{17}$$

V = {vi} is the set of c cluster centers, defined as:

$$v_i=\frac{\sum_{j=1}^{n}\left(u_{ij}^{m}w_{ji}^{m}+t_{ij}^{\eta}w_{ji}^{\eta}\right)x_j}{\sum_{j=1}^{n}\left(u_{ij}^{m}w_{ji}^{m}+t_{ij}^{\eta}w_{ji}^{\eta}\right)} \tag{18}$$
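
The following sketch shows one way to compute the point weights of Eq. (14) and the weighted prototype update of Eq. (18), assuming squared Euclidean distances; the helper names and array shapes are assumptions made for illustration, not code from the paper.

```python
import numpy as np

def mfpcm_weights(X, V):
    """Eq. (14): w[i, j] = exp(-||x_j - v_i||^2 / (sample_variance * c))."""
    c = V.shape[0]
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)      # (c, n)
    sigma = ((X - X.mean(axis=0)) ** 2).sum(axis=1).mean()       # sample variance of the data
    return np.exp(-d2 / (sigma * c))

def mfpcm_prototypes(X, U, T, W, m=2.0, eta=2.0):
    """Eq. (18): prototypes weighted by u^m w^m + t^eta w^eta (U, T, W all shaped (c, n))."""
    coef = (U ** m) * (W ** m) + (T ** eta) * (W ** eta)
    return (coef @ X) / coef.sum(axis=1, keepdims=True)
```

Keeping U, T and W as (c, n) arrays makes each update an element-wise product followed by a single matrix product with X, which mirrors the structure of Eqs. (15)-(18).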
# 3) Penalized and Compensated Constraints Based Modified Fuzzy Possibilistic C-Means (PCMFPCM)

The penalized and compensated constraints are embedded into the Modified Fuzzy Possibilistic C-Means algorithm discussed above. The objective function of MFPCM is given in equation (15); in the proposed approach, penalized and compensated terms are combined with it to construct the objective function of PCMFPCM. The penalized term is

$$\frac{1}{2}\,\nu\sum_{x=1}^{n}\sum_{i=1}^{c}\left(\mu_{i,x}^{m}\ln\alpha_i+t_{i,x}^{m}\ln\alpha_i\right) \tag{19}$$

where

$$\alpha_i=\frac{\sum_{x=1}^{n}\mu_{i,x}}{\sum_{i=1}^{c}\sum_{x=1}^{n}\mu_{i,x}},\qquad i=1,2,\dots,c$$

$$\beta_x=\frac{\sum_{i=1}^{c}\mu_{i,x}}{\sum_{x=1}^{n}\sum_{i=1}^{c}\mu_{i,x}},\qquad x=1,2,\dots,n$$

Here αi is a proportional constant of class i, βx is a proportional constant of training vector zx, and ν (ν ≥ 0) and δ (δ ≥ 0) are also constants. The membership μi,x and typicality ti,x for the penalized term are updated as

$$\mu_{i,x}=\frac{\left(\lVert z_x-a_i\rVert^{2}-\nu\ln\alpha_i\right)^{-\frac{1}{m-1}}}{\sum_{j=1}^{c}\left(\lVert z_x-a_j\rVert^{2}-\nu\ln\alpha_j\right)^{-\frac{1}{m-1}}},\qquad i=1,2,\dots,c,\ x=1,2,\dots,n$$

$$t_{i,x}=\frac{\left(\lVert z_x-a_i\rVert^{2}-\nu\ln\alpha_i\right)^{-\frac{1}{m-1}}}{\sum_{y=1}^{n}\left(\lVert z_y-a_i\rVert^{2}-\nu\ln\alpha_i\right)^{-\frac{1}{m-1}}},\qquad i=1,2,\dots,c,\ x=1,2,\dots,n$$

where ai = vi is the centroid of cluster i. The compensated term is

$$\frac{1}{2}\,\delta\sum_{x=1}^{n}\sum_{i=1}^{c}\left(\mu_{i,x}^{m}\tanh\beta_x+t_{i,x}^{m}\tanh\beta_x\right) \tag{20}$$

and the corresponding membership μi,x and typicality ti,x for the compensated term are

$$\mu_{i,x}=\frac{\left(\lVert z_x-a_i\rVert^{2}-\delta\tanh\beta_x\right)^{-\frac{1}{m-1}}}{\sum_{j=1}^{c}\left(\lVert z_x-a_j\rVert^{2}-\delta\tanh\beta_x\right)^{-\frac{1}{m-1}}},\qquad i=1,2,\dots,c,\ x=1,2,\dots,n$$

$$t_{i,x}=\frac{\left(\lVert z_x-a_i\rVert^{2}-\delta\tanh\beta_x\right)^{-\frac{1}{m-1}}}{\sum_{y=1}^{n}\left(\lVert z_y-a_i\rVert^{2}-\delta\tanh\beta_x\right)^{-\frac{1}{m-1}}},\qquad i=1,2,\dots,c,\ x=1,2,\dots,n$$

The centroid of the ith cluster is computed in the same way as in Eq. (18). To obtain an efficient clustering, the penalized term is subtracted from, and the compensated term added to, the basic objective function of MFPCM, which yields the objective function of PCMFPCM:

$$J_{PCMFPCM}=\sum_{i=1}^{c}\sum_{j=1}^{n}\left[u_{ij}^{m}w_{ji}^{m}d^{2m}(x_j,v_i)+t_{ij}^{\eta}w_{ji}^{\eta}d^{2\eta}(x_j,v_i)\right]-\frac{1}{2}\,\nu\sum_{x=1}^{n}\sum_{i=1}^{c}\left(\mu_{i,x}^{m}\ln\alpha_i+t_{i,x}^{m}\ln\alpha_i\right)+\frac{1}{2}\,\delta\sum_{x=1}^{n}\sum_{i=1}^{c}\left(\mu_{i,x}^{m}\tanh\beta_x+t_{i,x}^{m}\tanh\beta_x\right) \tag{21}$$
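
As a rough sketch of how the reconstructed objective of Eq. (21) could be evaluated for given memberships, typicalities, weights and prototypes, the function below combines the MFPCM term of Eq. (15) with the penalized term (19) and the compensated term (20). The signs, groupings and variable names follow the reconstruction above and should be treated as assumptions rather than the authors' exact formulation.

```python
import numpy as np

def pcmfpcm_objective(X, U, T, W, V, m=2.0, eta=2.0, nu=1.0, delta=1.0):
    """Eq. (21) as reconstructed: MFPCM term minus penalized term plus compensated term."""
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)                  # (c, n)
    base = ((U ** m) * (W ** m) * d2 ** m
            + (T ** eta) * (W ** eta) * d2 ** eta).sum()                     # Eq. (15), d^(2m) and d^(2*eta)
    alpha = U.sum(axis=1) / U.sum()                                          # class proportions alpha_i
    beta = U.sum(axis=0) / U.sum()                                           # point proportions beta_x
    penal = 0.5 * nu * ((U ** m + T ** m) * np.log(alpha)[:, None]).sum()    # Eq. (19)
    comp = 0.5 * delta * ((U ** m + T ** m) * np.tanh(beta)[None, :]).sum()  # Eq. (20)
    return float(base - penal + comp)
```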
# IV. Experimental Results

The proposed approach for clustering unlabeled data is evaluated on the Iris data set from the UCI Machine Learning Repository. All algorithms are implemented with the same initial values and stopping conditions. The experiments are performed on a GENX computer with a 2.6 GHz Core 2 Duo processor using MATLAB version 7.5. The Iris data set contains 150 patterns with 4 dimensions and 3 classes, and is one of the most widely used benchmark data sets for clustering.

The objective function values obtained when clustering the Iris data with the proposed clustering technique and with the existing clustering techniques are shown in Table 1.

Table 1: Objective Function for Different Clustering Methods

| Clustering Method | Class 1 | Class 2 | Class 3 |
|---|---|---|---|
| FPCM | 10.76 | 11.12 | 10.21 |
| MFPCM | 10.66 | 11.01 | 10.11 |
| PCMFPCM (proposed) | 10.23 | 10.67 | 9.96 |

For class 1, the objective function obtained with the proposed technique is 10.23, which is lower than the values obtained with FPCM and MFPCM, i.e. 10.76 and 10.66 respectively. This indicates that the proposed technique results in better clustering than the existing techniques. For class 2, the objective function values of the existing methods are 11.12 and 11.01, whereas the proposed clustering technique attains 10.67, which is much lower than the conventional methods. The objective function obtained for class 3 using the proposed technique is 9.96, which is again lower than the values of FPCM and MFPCM, i.e. 10.21 and 10.11. From these results it can be clearly seen that the proposed technique produces better clusters than the existing techniques.

Figure 1: Objective Function Comparison for the Proposed Technique and the Existing Techniques.

The comparison of the proposed and existing techniques in terms of their objective functions is shown in Figure 1. It can be clearly observed that the proposed clustering technique yields a lower objective function for all the considered classes of the Iris data set than the existing techniques, which indicates that the proposed clustering technique will produce better clusters for large databases than the conventional techniques.

# V. Conclusion

Fuzzy clustering is considered one of the oldest components of soft computing; it is suitable for handling issues related to the understandability of patterns, incomplete/noisy data and mixed-media information, and is mainly used in data mining technologies. In this paper, a penalized and compensated constraints based fuzzy possibilistic c-means clustering algorithm is presented, developed to obtain better-quality clustering results. The need for both membership and typicality values in clustering is argued, and a clustering model named PCMFPCM is proposed. The proposed PCMFPCM approach differs from the conventional FPCM, PFCM and CFCM by imposing the possibilistic reasoning strategy on fuzzy clustering with penalized and compensated constraints for updating the grades of membership and typicality. The experimental results show that the proposed PCMFPCM approach performs better clustering, and the value of the objective function is much reduced compared with the conventional fuzzy clustering approaches.

# References

* Yang Yan and Lihui Chen, "Hyperspherical possibilistic fuzzy c-means for high-dimensional data clustering," ICICS 2009, 7th International Conference on Information, Communications and Signal Processing, 2009.
* Long Yu, Jian Xiao and Gao Zheng, "Robust Interval Type-2 Possibilistic C-means Clustering and its Application for Fuzzy Modeling," FSKD '09, Sixth International Conference on Fuzzy Systems and Knowledge Discovery, vol. 4, 2009.
* Chunhui Zhang, Yiming Zhou and Trevor Martin, "Similarity Based Fuzzy and Possibilistic c-means Algorithm," Proceedings of the 11th Joint Conference on Information Sciences, 2008.
* J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
* M. Barni, V. Cappellini and A. Mecocci, "Comments on 'A possibilistic approach to clustering'," IEEE Transactions on Fuzzy Systems, vol. 4, 1996.
* M. R. Berthold and D. J. Hand, Intelligent Data Analysis, Springer-Verlag, Berlin, Germany, 1999.
* M. W. Berry, Survey of Text Mining, Springer-Verlag, New York, NY, USA, 2003.
* N. R. Pal, K. Pal and J. C. Bezdek, "A mixed c-means clustering model," Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, vol. 1, Jul. 1997.
* K. Lung, "A cluster validity index for fuzzy clustering," Pattern Recognition Letters, vol. 25, 2005.
* U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R. Uthurusamy, Advances in Knowledge Discovery and Data Mining, AAAI Press and MIT Press, Menlo Park and Cambridge, MA, USA, 1996.
* M. S. Yang, "On a class of fuzzy classification maximum likelihood procedures," Fuzzy Sets and Systems, vol. 57, 1993.
* M. S. Yang and C. F. Su, "On parameter estimation for normal mixtures based on fuzzy clustering algorithms," Fuzzy Sets and Systems, vol. 68, 1994.
* J. S. Lin, "Fuzzy clustering using a compensated fuzzy Hopfield network," Neural Processing Letters, vol. 10, 1999.
* W. L. Hung, D. Yang Chen, "Parameter selection for suppressed fuzzy c-means with an application to MRI segmentation," Pattern Recognition Letters, 2005.
* Sreenivasarao et al., "Comparative Analysis of Fuzzy C-Mean and Modified Fuzzy Possibilistic C-Mean Algorithms in Data Mining," IJCST, vol. 1, September 2010.
* Xiao-Hong Wu and Jian-Jiang Zhou, "Possibilistic Fuzzy c-Means Clustering Model Using Kernel Methods," International Conference on Intelligent Agents, Web Technologies and Internet Commerce, vol. 2, 2005.
* B. Ojeda-Magaña, R. Ruelas, M. A. Corona-Nakamura and D. Andina, "An Improvement to the Possibilistic Fuzzy c-Means Clustering Algorithm," Automation Congress, WAC '06, 2006.