# Introduction

The digital imaging revolution in the medical domain over the past three decades has changed the way present-day physicians diagnose and treat diseases. Images of various modalities play an important role in capturing anatomical and functional information about different body parts for diagnosis, medical research, and education. Modern medical information systems need to handle these valuable resources effectively and efficiently. Currently, the utilization of medical images is limited by the lack of effective search methods; text-based searches have been the dominant approach to medical image database management [1,2]. Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision to the image retrieval problem. Research in CBIR is today an extremely active discipline. There are already review articles containing references to a large number of systems and descriptions of the technology implemented [1,2], and a more recent review [3] reports tremendous growth in publications on this topic. Applications of CBIR systems to medical domains already exist [4], although most of the systems currently available are based on radiological images. For keyword-based text retrieval, current techniques already meet users' expectations. The same cannot be said for the retrieval of digital multimedia on the network, such as images, video and other media. Search engines dedicated to particular multimedia formats exist, for example the Google image search engine, but these still rely on text and keyword matching. The goal of CBIR is instead to retrieve images based on visual features such as color, texture and shape.

Earlier commercial content-based image retrieval systems characterized images by global features such as color histograms, texture values and shape parameters; for medical images, however, systems using global image features fail to capture the relevant information [5]. Color is one of the most widely used low-level visual features and is invariant to image size and orientation [6]. Various texture representations have been investigated in pattern recognition and computer vision. Texture representation methods can be broadly classified into two categories: structural and statistical. Structural methods, including morphological operators and adjacency graphs, describe texture by identifying structural primitives and their placement rules [7]; they tend to be most effective when applied to textures that are very regular. Statistical methods, including Fourier power spectra, co-occurrence matrices, Zernike moments, shift-invariant principal component analysis (SPCA), Markov random fields, fractal models, and multiresolution filtering techniques such as the Gabor and wavelet transforms, characterize texture by the statistical distribution of the image intensity [8]. Shape has been one of the most important and effective low-level visual features for characterizing much of the pathology identified by medical experts [9]. The use of shape as a feature is less developed than the use of color or texture, mainly because of the inherent complexity of representing it [10].
Yet, retrieval by shape has the potential of being the most effective search technique in many application fields [11]. The Support Vector Machine (SVM) [12] is an approximate implementation of the structural risk minimization (SRM) principle. It creates a classifier with minimized Vapnik-Chervonenkis (VC) dimension and minimizes an upper bound on the generalization error rate. The SVM can provide good generalization performance on pattern classification problems without incorporating problem domain knowledge. One key design task when constructing image databases is the creation of an effective relevance feedback component. While it is sometimes possible to arrange images within an image database by creating a hierarchy, or by hand-labeling each image with descriptive words, doing so is often time-consuming, costly and subjective. Alternatively, requiring the end-user to specify an image query in terms of low-level features (such as color and spatial relationships) is challenging, because such a query is hard to articulate, and the articulation can again be subjective. Recent efforts in content-based medical image retrieval include the ASSERT system [13] for high resolution computed tomography (CT) images of the lung and the image retrieval for medical applications (IRMA) system [14] for the classification of images into anatomical areas, modalities and viewpoints. The flexible image retrieval engine (FIRE) system handles different kinds of medical data as well as non-medical data such as photographic databases [15].

The rest of the paper is organized as follows. Section II discusses the steps of the proposed image retrieval method: fuzzy texton detection, the discrete shearlet transform, support vector machines (SVM) for relevance feedback, and texel calculation. Section III presents experiments, performance evaluations, and discussion.

# II. Proposed Method

An efficient image retrieval technique is required to improve the success rate with the rapid increase in the usage of digital media. The block diagram of the proposed CBIR system is shown in Figure 1.

# a) Extraction of Fuzzy Texton Images

The fuzzy texton describes the texture property better than the plain texton, since it uses the fuzzy texture unit (FTU) instead of the texture unit (TU). Including the fuzzy concept in the texton generates all ranges of values, including '2', in the FTU [16,17]; as a result, 5^8 different textures can be represented instead of 4^8. The main idea of the proposed method (TFT) is that texton images are obtained by applying the 12 types of 3x3 texton templates [18]. According to Julesz, a texton is a pattern which is shared by an image as a common property [19]. Textures can be decomposed into elementary units, and textons are formed only if the adjacent elements lie within the neighborhood. The critical distances between texture elements, which depend on the texture element size, are used to incline the texton. Textons are classes of colors, elongated blobs of specific width, orientation and aspect ratio, and terminators of elongated blobs. If texture elements are greatly expanded in one orientation, discrimination is reduced; if the elongated elements are not jittered in orientation, texture gradients increase at boundaries. Thus a small sub-image of size 3x3 is used to obtain the texton gradient. In our previously proposed technique TFT [18], we proposed 12 textons of 3x3 grids.
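To make the template-based detection step concrete, the following is a minimal sketch of how one 3x3 texton template can be slid over a single image plane. The 12 actual templates are those shown in Figure 3 and are not reproduced here; the diagonal template below is purely illustrative, the rule that a texton is formed when the three marked pixels share the same value follows the description of the detection process later in this section, and how the 12 template responses are combined into the final texton image is not shown.

```python
import numpy as np

# Hypothetical example of ONE 3x3 texton template (the paper uses 12 such
# templates, shown in Figure 3).  A value of 1 marks a position that
# participates in the texton; 0 positions are ignored.
TEMPLATE = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]])  # diagonal template, for illustration only

def detect_textons(channel, template=TEMPLATE):
    """Slide a 3x3 template over one image plane (e.g. an H, S or V plane).

    If the pixels at the template's marked positions all share the same
    value, the 3x3 block forms a texton and its values are kept; otherwise
    the block contributes nothing to the texton image.
    """
    h, w = channel.shape
    texton_image = np.zeros_like(channel)
    rr, cc = np.nonzero(template)           # offsets of the marked positions
    for i in range(h - 2):
        for j in range(w - 2):
            block = channel[i:i + 3, j:j + 3]
            marked = block[rr, cc]
            if np.all(marked == marked[0]):  # the marked pixels agree
                texton_image[i:i + 3, j:j + 3] = block
    return texton_image

if __name__ == "__main__":
    demo = np.random.randint(0, 8, size=(16, 16))
    print(detect_textons(demo))
```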
The computational complexity of using the overlapping components of the 12 textons to obtain the final texton image is also low. The basic unit of the method is defined by a central pixel and its eight neighbors, forming a 3x3 pixel square. This minimal square image carries the local texture information of the central pixel in all directions; in our case the size of the neighborhood is 3x3 pixels. This image pattern, consisting of 9 pixels, is denoted by a set V of nine elements, V = {V_0, V_1, V_2, ..., V_8}, where V_0 is the intensity value of the central pixel and V_i (1 <= i <= 8) the intensity value of each neighboring pixel. The smallest complete unit which best characterizes the local texture of a given pixel and its neighborhood, in all eight directions of a square raster, is the Texture Unit (TU), defined by TU = {E_1, E_2, ..., E_8}, where

$$E_i = \begin{cases} 0 & \text{if } V_i < V_0 \text{ and } V_i < p \\ 1 & \text{if } V_i < V_0 \text{ and } V_i > p \\ 2 & \text{if } V_i = V_0 \\ 3 & \text{if } V_i > V_0 \text{ and } V_i < q \\ 4 & \text{if } V_i > V_0 \text{ and } V_i > q \end{cases} \qquad 1 \le i \le 8 \tag{1}$$

Here p and q are user-defined values and each element E_i occupies the same position as pixel i. An example is shown in Figure 2. Since each element of the TU can take one of five possible values 0, 1, 2, 3 or 4, the total number of Texture Units is 5^8 = 390625. These units can be labeled and ordered in different ways; here we label each TU with a base-5 number, the Texture Unit Number N_TU, according to the following formula:

$$N_{TU} = \sum_{i=1}^{8} E_i \cdot 5^{\,i-1} \tag{2}$$

where i is the position of the Texture Unit box and E_i is the value of the box (0, 1, 2, 3 or 4). Moreover, the 8 elements can be ordered in different ways. If they are ordered clockwise, as shown in Figure 2(c), the first element can take eight possible positions, from the top left (a) to the middle left (h), and the 390625 texture units can then be labeled by the above formula under eight different orderings (from a to h). A closer study of the texture unit shows that the TUs that fail to occur are those involving twos, since a two requires a neighbor and the central pixel to have exactly the same value. When twos are lacking, the TU takes only the values 0, 1, 3 and 4, which means that the number of textures actually realized is 4^8 = 65536 instead of 5^8 = 390625; the spectrum is never totally covered and the power of the method is under-used. This also affects the Texture Unit Number. To overcome this, fuzzy texture is used in the proposed method. The fuzzy texton describes the texture property better than the plain texton because it uses the fuzzy texture unit (FTU) instead of the texture unit (TU). Including the fuzzy concept in the texton generates all ranges of values, including '2', in the FTU [20]; as a result 5^8 different textures can be represented instead of 4^8, covering the total spectrum, which is not possible with the texton alone (i.e., without fuzziness) [21]. In the proposed method (TFT), texton images are obtained by applying the 12 types of 3x3 texton templates [22], shown in Figure 3, on the HSV planes of the image. We use the Fuzzy Texture Unit Boxes (FTUB) and Fuzzy Texture Unit Numbers (FTUN) introduced by Aina Barceló et al. [20]. In this work, FTUB and FTUN are used when quantifying the texton images.
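Before moving to the fuzzy case, the following is a minimal sketch of the crisp Texture Unit and its base-5 label from Eqs. (1) and (2). The clockwise neighbor ordering and the handling of values exactly equal to the thresholds p and q are assumptions, since the text leaves those boundary cases open.

```python
import numpy as np

def texture_unit(neighborhood, p, q):
    """Compute the crisp Texture Unit of Eq. (1) for one 3x3 neighborhood.

    `neighborhood` is a 3x3 array; V0 is its central pixel and V1..V8 its
    eight neighbors, read clockwise from the top-left corner (one of the
    eight orderings described in the text).  p and q are the user-defined
    thresholds.
    """
    v0 = neighborhood[1, 1]
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    tu = []
    for r, c in idx:
        vi = neighborhood[r, c]
        if vi == v0:
            e = 2
        elif vi < v0:
            e = 0 if vi < p else 1   # tie vi == p assigned to 1 (assumption)
        else:
            e = 4 if vi > q else 3   # tie vi == q assigned to 3 (assumption)
        tu.append(e)
    return tu

def texture_unit_number(tu):
    """Label a Texture Unit with its base-5 Texture Unit Number, Eq. (2)."""
    return sum(e * 5 ** i for i, e in enumerate(tu))  # 5^(i-1), i = 1..8

if __name__ == "__main__":
    block = np.array([[3, 5, 5],
                      [2, 4, 6],
                      [1, 4, 9]])
    tu = texture_unit(block, p=2, q=7)
    print(tu, texture_unit_number(tu))   # N_TU lies in [0, 5^8 - 1]
```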
The resultant textons are then called fuzzy texton images. Using fuzzy techniques provides a more flexible way of assigning values E_i to the TU boxes. From now on, E_i is not a single value 0, 1, 2, 3 or 4, but carries all five values at the same time, each with its own degree. Each degree is calculated with the aid of a membership function that has to be defined. We therefore consider Fuzzy Texture Unit Boxes (FTUB) FE_i, defined as follows:

$$FE_i = \{\mu_0(V_i),\ \mu_1(V_i),\ \mu_2(V_i),\ \mu_3(V_i),\ \mu_4(V_i)\}, \qquad 1 \le i \le 8 \tag{3}$$

where \mu_0(V_i), \mu_1(V_i), \mu_2(V_i), \mu_3(V_i) and \mu_4(V_i) are the membership degrees of V_i [20] to the fuzzy sets 0, 1, 2, 3 and 4, respectively. In a similar way to the TU, the Fuzzy Texture Unit (FTU) is defined by:

$$FTU = \{FE_1, FE_2, FE_3, FE_4, FE_5, FE_6, FE_7, FE_8\} \tag{4}$$

As part of the fuzzy texton detection process, our 3x3 grids can detect textons in all directions and also at the corners of the textures. If the three highlighted pixels of a template have the same value, the grid forms a fuzzy texton, as shown in Figure 4.

# b) Edge Orientation using the Discrete Shearlet Transform

The approach used in this paper is based on a multiscale transform called the shearlet transform. It is a multidimensional version of the traditional wavelet transform, especially designed to capture anisotropic and directional information at various scales. Indeed, the traditional wavelet approach, which is based on isotropic dilations, has a very limited capability to account for the geometry of multidimensional functions. In contrast, the analyzing functions associated with the shearlet transform are highly anisotropic and, unlike traditional wavelets, are defined at various scales, locations and orientations. As a consequence, this transform provides an optimally efficient representation of images with edges [23]. The shearlet transform has similarities to the curvelet transform; shearlets and curvelets are, in fact, the only two systems mathematically known to provide optimally sparse representations of images with edges, and the implementations of the curvelet transform correspond to essentially the same frequency tiling as that of the shearlet transform. Both systems are related to contourlets [26], [27] and steerable filters [28], [29]. We refer to [30] for more details on the comparison of shearlets with other orientable multiscale transforms. In this paper, we combine the shearlet framework with several well-established ideas from the image processing literature to obtain improved and computationally efficient algorithms for edge analysis and detection. Our approach may be viewed as a truly multidimensional refinement of the approach of Mallat et al., where the isotropic wavelet transform is replaced by an anisotropic directional multiscale transform. As a result, the shearlet transform acts as a multiscale directional difference operator and provides a number of very useful features: improved accuracy in the detection of edge orientation; the precise capture of edge geometry through anisotropic dilations and multiple orientations; and a multiscale structure based on the same affine mathematical framework as traditional wavelets.
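As a rough numerical illustration of the affine parameterization just described, the sketch below evaluates a single analyzing function obtained from a generator by anisotropic dilation, shearing and translation. The generator `psi_demo` is a Gaussian-derivative stand-in, not the band-limited shearlet generator assumed by the theory, and this routine is illustrative only; it is not the discrete shearlet transform used in the paper.

```python
import numpy as np

def shearlet_atom(psi, a, s, t, grid_x, grid_y):
    """Evaluate psi_{a,s,t}(x) = a^(-3/4) * psi(A_a^{-1} S_s^{-1} (x - t)).

    A_a = [[a, 0], [0, sqrt(a)]] is the anisotropic (parabolic) dilation and
    S_s = [[1, s], [0, 1]] the shear; together they control the scale and
    orientation of the analyzing function, while t controls its location.
    """
    A_inv = np.array([[1.0 / a, 0.0],
                      [0.0, 1.0 / np.sqrt(a)]])
    S_inv = np.array([[1.0, -s],
                      [0.0, 1.0]])
    M_inv = A_inv @ S_inv
    x1 = grid_x - t[0]            # shift every grid point by the translation
    x2 = grid_y - t[1]
    u1 = M_inv[0, 0] * x1 + M_inv[0, 1] * x2
    u2 = M_inv[1, 0] * x1 + M_inv[1, 1] * x2
    return a ** (-0.75) * psi(u1, u2)

def psi_demo(u1, u2):
    # Stand-in generator (a directional Gaussian derivative), used only to
    # make the sketch runnable; the actual shearlet generator is band-limited.
    return u1 * np.exp(-(u1 ** 2 + u2 ** 2) / 2.0)

if __name__ == "__main__":
    xs, ys = np.meshgrid(np.linspace(-4, 4, 128), np.linspace(-4, 4, 128))
    # a small 'a' gives a fine, elongated atom; varying 's' shears (rotates) it
    atom = shearlet_atom(psi_demo, 0.25, 0.5, (0.0, 0.0), xs, ys)
    print(atom.shape, float(np.abs(atom).max()))
```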
The discretization of the shearlet transform provides a stable and computationally efficient decomposition and reconstruction algorithm for images. An algorithm for edge detection based on shearlets was introduced in [24,25], where a discrete shearlet transform was described with properties specifically designed for this task. The discrete shearlet transform used for image denoising produces large side lobes around prominent edges, which interfere with the detection of the edge location. By contrast, the special discrete shearlet transform introduced in [24,25] is not affected by this issue, since the analysis filters are chosen to be consistent with the theoretical results in [31,32], which require that the shearlet generating function \psi satisfy certain specific symmetry properties in the Fourier domain. The first step of the shearlet edge detector consists in selecting the edge point candidates of a digital image u[m_1, m_2]. They are identified as those points (\bar{m}_1, \bar{m}_2) which, at fine scales j, are local maxima of the function

$$M_j u[m_1, m_2]^2 = \sum_{k} \big| SHu[j, k, m_1, m_2] \big|^2 \tag{5}$$

where SHu[j, k, m_1, m_2] denotes the discrete shearlet transform. According to the properties of the continuous shearlet transform summarized above, we expect that, if (\bar{m}_1, \bar{m}_2) is an edge point, the discrete shearlet transform of u will behave as

$$\big| SHu[j, k, \bar{m}_1, \bar{m}_2] \big| \sim C\, 2^{-\beta j} \tag{6}$$

where \beta \ge 0. If, however, \beta < 0 (in which case the size of |SHu| increases at finer scales), then (\bar{m}_1, \bar{m}_2) is recognized as a spike singularity and the point is classified as noise. Using this procedure, edge point candidates for each of the oriented components are found by identifying the points for which \beta \ge 0. Next, a non-maximal suppression routine is applied to these points to trace along the edge in the edge direction and suppress any pixel value that is not considered to be an edge. At each edge point candidate, the magnitude of the shearlet transform is compared with the values of its neighbors along the gradient direction (obtained from the orientation map of the shearlet decomposition). If the magnitude is smaller, the point is discarded; if it is the largest, it is kept. Extensive numerical experiments have shown that the shearlet edge detector is very competitive against other classical and state-of-the-art edge detectors, and its performance is very robust in the presence of noise. An example is displayed in Figure 12, where the shearlet edge detector is compared against the wavelet edge detector (which is essentially equivalent to the Canny edge detector) and the Sobel and Prewitt edge detectors. Note that both the Sobel and Prewitt filters are 2D discrete approximations of the gradient operator. The performance of the edge detectors is assessed using Pratt's Figure of Merit (FOM), a fidelity function ranging from 0 to 1, where 1 corresponds to a perfect edge detector. It is defined as

$$FOM = \frac{1}{\max(N_a, N_d)} \sum_{k=1}^{N_d} \frac{1}{1 + \alpha\, d(k)^2} \tag{7}$$

where N_a is the number of actual edge points, N_d is the number of detected edge points, d(k) is the distance from the k-th detected edge point to the nearest actual edge point, and \alpha is a scaling constant typically set to 1/9. The numerical tests reported in the figures show that the shearlet edge detector consistently yields the best value of the FOM.
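A direct sketch of Eq. (7) is given below; the brute-force nearest-distance search is an implementation choice made for clarity (a distance transform would be used on large images), and the one-pixel-offset demo edge maps are illustrative only.

```python
import numpy as np

def pratt_fom(actual_edges, detected_edges, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit, Eq. (7).

    `actual_edges` and `detected_edges` are boolean edge maps of equal shape.
    For every detected edge pixel, d(k) is taken as the Euclidean distance to
    the nearest actual edge pixel.
    """
    actual = np.argwhere(actual_edges)
    detected = np.argwhere(detected_edges)
    n_a, n_d = len(actual), len(detected)
    if n_a == 0 or n_d == 0:
        return 0.0
    total = 0.0
    for k in range(n_d):
        d = np.sqrt(((actual - detected[k]) ** 2).sum(axis=1)).min()
        total += 1.0 / (1.0 + alpha * d ** 2)
    return total / max(n_a, n_d)

if __name__ == "__main__":
    truth = np.zeros((32, 32), dtype=bool)
    truth[16, :] = True                      # a horizontal ideal edge
    found = np.zeros_like(truth)
    found[17, :] = True                      # detected one pixel off
    print(pratt_fom(truth, found))           # close to, but below, 1.0
```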
These properties lead directly to a very effective algorithm for estimating the edge orientation, which was originally introduced in [25]. Specifically, by taking advantage of the parameter associated with the orientation variable in the shearlet transform, the edge orientations of an image u can be estimated by searching for the value of the shearing variable s which maximizes SH_\psi u(a, s, p) at an edge point p, when a is sufficiently small. In the discrete setting, this is obtained by fixing a sufficiently fine scale j (i.e., a = 2^{-2j} sufficiently "small") and computing the index \hat{l} which maximizes the magnitude of the discrete shearlet transform SHu[j, l, p]:

$$\hat{l}(j, p) = \arg\max_{l}\ \big| SHu[j, l, p] \big| \tag{8}$$

Once this index is found, the corresponding angle of orientation \theta_{\hat{l}}(j, p) associated with \hat{l}(j, p) can easily be computed. As illustrated in [25], this approach leads to a very accurate and robust estimation of the local orientation of the edge curves. To illustrate the general principle, consider the simple image in Figure 13, consisting of large smooth regions separated by piecewise smooth curves. The junction point A, where three edges intersect, is certainly the most prominent object in the image, and it can easily be identified by looking at values of the shearlet transform. In fact, if one examines the discrete shearlet transform SHu[j_0, l, p_0] at a fixed (fine) scale j_0 and location p_0, as a function of the shearing parameter l, the plot immediately identifies the local geometric properties of the image. Specifically, as illustrated in Figure 13(b), one can recognize the following four classes of points inside the image. At the junction point p_0 = A, the function |SHu[j_0, l, p_0]| exhibits three peaks, corresponding to the orientations of the three edge segments converging at A; at the point p_0 = B, located on a smooth edge, |SHu[j_0, l, p_0]| has a single peak; at a point p_0 inside a smooth region, |SHu[j_0, l, p_0]| is essentially flat; finally, at a point p_0 "close" to an edge, |SHu[j_0, l, p_0]| exhibits two peaks, but they are much smaller in amplitude than those for the points A and B. A similar behaviour is observed, as expected, for more general images, even in the presence of noise. Based on these observations, a simple and effective algorithm for classifying smooth regions, edges, corners and junction points of an image was proposed and validated in [52].

# c) Support Vector Machine for Relevance Feedback

Consider the binary classification problem {(x_i, y_i)}, i = 1, ..., N, where the x_i are the labeled patterns and y_i \in {-1, +1} the corresponding labels. Based on this training set, we want to train an SVM classifier. The SVM classifier maps the patterns to a new space, called the kernel space, using a transformation x -> \phi(x), in order to obtain a potentially better representation of them. This new space can be nonlinear and of much higher dimension than the initial one. After the mapping, a linear decision boundary is computed in the kernel space. In the SVM methodology, the classification problem is addressed by maximizing the margin, defined as the smallest distance, in the kernel space, between the decision boundary and any of the training patterns. This is achieved by solving the following quadratic programming problem:

$$\max_{a_i}\ \sum_{i=1}^{N} a_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} a_i a_j y_i y_j\, k(x_i, x_j)$$

over a_i, i = 1, ..., N, subject to

$$0 \le a_i \le C \quad \text{and} \quad \sum_{i=1}^{N} a_i y_i = 0 \tag{10}$$
where

$$k(x_i, x_j) = \phi(x_i)^{T} \phi(x_j) \tag{11}$$

is the kernel function and C is a parameter controlling the trade-off between training error and model complexity. The most popular non-linear kernel functions used for SVMs belong to the class of Radial Basis Functions (RBFs). Among the RBF functions, the most commonly used is the Gaussian RBF, defined by:

$$k(x_i, x_j) = \exp\!\left(-\gamma \,\| x_i - x_j \|^2 \right)$$

After the training of the classifier, the value of the decision function for a new pattern x is computed by:

$$y(x) = \sum_{i=1}^{N} a_i y_i\, k(x_i, x) + b$$

where b is a bias parameter whose value can easily be determined after the solution of the optimization problem [33]. The value |y(x)| is proportional to the distance of the input pattern x from the decision boundary. Thus, y(x) can be regarded as a measure of confidence about the class of x, with large positive (respectively, large negative) values strongly indicating that x belongs to the class denoted by +1 (respectively, -1). In contrast, values of y(x) around zero provide little information about the class of x.

# d) Extraction of Texels

After the extraction of the fuzzy texton images, texels have to be extracted from them. The local properties [18] used to build the feature vectors fall into two categories: one concerns color information and the other texture. The important texture (cluster) properties are local homogeneity, cluster shade and cluster prominence, computed from the co-occurrence matrix P(i, j):

i) Local homogeneity

$$\sum_{i,j=0}^{G} \frac{1}{1 + (i-j)^2}\, P(i,j) \tag{14}$$

ii) Cluster shade

$$\sum_{i,j=0}^{G} \big(i - M_x + j - M_y\big)^{3}\, P(i,j) \tag{15}$$

iii) Cluster prominence

$$\sum_{i,j=0}^{G} \big(i - M_x + j - M_y\big)^{4}\, P(i,j) \tag{16}$$

where $M_x = \sum_{i,j=0}^{G} i\, P(i,j)$ and $M_y = \sum_{i,j=0}^{G} j\, P(i,j)$.

There are three important properties regarding color information: color expectancy, color variance and skewness.

i) Color expectancy

$$E_i = \frac{1}{N} \sum_{j=1}^{N} p_{ij} \tag{17a}$$

ii) Color variance

$$\sigma_i = \left( \frac{1}{N} \sum_{j=1}^{N} \big(p_{ij} - E_i\big)^{2} \right)^{\frac{1}{2}} \tag{17b}$$

iii) Skewness

$$s_i = \left( \frac{1}{N} \sum_{j=1}^{N} \big(p_{ij} - E_i\big)^{3} \right)^{\frac{1}{3}} \tag{18}$$

In the framework of CBIR with relevance feedback (RF), in each round of RF we have to solve a classification problem like the one described above, where a number of images, represented as feature vectors, correspond to the feedback examples provided by the user so far, and each image is labeled by -1 or +1, corresponding to irrelevant or relevant, respectively. The initial query is considered to be one of the relevant images and is labeled by +1. From the above, it is clear that we can train an SVM classifier on the feedback examples and use it to distinguish between the classes of relevant and irrelevant images. Each image in the database is presented to the trained classifier, and the value of the decision function y(x) defined above is used as the ranking criterion: the higher the value of the decision function for an image, the more relevant this image is considered by the system. A minimal code sketch of this ranking step, together with the distance and evaluation measures defined below, is given at the end of the next section.

# III. Results and Discussion

The effectiveness of the proposed retrieval system is evaluated on a fundus image database, a skin cancer image database and an endoscopy image database. Sample images from the fundus image database are shown in Figure 5, sample skin cancer images in Figure 6, and sample endoscopy images in Figure 7.

# a) Distance Measure

Different distance metrics can be used for matching. An N-dimensional feature vector F = [F_1, F_2, ..., F_N] is extracted from every image of the database and stored. Let Q = [Q_1, Q_2, ..., Q_N] be the feature vector of the query image. A simple distance measure [34], whose time complexity is very low compared with others such as the Euclidean distance (no square or square-root operations) when large databases are considered, is given by

$$D(F, Q) = \sum_{i=1}^{N} \frac{|F_i - Q_i|}{1 + F_i + Q_i} \tag{19}$$

# b) Performance Measure

The most common measurements used to evaluate the performance of image retrieval methods are precision, recall and accuracy curves [35].

Precision

$$P(N) = \frac{I_N}{N} \tag{20}$$

Recall

$$R(N) = \frac{I_N}{M} \tag{21}$$
Accuracy

$$A(N) = \frac{P(N) + R(N)}{2} \tag{22}$$

where I_N is the number of images retrieved in the top N positions that are similar to the query image and M is the total number of images in the database that are similar to the query image. In all cases the proposed framework provides higher precision, recall and accuracy.

Figure 1: The CBIR system. It consists of two phases: a database building (off-line) phase and a query processing (on-line) phase.

Figure 2: (a) Hue levels of an image part. (b) Texture Unit associated to the central pixel. (c) Texture Unit ordering.

Figure 3: 12 types of 3x3 texton templates.

Figure 4: Fuzzy texton detection process.

Figure 5: Fundus images.

Figure 6: Skin images.

Figure 7: Endoscopy images.
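The sketch below ties together the SVM relevance-feedback ranking of Section II-c with the distance and evaluation measures of Eqs. (19)-(22). It assumes the feature vectors (fuzzy texton, shearlet and texel features in the paper) have already been extracted; here random stand-ins are used. scikit-learn's SVC is used as a generic RBF-SVM, and the parameters C and gamma are placeholders, not the settings used in the experiments.

```python
import numpy as np
from sklearn.svm import SVC

def rank_by_svm(feedback_feats, feedback_labels, database_feats, C=1.0, gamma=1.0):
    """Train a Gaussian-RBF SVM on the user's feedback (+1 relevant,
    -1 irrelevant) and rank every database image by the decision value y(x)."""
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    clf.fit(feedback_feats, feedback_labels)
    scores = clf.decision_function(database_feats)   # larger => more relevant
    return np.argsort(-scores), scores

def simple_distance(f, q):
    """Distance measure of Eq. (19): sum |F_i - Q_i| / (1 + F_i + Q_i)."""
    f, q = np.asarray(f, float), np.asarray(q, float)
    return np.sum(np.abs(f - q) / (1.0 + f + q))

def precision_recall_accuracy(retrieved_ids, relevant_ids, top_n):
    """Eqs. (20)-(22): precision, recall and their average ('accuracy')."""
    top = set(retrieved_ids[:top_n])
    i_n = len(top & set(relevant_ids))   # relevant images in the top N
    m = len(relevant_ids)                # relevant images in the database
    p = i_n / top_n
    r = i_n / m if m else 0.0
    return p, r, (p + r) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.normal(size=(100, 16))               # stand-in feature vectors
    fb_x = np.vstack([db[:3] + 0.05, db[50:53]])  # 3 relevant, 3 irrelevant
    fb_y = np.array([1, 1, 1, -1, -1, -1])
    order, _ = rank_by_svm(fb_x, fb_y, db)
    print(precision_recall_accuracy(order, relevant_ids=range(10), top_n=10))
```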
# References

* Y. Rui, T. S. Huang, S.-F. Chang, "Image retrieval: current techniques, promising directions, and open issues," Journal of Visual Communication and Image Representation, vol. 10, 1999.
* A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, R. Jain, "Content-based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, 2000.
* R. Datta, D. Joshi, J. Li, J. Z. Wang, "Image retrieval: ideas, influences, and trends of the new age," ACM Computing Surveys, vol. 40, no. 2, April 2008.
* H. Müller, N. Michoux, D. Bandon, A. Geissbuhler, "A review of content-based image retrieval systems in medical applications - clinical benefits and future directions," International Journal of Medical Informatics, vol. 73, 2004.
* C. Brodley, A. Kak, C. Shyu, J. Dy, L. Broderick, A. M. Aisen, "Content-based retrieval from medical image databases: a synergy of human interaction, machine learning and computer vision," Proceedings of the 16th National Conference on Artificial Intelligence and the 11th Innovative Applications of Artificial Intelligence Conference, 1999.
* M. Rao, B. Rao, A. Govardhan, "Content based image retrieval using dominant color, texture and shape," International Journal of Engineering Science and Technology (IJEST).
* T. Gevers, A. W. M. Smeulders, "PicToSeek: combining color and shape invariant features for image retrieval," IEEE Trans. Image Processing, vol. 9, no. 1, Jan. 2000.
* A. Khotanzad, Y. H. Hong, "Invariant image recognition by Zernike moments," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 12, no. 5, 1990.
* C. Sagiv, N. A. Sochen, Y. Y. Zeevi, "Integrated active contours for texture segmentation," IEEE Transactions on Image Processing, vol. 16, no. 6, 2006.
* P. A. Mlsna, N. M. Sirakov, "Intelligent shape feature extraction and indexing for efficient content-based medical image retrieval," Proc. of IEEE Computer Based Medical Systems, Houston, TX, June 23-24, 2004.
* A. Folkers, H. Samet, "Content-based image retrieval using Fourier descriptors on a logo database," Proc. of the 16th Int. Conf. on Pattern Recognition, vol. III, Quebec City, Canada, August 2002.
* V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
* C. R. Shyu, C. E. Brodley, A. C. Kak, A. Kosaka, A. M. Aisen, L. S. Broderick, "ASSERT - a physician-in-the-loop content-based retrieval system for HRCT image databases," Computer Vision and Image Understanding, vol. 75, no. 1/2, 1999.
* D. Keysers, J. Dahmen, H. Ney, B. B. Wein, T. M. Lehmann, "Statistical framework for model-based image retrieval in medical applications," J. Electron. Imaging, vol. 12, 2003.
* T. Deselaers, Features for Image Retrieval, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany, 2003.
* Y. L. Chang, X. Li, "Adaptive image region-growing," IEEE Transactions on Image Processing, vol. 3, no. 6, 1994.
* A. Nabout, Modular Concept and Method for Knowledge Based Recognition of Complex Objects in CAQ Applications, VDI Publisher, Series 20, no. 92, 1993.
* H. Freeman, "Techniques for the digital computer analysis of chain-encoded arbitrary plane curves," Proceedings of the National Electronics Conference, vol. 17, 1961.
* Pitas, Digital Image Processing, John Wiley & Sons, New York, NY, USA, 2000.
* Y. Rui, T. S. Huang, S.-F. Chang, "Image retrieval: current techniques, promising directions, and open issues," Journal of Visual Communication and Image Representation, vol. 10, no. 1, 1999. doi:10.1109/83.336259.
* J. R. Smith, S.-F. Chang, "VisualSEEk: a fully automated content-based image query system," Proceedings of the 4th ACM International Conference on Multimedia (MULTIMEDIA '96), Boston, Mass, USA, November 1996.
* W. Y. Ma, H. J. Zhang, "Content-based image indexing and retrieval," in Handbook of Multimedia Computing, CRC Press, Boca Raton, Fla, USA, 1999.
* K. Guo, D. Labate, "Optimally sparse multidimensional representation using shearlets," SIAM J. Math. Anal., vol. 39, 2007.
* S. Yi, D. Labate, G. R. Easley, H. Krim, "Edge detection and processing using shearlets," Proc. IEEE Int. Conference on Image Processing, San Diego, October 12-15, 2008.
* S. Yi, D. Labate, G. R. Easley, H. Krim, "A shearlet approach to edge analysis and detection," IEEE Trans. Image Processing, vol. 18, no. 5, 2009.
* M. N. Do, M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Trans. Image Processing, vol. 14, no. 12, Dec. 2005.
* D. D. Po, M. N. Do, "Directional multiscale modeling of images using the contourlet transform," IEEE Trans. Image Processing, vol. 15, no. 6, June 2006.
* W. Freeman, E. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. and Machine Intell., vol. 13, 1991.
* E. Simoncelli, W. Freeman, "The steerable pyramid: a flexible architecture for multi-scale derivative computation," Proc. IEEE ICIP, Washington, DC, 1995.
* J. Luo, D. Crandall, "Color object detection using spatial-color joint probability functions," IEEE Transactions on Image Processing, vol. 15, no. 6, 2006.
* K. Guo, D. Labate, "Characterization and analysis of edges using the continuous shearlet transform," SIAM J. Imaging Sciences, vol. 2, 2009.
* K. Guo, D. Labate, W. Lim, "Edge analysis and identification using the continuous shearlet transform," Appl. Comput. Harmon. Anal., vol. 27, 2009.
* D. Zhang, G. Lu, "Review of shape representation and description techniques," Pattern Recognition, vol. 37, no. 1, 2004.
* C. Palm, "Color texture classification by integrative co-occurrence matrices," Pattern Recognition, vol. 37, no. 5, 2004.
* G.-H. Liu, J.-Y. Yang, "Image retrieval based on the texton co-occurrence matrix," Pattern Recognition, vol. 41, no. 12, 2008.