# Introduction

Image segmentation is a fundamental task in many applications. Among various techniques, the active contour model is widely used: a contour is evolved by minimizing certain energies to match the object boundary while preserving the smoothness of the contour [2]. The active contour is usually represented by landmarks [18] or level sets [20,8]. A variety of image features have been used to guide the active contour, typically including image gradient [7,31], region statistics [34,8], and color and texture [14]. In real applications, the performance of the active contour model is prone to be degraded by missing or misleading features. For example, segmentation of the left ventricle in ultrasound images is still an unresolved problem due to the characteristic artifacts in ultrasound such as attenuation, speckle, and signal dropout [23].

To improve the robustness of active contours, a shape prior is often used: the prior knowledge of the shape to be segmented is modeled from a set of manually annotated shapes and used to guide the segmentation. Previous deformable template models [32,27,17,21] can be regarded as early efforts towards knowledge-based segmentation. In more recent works, the shape prior was applied by regularizing the distance from the active contour to the template in a level-set framework [10,24,9]. Another category of methods popularly used for shape prior modeling is the active shape model, or point distribution model [11]. Briefly speaking, each shape is denoted by a vector and regarded as a point in the shape space. Principal component analysis is then carried out to obtain the mean and the most significant modes of shape variation, which establish a low-dimensional space describing the favorable shapes. During the segmentation of a new image, the candidate shape is constrained to this shape space [19,29]. Dynamic models can also be integrated to model the temporal continuity when tracking an object in a sequence [12,35]. Other extensions of the active shape model include manifold learning [15] and sparse representation [33], to name a few.

While the shape prior has proven to be a powerful tool in segmentation, it has a notable limitation: previous methods for shape prior modeling require a large set of annotated data, which is not always accessible in practice. We applied the proposed method to sequences of surveillance face images and demonstrated that the Diminutive sequence optimality regularization can significantly improve the robustness of the active contour model.

The rest of this paper is organized as follows: Section 2 introduces the basic theory and the formulation of our method. Section 3 describes the algorithm to solve our model. Section 4 demonstrates the merits of our method by experiments. Finally, Section 5 concludes the paper with some discussions.

# II. Formulation

# a) Diminutive Sequence Optimality Measure

To apply a Diminutive sequence optimality constraint to active contours, a proper measure of how similar a set of shapes is to each other is needed. Typically, the similarity between two contours is measured by computing the distances between corresponding points on the contours, and the Diminutive sequence optimality can then be calculated as the sum of pair-wise distances among the contours. The main drawback of this approach is that the contour distance is not invariant under similarity transformations. Here, we propose to use the matrix rank to measure the Diminutive sequence optimality of shapes.
Suppose each shape is represented by a vector, so that multiple shapes form a matrix. Intuitively, the rank of this matrix measures the correlation among the shapes. For example, the rank equals 1 if the shapes are identical, and the rank may increase if some shapes change. Moreover, we can show that the shape matrix is still low-rank if the shape change is due to a similarity transformation such as translation, scaling, or rotation. For example, let the contour vectors $C_1, \ldots, C_n \in \mathbb{R}^{p}$ be similarity-transformed copies of a single contour $C$; then the matrix $[C_1, \ldots, C_n] \in \mathbb{R}^{p \times n}$ has the following property:

$$\mathrm{rank}([C_1, \ldots, C_n]) \le 6 \qquad (1)$$

Intrinsically, the rank of the shape matrix describes the degrees of freedom of the shape change. The low-rank constraint allows global changes of the contours such as translation, scaling, rotation, and principal deformation to fit the image data, while truncating the local variation caused by image defects.

# b) Objective Function

Given a diminutive sequence of images $I_1, \ldots, I_n$, we try to find a set of contours $C_1, \ldots, C_n$ to segment the object in these images. To keep the contours similar to each other, we propose to segment the images by

$$\min_{X} \sum_{i=1}^{n} f_i(C_i) \quad \text{subject to} \quad \mathrm{rank}(X) \le K \qquad (2)$$

where $X = [C_1, \ldots, C_n]$ and $K$ is a predefined constant. $f_i(C_i)$ is the energy of an active contour model used to evolve the contour in each frame, such as the snake [18], the geodesic active contour [7], or region-based models [34,8]. For example, the region-based energy in [8] reads

$$f_i(C_i) = \int_{\Omega_1} (I_i(x) - u_1)^2 \, dx + \int_{\Omega_2} (I_i(x) - u_2)^2 \, dx + \nu \cdot \mathrm{length}(C_i) \qquad (3)$$

where $\Omega_1$ and $\Omega_2$ represent the regions inside and outside the contour, and $u_1$ and $u_2$ denote the mean intensities of $\Omega_1$ and $\Omega_2$, respectively.

Since the rank is a discrete operator which is both difficult to optimize and too rigid as a regularizer, we propose to use the following relaxed form as the objective function:

$$\min_{X} \sum_{i=1}^{n} f_i(C_i) + \lambda \|X\|_* \qquad (4)$$

Here, $\mathrm{rank}(X)$ in (2) is replaced by the nuclear norm $\|X\|_*$, i.e. the sum of the singular values of $X$. Recently, nuclear norm minimization has been widely used in low-rank modeling such as matrix completion [6] and robust principal component analysis [5]. As a tight convex surrogate to the rank operator [16], the nuclear norm has several good properties. Firstly, its convexity makes it possible to develop fast and convergent optimization algorithms. Secondly, the nuclear norm is a continuous function, which is important for a well-behaved regularizer in many applications. For instance, in our problem, a small perturbation of the shapes may result in a large increase of $\mathrm{rank}(X)$, while $\|X\|_*$ may barely change.
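To make the low-rank property in (1) and the relaxation in (4) concrete, the following minimal sketch (our illustrative example, not code from the paper, whose implementation is in Java; it assumes NumPy and a landmark representation with each contour flattened into one column) builds a shape matrix from similarity-transformed copies of one contour, checks its rank, and shows that a small perturbation barely changes the nuclear norm while the rank jumps to full.

```python
import numpy as np

def similarity_transform(points, scale, theta, tx, ty):
    """Apply a 2D similarity transform (rotation, scaling, translation) to p x 2 landmarks."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * points @ R.T + np.array([tx, ty])

rng = np.random.default_rng(0)
p = 50                                    # landmarks per contour
base = rng.standard_normal((p, 2))        # a reference contour C

# X = [C_1, ..., C_n]: each column is one similarity-transformed contour, flattened.
n = 10
cols = [similarity_transform(base,
                             rng.uniform(0.5, 2.0),        # scale
                             rng.uniform(0.0, 2 * np.pi),  # rotation
                             *rng.uniform(-5, 5, size=2)   # translation
                             ).reshape(-1)
        for _ in range(n)]
X = np.stack(cols, axis=1)                # shape (2p, n)

print(np.linalg.matrix_rank(X))           # small, consistent with the bound in (1)

# The rank is brittle: tiny noise makes X full-rank, while the nuclear norm
# (the sum of singular values used in (4)) changes only slightly.
X_noisy = X + 1e-3 * rng.standard_normal(X.shape)
print(np.linalg.matrix_rank(X_noisy))
print(np.linalg.norm(X, 'nuc'), np.linalg.norm(X_noisy, 'nuc'))
```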
# III. Algorithm

In this section, we discuss how to solve the optimization problem in (4). Without the nuclear-norm regularizer $\|X\|_*$, (4) can be locally minimized by gradient descent, which gives the curve evolution steps of typical active contour models. In our model, it is difficult to apply gradient descent directly because the nuclear norm is non-smooth and its gradient is hard to compute. Recently, the Proximal Gradient (PG) method [1,22] has been used to solve the following class of problems:

$$\min_{X} F(X) + R(X) \qquad (5)$$

where $F(X)$ is a differentiable function and $R(X)$ is a convex penalty which can be non-smooth. Our problem is in this category, with $F(X) = \sum_{i=1}^{n} f_i(C_i)$ and $R(X) = \lambda \|X\|_*$. The basic step in the PG method is to make the following quadratic approximation to $F(X)$ based on the previous estimate $X'$ in each iteration:

$$Q_{\mu}(X, X') = F(X') + \langle \nabla F(X'), X - X' \rangle + \frac{\mu}{2} \|X - X'\|_F^2 + R(X) = \frac{\mu}{2} \left\| X - \left[ X' - \tfrac{1}{\mu} \nabla F(X') \right] \right\|_F^2 + R(X) + \mathrm{const} \qquad (6)$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product, $\|\cdot\|_F$ denotes the Frobenius norm, and $\mu$ is a constant. It is shown in [22] that, if $F(X)$ is differentiable with a Lipschitz continuous gradient, the sequence generated by the following iteration converges to a stationary point of the function in (5) with a convergence rate of $O(1/k)$:

$$X^{k+1} = \arg\min_{X} Q_{\mu}(X, X^k) = \arg\min_{X} \frac{\mu}{2} \left\| X - \left[ X^k - \tfrac{1}{\mu} \nabla F(X^k) \right] \right\|_F^2 + R(X) \qquad (7)$$

The next question is how to solve the update step in (7). For our problem, we use the following lemma, proven in [4], to derive the proposed hastened propinquity changeover (accelerated proximal gradient) algorithm.

# Lemma 1

Given $Z \in \mathbb{R}^{m \times n}$, the solution to the problem

$$\min_{X} \frac{1}{2} \|Z - X\|_F^2 + \lambda \|X\|_* \qquad (8)$$

is given by $X^* = D_{\lambda}(Z)$, where

$$D_{\lambda}(Z) = \sum_{i=1}^{\min(m,n)} (\sigma_i - \lambda)_+ \, u_i v_i^T \qquad (9)$$

with $\sigma_i$, $u_i$, and $v_i$ the singular values and the corresponding left and right singular vectors of $Z$, and $(\cdot)_+ = \max(\cdot, 0)$.

The intuition of our algorithm is that, in each iteration, we first evolve the active contours according to the image-based forces and then impose the Diminutive sequence optimality regularization via singular value thresholding. The overall algorithm is summarized below.

Hastened propinquity changeover (accelerated proximal gradient) algorithm:

1. Initialize $X^{0} = X^{-1}$ and set $t^{0} = t^{-1} = 1$
2. for $k = 0$ to the maximum number of iterations do
3. compute $Y^k = X^k + \frac{t^{k-1} - 1}{t^k}(X^k - X^{k-1})$
4. for $i = 1$ to $n$ do
5. $y_i^k \leftarrow y_i^k - \frac{1}{\mu} \nabla f_i(y_i^k)$
6. end for
7. $X^{k+1} = D_{\lambda/\mu}(Y^k)$
8. $t^{k+1} = \frac{1 + \sqrt{1 + 4 (t^k)^2}}{2}$
9. if $\|X^{k+1} - X^k\|$ is sufficiently small, stop
10. end for

Here $y_i^k$ denotes the $i$-th column of $Y^k$, i.e. the contour for frame $i$.
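As an illustration of Lemma 1 and the accelerated iteration above (again a NumPy sketch rather than the paper's Java implementation; the quadratic `grad_f` below is a hypothetical stand-in for the image-force gradients $\nabla f_i$ of the chosen active contour energy), the singular value thresholding operator $D_{\lambda}$ of (9) and the outer loop of steps 1-10 can be written as:

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding D_tau(Z) = sum_i (sigma_i - tau)_+ u_i v_i^T, Eq. (9)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def accelerated_proximal_gradient(X0, grad_f, lam, mu, max_iter=200, tol=1e-6):
    """Minimize sum_i f_i(C_i) + lam * ||X||_*  (Eq. 4) for X = [C_1, ..., C_n].

    grad_f(X) must return the column-wise gradients of the image energies at X;
    mu is the inverse step length of the curve evolution.
    """
    X_prev, X = X0.copy(), X0.copy()      # step 1: X^0 = X^{-1}
    t_prev, t = 1.0, 1.0                  # step 1: t^0 = t^{-1} = 1
    for _ in range(max_iter):             # step 2
        Y = X + ((t_prev - 1.0) / t) * (X - X_prev)        # step 3: momentum
        Y = Y - grad_f(Y) / mu                             # steps 4-6: evolve every contour
        X_next = svt(Y, lam / mu)                          # step 7: low-rank regularization
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # step 8
        if np.linalg.norm(X_next - X) < tol:               # step 9: stop when X stabilizes
            return X_next
        X_prev, X, t_prev, t = X, X_next, t, t_next
    return X

# Toy usage: quadratic "energies" pull the columns toward a noisy low-rank target,
# and the nuclear-norm term truncates the noise directions.
rng = np.random.default_rng(1)
target = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 8))
target += 0.1 * rng.standard_normal(target.shape)
X_hat = accelerated_proximal_gradient(np.zeros_like(target), lambda X: X - target,
                                      lam=3.0, mu=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))   # low-rank solution (about 2)
```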
# IV. Performance Analysis and Results Exploration

In this section, we evaluate the proposed method on both synthesized data and a surveillance face image sequence. To demonstrate the advantages of the Diminutive sequence optimality constraint, we compare the results of the same active contour model before and after applying the proposed constraint. We select the region-based active contour in (3) as the basic model, which is less sensitive to initialization and has fewer parameters to tune compared with edge-based methods. In our implementation, we initialize the active contours as $X^0 = [C_0, \ldots, C_0]$, where $C_0$ is a coarse outline of the object placed manually in an image. Three parameters need to be selected in our algorithm: $\nu$ in (3) controls the smoothness of each contour, $\lambda$ in (4) controls the Diminutive sequence optimality of the contours, and $\mu$ in (7) controls the step length of the curve evolution in each iteration. We choose the parameters empirically and use the same set of values for all experiments.

# a) Surveillance Captured Face Segmentation

We apply our method to a set of surveillance-captured face image sequences, as shown in Figure 1. Face recognition from surveillance image frames is a very challenging problem due to various misleading features in surveillance images. In Figure 2, a set of frames uniformly placed through the sequence is selected to demonstrate the results. For each panel, the top row and the bottom row present the results of the region-based active contour without and with the proposed constraint, respectively.

# b) Qualitative Comparison

Uniformly selected frames of two sequences are displayed in Figure 2 to qualitatively evaluate the segmentation. The results of the region-based active contour without the proposed constraint are given in the top rows; they are corrupted in several images, and the active contour is prone to be trapped by misleading features. The bottom rows of Figure 2 show the results obtained with the proposed constraint. There are two comments worth mentioning. Firstly, the contour shapes are globally consistent with each other throughout the sequence, which is attributed to the Diminutive sequence optimality constraint; hence, the contours are more resistant to local misleading features. Secondly, the constrained shape model is still flexible enough to adapt to the deformation of the object shape. A limitation of our method is that it cannot address the global bias of the model; therefore, the region-based active contours cannot attach closely to the true boundary. In practice, more appealing results can be obtained by including more energy terms such as edge-based energies, which is out of the scope of this paper.

# c) Quantitative Evaluation

We compared the variation in segmentation under different distances for the raw image, the preprocessed image, and the self-trained projection against the diminutive-sequence-trained projection. The results are summarized in Table 1, which shows the performance advantage of the diminutive-sequence-trained contour projection. Regarding the mean of the metrics, a smaller MAD/HD or a larger Dice coefficient indicates a more accurate segmentation. Generally, the performance with the proposed constraint is better than that without the constraint. The improvement in the diminutive-sequence-trained distance is the most notable, as it measures the largest error for each contour: part of the segmentation result is corrupted by the missing boundary, and this error can be corrected by adding the shape constraint. Regarding the standard deviation of the metrics, a smaller standard deviation indicates more stable performance. The standard deviation with the proposed constraint is distinctly lower than that without the constraint, which shows the significance of the proposed constraint in improving the robustness of the active contour model.

In our experiments, we selected $\lambda$ empirically and applied the same $\lambda$ to all sequences. The curve in Figure 4 shows that the accuracy changes smoothly over $\lambda$ and that the performance is stable over a wide range. An alternative is to choose a constant $K$ specifying the degrees of freedom allowed for shape variation and then solve the model with a decreasing sequence of $\lambda$ until $\mathrm{rank}(X)$ reaches $K$.

# d) Convergence and Computational Time

Our algorithm is implemented in Java and tested on a desktop with an Intel i7 3.4 GHz CPU and 3 GB RAM. The experiments showed that the algorithm with the shape constraint converged faster than that without the shape constraint. This can be explained by the fact that the added constraint makes the active contour model better regularized, which results in faster convergence and fewer iterations. The results indicate that the algorithm with the proposed constraint is even faster in computation than that without the constraint.

# V. Conclusion

In this paper, we proposed a simple and effective way to regularize the Diminutive sequence optimality of shapes in the active contour model based on low-rank modeling and rank minimization. We use a point-based (landmark) representation of the contour instead of level sets. The reason is that the low-rank property in (1) does not hold if the level-set representation is used: if there are $n$ contours represented by the zero-level sets of $n$ signed distance functions (SDFs) and the contours are identical in shape but different in location, the matrix consisting of the vectorized SDFs has rank $n$, i.e. it is full-rank. Other segmentation methods that rely on the level-set representation share this issue. A limitation of using the shape similarity constraint is the possibility of removing frame-specific details of the shapes. The trade-off between noise removal and signal preservation is a fundamental challenge in many problems. A possible solution in our problem is to refine the segmentation by running an active contour model that is more sensitive to local features, using our results both as initialization and as templates to constrain the curve evolution. In the future, the formation and projection of the missing contour structure can be carried out with support vector machines trained on the optimal contour features of the diminutive sequence.
Figure 1: Example surveillance-captured face image formation by projecting the missing active contours. Panels: (a) input image; (b) diminutive sequence used for training; (c) segmentation without preprocessing; (d) segmentation after preprocessing; (e) segmentation under self-trained projection; (f) segmentation under diminutive-sequence-trained projection.

Figure 2: Contour Projection Accuracy Comparison.
Table 1: Segmentation distance per frame under four strategies: segmenting without preprocessing, segmenting after preprocessing, segmenting with self-trained projection, and segmenting after training by the diminutive sequence.

# References

1. A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2, 2009.
2. A. Blake and M. Isard. Active Contours. Springer, 2000.
3. C. Bregler, A. Hertzmann, and H. Biermann. Recovering non-rigid 3D shape from image streams. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
4. J.-F. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
5. E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis. Journal of the ACM, 58(3):11, 2011.
6. E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
7. V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. International Journal of Computer Vision, 22(1), 1997.
8. T. Chan and L. Vese. Active contours without edges. IEEE Transactions on Image Processing, 10(2), 2001.
9. T. Chan and W. Zhu. Level set based shape prior segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2005.
10. Y. Chen, H. Tagare, S. Thiruvenkadam, F. Huang, D. Wilson, K. Gopinath, R. Briggs, and E. Geiser. Using prior shapes in geometric active contours in a variational framework. International Journal of Computer Vision, 50(3), 2002.
11. T. Cootes, C. Taylor, D. Cooper, and J. Graham. Active shape models - their training and application. Computer Vision and Image Understanding, 61, 1995.
12. D. Cremers. Dynamical statistical shape priors for level set-based tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8), 2006.
13. D. Cremers, T. Pock, K. Kolev, and A. Chambolle. Convex relaxation techniques for segmentation, stereo and multiview reconstruction. In Markov Random Fields for Vision and Image Processing. MIT Press, 2011.
14. D. Cremers, M. Rousson, and R. Deriche. A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. International Journal of Computer Vision, 72(2), 2007.
15. P. Etyngier, F. Ségonne, and R. Keriven. Shape priors using manifold learning techniques. In Proceedings of the IEEE International Conference on Computer Vision, 2007.
16. M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
17. A. Jain, Y. Zhong, and S. Lakshmanan. Object matching using deformable templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(3), 1996.
18. M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4), 1988.
19. M. Leventon, W. Grimson, and O. Faugeras. Statistical shape influence in geodesic active contours. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
20. R. Malladi, J. Sethian, and B. Vemuri. Shape modeling with front propagation: A level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2), 1995.
21. D. Metaxas. Physics-Based Deformable Models: Applications to Computer Vision, Graphics, and Medical Imaging. Kluwer Academic Publishers, 1996.
22. Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Papers, 2007.
23. J. Noble and D. Boukerroui. Ultrasound image segmentation: A survey. IEEE Transactions on Medical Imaging, 25(8), 2006.
24. N. Paragios. A level set approach for shape-driven segmentation and tracking of the left ventricle. IEEE Transactions on Medical Imaging, 22(6), 2003.
25. O. Sidi, O. van Kaick, Y. Kleiman, H. Zhang, and D. Cohen-Or. Unsupervised co-segmentation of a set of shapes via descriptor-space spectral clustering. ACM Transactions on Graphics, 30(6):126, 2011.
26. A. Srivastava, S. Joshi, W. Mio, and X. Liu. Statistical shape analysis: Clustering, learning, and testing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4), 2005.
27. L. Staib and J. Duncan. Boundary finding with parametrically deformable models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(11), 1992.
28. L. Torresani and C. Bregler. Space-time tracking. In Proceedings of the European Conference on Computer Vision, 2002.
29. A. Tsai, A. Yezzi Jr., W. Wells III, C. Tempany, D. Tucker, A. Fan, W. Grimson, and A. Willsky. Model-based curve evolution technique for image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2001.
30. R. Vidal and R. Hartley. Motion segmentation with missing data using PowerFactorization and GPCA. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
31. C. Xu and J. Prince. Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing, 7(3), 1998.
32. A. Yuille, P. Hallinan, and D. Cohen. Feature extraction from faces using deformable templates. International Journal of Computer Vision, 8(2):99-111, 1992.
33. S. Zhang, Y. Zhan, M. Dewan, J. Huang, D. Metaxas, and X. Zhou. Towards robust and effective shape modeling: Sparse shape composition. Medical Image Analysis, 16, 2012.
34. S. Zhu and A. Yuille. Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(9), 1996.
35. Y. Zhu, X. Papademetris, A. Sinusas, and J. Duncan. Segmentation of the left ventricle from cardiac MR images using a subject-specific dynamical model. IEEE Transactions on Medical Imaging, 29(3), 2010.