# Automated Road Lane Detection for Intelligent Vehicles

Anik Saha, Dipanjan Das Roy, Tauhidul Alam & Kaushik Deb

Abstract - Automated road lane detection is a crucial part of the vision-based driver assistance system of intelligent vehicles. Such a driver assistance system reduces road accidents, enhances safety and improves traffic conditions. In this paper, we present an algorithm for detecting the marks of road lane and road boundary with a view to the smart navigation of intelligent vehicles. Initially, it converts the RGB road scene image into a gray image and employs the flood-fill algorithm to label the connected components of that gray image. Afterwards, the largest connected component, which is the road region, is extracted from the labeled image using the maximum width and the number of pixels. Eventually, the outside region is subtracted and the marks of road lane and road boundary are extracted from the connected components. The experimental results show the effectiveness of the proposed algorithm on both straight and slightly curved road scene images under different daylight conditions and in the presence of shadows on the roads.

Keywords: Driver Assistance System, Computer Vision, Flood-fill Algorithm, Connected Component, Intelligent Vehicles.

# I. Introduction

Real-time automated road lane detection is an indispensable part of an intelligent vehicle safety system. The most significant development for intelligent vehicles is the driver assistance system, which holds great promise in increasing the safety, convenience and efficiency of driving. The driver assistance system involves a camera-assisted system that takes real-time images of the vehicle's surroundings and displays relevant information to the driver. Thus, intelligent vehicles automatically collect road lane information and the vehicle position relative to the lane. Consequently, the system provides the means to alert drivers who are swerving off the lane without prior use of the blinker. Intelligent vehicles will therefore clearly enhance traffic safety if they are taken into widespread use.

Fatalities and injuries resulting from road accidents have become a common phenomenon in Bangladesh and other Asian countries. Hence, an intelligent vehicle safety system can reduce fatalities and injuries by warning unaware drivers about danger.

Computer vision based on image processing deals with the issues of sensing the environment in an intelligent transportation system. The vision-based automated road lane detection approach aims to identify road lane markings along with road boundaries. Simultaneous detection of road lane markings and road boundaries is necessary for the proper orientation of intelligent vehicles. This detection process is likely to be obstructed by the presence of other vehicles in the same lane and by shadows on the road cast by trees, buildings, etc. in front of a vehicle. The approach can also be affected on curved roads rather than straight roads and under different daylight conditions.

Road lane detection has therefore attracted many researchers in recent decades, and many works have been carried out to detect road lanes from intelligent vehicles. In [1], a novel road lane detection approach was proposed based on lane geometrical features associated with the geometrical relationship between the camera and the road, which reduces the computation cost. A method using the HSI color model was also proposed for lane-marking detection [8]. In [5], the authors suggested a framework fusing color, texture and edges to recognize the lanes of country roads. A computer vision-based approach was proposed to detect multiple lanes on straight and curved roads in [2], although occlusion conditions were ignored there. The Hough transform has been applied to detect lanes in various cases [2, 4, 9]; however, Hough-transform-based algorithms require more memory and high computational time. For traffic safety, lane detection for moving vehicles was designed in [6], even when vehicles have the same color as the line marks and passing traffic. Apart from that, the distribution of color components was measured to detect lanes in urban traffic images in [7]. However, under various meteorological and lighting conditions (day, night, sunny, rainy, snowy) and road conditions (occlusion, degraded road markings), noise significantly undermines the estimation of road parameters in these methods. To resolve this problem, Chen [3] proposed a robust algorithm for lane detection under various bad scenes.

A road scene image can be divided into two main parts: the upper part and the lower part. The lower part usually contains more important objects than the upper one. Conventionally, road lane detection algorithms ignore the upper part to reduce the search area and shorten processing time. This paper presents a road lane detection algorithm using labeling based on the flood-fill algorithm, feature extraction and filtering. The algorithm is capable of detecting lanes on straight and curved roads under different daylight conditions, shadows and other noise. Here, the whole road scene image is employed.

The paper is organized as follows. In Section II, we introduce the environmental conditions assumed in this paper. The road lane detection algorithm is proposed in Section III. In Section IV, we provide experimental results to evaluate the performance of our algorithm. Eventually, we conclude the paper in Section V.

# II. Environmental Conditions

Environmental conditions play an important role while road lanes are being detected. Road scene images can vary for different weather conditions such as sunny, cloudy and shadowed days. Different barriers, or even nothing at all, may mark the road lane and road boundary, and the road surface consists of light or dark pavements or a combination of both. Different road scene images under various daylight and shadowing conditions are shown in Fig. 1. Solid and dashed lane marks on road scene images under good daylight conditions are easy to detect.

![Fig. 1: Diversity of road scene images](image-2.png "Fig. 1")

# III. Road Lane Detection Algorithm

The architecture of the proposed system is shown in Fig. 2.

![Fig. 2: Architecture of the system](image-3.png "Fig. 2")

# a) Connected Component Labeling

Connected component labeling is the initial stage of our algorithm. At this stage, we first convert the color image into a grayscale image. Next, we employ the flood-fill algorithm to label the connected components. We assume that all pixels are in an 8-connected neighborhood and that a pixel is connected to its neighbor if the intensity difference between them is less than 8.

# i. Conversion from RGB to Gray Image

RGB images are composed of three independent channels for the red, green and blue primary color components. For RGB to grayscale conversion, we take the three channel values of each pixel and average them; this average is the gray-level value of the corresponding pixel in the grayscale image. All pixels of the RGB image are scanned and this procedure is applied to convert it into a grayscale image.
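The conversion step is straightforward to implement. The following C++ sketch illustrates it under the assumption that the image is held as an interleaved 8-bit RGB buffer in row-major order; the `RgbImage`/`GrayImage` structs and the function name are illustrative and not taken from the paper.

```cpp
#include <cstdint>
#include <vector>

// A minimal interleaved 8-bit RGB image: pixels stored row by row as R,G,B triples.
struct RgbImage {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> data;  // size = width * height * 3
};

// Grayscale image with one 8-bit intensity per pixel.
struct GrayImage {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> data;  // size = width * height
};

// Convert RGB to gray by averaging the three channel values of each pixel.
GrayImage rgbToGray(const RgbImage& rgb) {
    GrayImage gray;
    gray.width = rgb.width;
    gray.height = rgb.height;
    gray.data.resize(static_cast<size_t>(rgb.width) * rgb.height);

    for (int y = 0; y < rgb.height; ++y) {
        for (int x = 0; x < rgb.width; ++x) {
            size_t src = (static_cast<size_t>(y) * rgb.width + x) * 3;
            int r = rgb.data[src];
            int g = rgb.data[src + 1];
            int b = rgb.data[src + 2];
            // Plain average of the three channels, as described above.
            gray.data[static_cast<size_t>(y) * rgb.width + x] =
                static_cast<uint8_t>((r + g + b) / 3);
        }
    }
    return gray;
}
```

A weighted luminance formula is a common alternative for this conversion, but the paper describes the plain channel average, so that is what the sketch uses.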
# ii. Apply Flood-fill Algorithm

Flood fill is an algorithm that determines the area connected to a given node in a multi-dimensional array. We use the flood-fill algorithm to detect the different connected components. The algorithm takes three parameters: a start node, a target intensity value and a replacement integer value. We use it to find all nodes in the array that are connected to the start node by a path of the target intensity value and to relabel them with the replacement integer value. In this way, we extract regions of relatively similar intensity.
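A minimal sketch of how this labeling stage could look in C++ follows, assuming the grayscale image is a flat row-major array of 8-bit intensities. An explicit stack replaces recursion, the replacement value is the running component id, and two 8-connected neighbors are treated as connected when their intensity difference is below a threshold of 8, as assumed in the labeling stage; the function and variable names are illustrative.

```cpp
#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

// Labels every pixel of a grayscale image with the id of its connected component.
// Pixels are 8-connected, and two neighboring pixels are treated as connected when
// their intensity difference is below `threshold` (8 in this paper).
// Returns a label image of the same size; labels start at 1, 0 means "not yet visited".
std::vector<int> labelComponents(const std::vector<uint8_t>& gray,
                                 int width, int height, int threshold = 8) {
    std::vector<int> labels(gray.size(), 0);
    int nextLabel = 0;

    for (int sy = 0; sy < height; ++sy) {
        for (int sx = 0; sx < width; ++sx) {
            if (labels[sy * width + sx] != 0) continue;   // already labeled

            ++nextLabel;                                   // replacement value for this fill
            std::vector<std::pair<int, int> > stack;       // iterative flood fill
            stack.push_back(std::make_pair(sx, sy));
            labels[sy * width + sx] = nextLabel;

            while (!stack.empty()) {
                int x = stack.back().first;
                int y = stack.back().second;
                stack.pop_back();
                int here = gray[y * width + x];

                // Visit the 8-connected neighborhood of the current pixel.
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        if (dx == 0 && dy == 0) continue;
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        int nidx = ny * width + nx;
                        if (labels[nidx] != 0) continue;   // visited already
                        if (std::abs(static_cast<int>(gray[nidx]) - here) < threshold) {
                            labels[nidx] = nextLabel;      // "replace" with the component id
                            stack.push_back(std::make_pair(nx, ny));
                        }
                    }
                }
            }
        }
    }
    return labels;
}
```

Running this once per image yields one label per region; the largest component, identified later by its maximum width and number of pixels, is treated as the road region (marked red in Fig. 3(a)).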
A road region marked by red color is shown in Fig. 3(a). The outside environment of the road region does not belong to the region of interest. The algorithm searches pixels horizontally from left to right and from right to left simultaneously, marks the pixels that are not part of the red region with green color, and looks for the red region as the search goes on. After finding the red region, it stops searching in that row, moves to the next row and performs the same task. The outside region marked by green color is shown in Fig. 3(b). The outside region is not subtracted completely yet; it will be subtracted using some attributes in the next stage.

![Fig. 3: Road region marked by red color (a) and outside region marked by green color (b)](image-4.png "Fig. 3")

The flowchart of labeling is depicted in Fig. 4.

![Fig. 4: Flowchart of labeling](image-5.png "Fig. 4")

# b) Feature Extraction

Feature extraction is the next stage of our algorithm. At this stage, the width of each connected component is calculated. To find the width, the algorithm searches the grid of pixels horizontally and keeps the current width of a connected component whenever it is greater than the previously stored width for that component. Next, we consider the number of pixels in each connected component. To find this, the algorithm searches throughout the labeled image and counts the pixels belonging to each component.
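The following C++ sketch computes both features in a single row-wise pass over the label image produced by the previous stage. The width of a component is taken to be its longest horizontal run of pixels in any row, matching the description above; the struct and function names are illustrative.

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Per-component features used by the algorithm.
struct ComponentFeatures {
    int maxWidth = 0;    // longest horizontal run of the component in any row
    int pixelCount = 0;  // total number of pixels in the component
};

// Scans the label image row by row, measuring horizontal runs of each label and
// counting pixels per component.
std::map<int, ComponentFeatures> extractFeatures(const std::vector<int>& labels,
                                                 int width, int height) {
    std::map<int, ComponentFeatures> features;

    for (int y = 0; y < height; ++y) {
        int runLabel = 0;    // label of the run currently being measured
        int runLength = 0;

        for (int x = 0; x < width; ++x) {
            int label = labels[y * width + x];
            features[label].pixelCount += 1;    // count every pixel of every component

            if (label == runLabel) {
                ++runLength;                     // horizontal run continues
            } else {
                if (runLabel != 0) {             // close the previous run
                    ComponentFeatures& f = features[runLabel];
                    f.maxWidth = std::max(f.maxWidth, runLength);
                }
                runLabel = label;
                runLength = 1;
            }
        }
        if (runLabel != 0) {                     // close the last run in the row
            ComponentFeatures& f = features[runLabel];
            f.maxWidth = std::max(f.maxWidth, runLength);
        }
    }
    return features;
}
```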
# c) Unwanted Region Subtraction

Unwanted region subtraction, along with filtering of the extracted connected components, is the final stage of our algorithm and plays a significant part in road lane detection. Using the extracted features, we find regions in the labeled image and subtract many of them. The outer side of the road is subtracted because we have no concern with regions that do not belong to the road; hence, we work only with the regions that are on the road. On the road, many unwanted regions may still be found. To subtract those regions we use two attributes: lane width and lane intensity. In Fig. 7(a), if the width of any region is greater than one eighteenth of the original image width, the region is subtracted, because the width of a road lane lies within this value. The output using this attribute is shown in Fig. 7(b). Furthermore, if any pixel has an intensity value lower than 170, it is subtracted, because road lanes are white. The output using this attribute is shown in Fig. 8(b). The parameter values used by the algorithm are summarized in Table 1.

![Fig. 7: Labeling image (a) and output taking width as an attribute (b)](image-6.png "Fig. 7")

Table 1: Parameter values used by the algorithm

| Parameter | Value |
| --- | --- |
| Connectivity (intensity difference) | < 8 |
| Lane width (ratio) | < 1/18 of the original image width |
| Lane intensity | > 170 |
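A sketch of this filtering stage in C++ is given below. It assumes the grayscale image and the label image from the earlier stages, plus a map from each component label to its maximum width (for example, built from the maxWidth feature of the previous sketch); the thresholds are the ones listed in Table 1 and the function name is illustrative.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Builds a binary lane-mark mask using the two filtering attributes of Table 1:
// a component whose maximum width exceeds 1/18 of the image width is discarded,
// and any remaining pixel darker than 170 is discarded because lane marks are
// assumed to be white. Returns 255 for lane-mark candidates and 0 elsewhere.
std::vector<uint8_t> subtractUnwantedRegions(const std::vector<uint8_t>& gray,
                                             const std::vector<int>& labels,
                                             const std::map<int, int>& componentMaxWidth,
                                             int width, int height) {
    const int maxLaneWidth = width / 18;   // lane-width attribute
    const int minIntensity = 170;          // lane-intensity attribute
    std::vector<uint8_t> mask(gray.size(), 0);

    for (int i = 0; i < width * height; ++i) {
        std::map<int, int>::const_iterator it = componentMaxWidth.find(labels[i]);
        if (it == componentMaxWidth.end() || it->second > maxLaneWidth)
            continue;                      // region too wide to be a lane mark
        if (gray[i] < minIntensity)
            continue;                      // too dark: lane marks are white
        mask[i] = 255;                     // keep as a lane-mark candidate
    }
    return mask;
}
```

The resulting mask keeps only narrow, bright regions, i.e. the lane-mark candidates remaining after both attributes are applied.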
# IV. Experimental Results

All experiments are performed on a Pentium-D 2.80 GHz machine with 512 MB of RAM under the Microsoft Visual Studio 2008 environment. Images with resolutions between 400x400 and 600x600 pixels are used. A database with a growing number of images is used for the experiments. All of these images are taken on highways and normal roads with dashed road lane and solid road boundary markings, on straight and curved roads, under different daylight conditions (sunny, cloudy and shadowing).

Figure 9 illustrates the performance of the system under different road conditions. We took 50 pictures under each condition and obtained the reported results. Additionally, Fig. 10(a) and 10(b) are the outputs of Fig. 1(a) and 1(b), which were taken under good illumination conditions. Similarly, Fig. 10(c) and 10(d) present the outputs of Fig. 1(c) and 1(d), which were taken under shadow conditions.

![Fig. 9: Average accuracy of lane detection in different road conditions](image-8.png "Fig. 9")

# V. Conclusion

An automated road lane detection algorithm for images taken from an intelligent vehicle is proposed in this paper. The algorithm starts with the conversion of the color (RGB) road scene image to a grayscale image. The flood-fill algorithm is used to label the connected components of the grayscale image, and the largest connected component is subsequently extracted from the labeled image. Finally, the unwanted region of the road scene image is subtracted and the extracted connected components are filtered to detect the white marks of road lane and road boundary. The algorithm is tested on a good number of road scene images taken from straight and slightly curved roads under different daylight and occlusion (vehicles and people) conditions. Experimental results show that the algorithm achieves good accuracy despite shadow conditions on the road. However, the road lane detection algorithm still has some problems, such as critical shadow conditions and road lane colors other than white. Therefore, our future work will be the improvement of the algorithm to overcome these problems.

# References

[1] Shengyan Zhou, Yanhua Jiang, Junqiang Xi, Jianwei Gong, Guangming Xiong and Huiyan Chen, "A Novel Lane Detection Based on Geometrical Model and Gabor Filter," Proc. of the IEEE Intelligent Vehicles Symposium, June 2010.
[2] Yan Jiang, Feng Gao and Guoyan Xu, "Computer Vision-Based Multiple-Lane Detection on a Straight Road and in a Curve," Proc. of the IEEE Conference on Image Analysis and Signal Processing, April 2010.
[3] Huan Shen, Shunming Li, Fangchao Bo, Xiaodong Miao, Fangpie Li and Wenyu Lu, "Intelligent Vehicle Oriented Lane Detection Approach under Bad Road Scene," Proc. of the IEEE Conference on Computer and Information Technology, October 2009.
[4] Abdulhakam A. M. Assidiq, Othman O. Khalifa, Md. Islam and Sheroz Khan, "Real Time Lane Detection for Autonomous Vehicles," Proc. of the IEEE Conference on Computer and Communication Engineering, 2008.
[5] U. Franke, H. Loose and C. Knoppel, "Lane Recognition on Country Roads," Proc. of the IEEE Intelligent Vehicles Symposium, 2007.
[6] Hsu-Yung Cheng, Bor-Shenn Jeng, Pei-Ting Tseng and K.-C. Fan, "Lane Detection with Moving Vehicles in the Traffic Scenes," IEEE Intelligent Transportation Systems, 2006.
[7] Yinghua He, Hong Wang and Bo Zhang, "Color-Based Road Detection in Urban Traffic Scenes," IEEE Intelligent Transportation Systems, 2006.
[8] Tsung-Ying Sun, Sheng-Jeng Tsai and Vincent Chan, "HSI Color Model Based Lane-Marking Detection," IEEE Intelligent Transportation Systems, 2006.
[9] J. Prakash, M. B. Meenavathi and K. Rajesh, "Linear Feature Extraction Using a Combined Approach of Hough Transform, Eigen Values and Raster Scan Algorithms," IEEE Intelligent Sensing and Information Processing, 2006.