Methods to detect road features for video-based in-vehicle navigation systems

Kyoung Ho Choi, Soon Young Park, Seong Hoon Kim, Kisung Lee, Jeong Ho Park, Seong Ik Cho, Jong Hyun Park

Research output: Contribution to journal › Article

17 Citations (Scopus)

Abstract

Understanding road features, such as the position and color of lane markings, in live video captured from a moving vehicle is essential for building video-based car navigation systems. In this article, the authors present a framework to detect road features in two difficult situations: (a) ambiguous road surface conditions (i.e., damaged roads and lane markings occluded by other vehicles on the road) and (b) poor illumination conditions (e.g., backlight during sunset). Furthermore, to identify the lane in which the driver is traveling, the authors present a Bayesian network (BN) model, which is needed to support more sophisticated navigation services such as recommending a lane change at an appropriate time before turning left or right at the next intersection. In the proposed BN approach, evidence from (1) a computer vision engine (e.g., lane-color detection) and (2) a navigation database (e.g., the total number of lanes) was fused to determine the lane number more accurately. Extensive simulation results indicated that the proposed methods are both robust and effective in detecting road features for a video-based car navigation system.
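The fusion step described in the abstract can be illustrated with a minimal sketch: a discrete Bayesian update over the driving-lane index that combines lane-marking-color evidence from a vision module with the total lane count from a navigation database. The lane-numbering convention, the color labels, and all likelihood values below are illustrative assumptions made for the sketch, not the BN structure or parameters reported in the paper.

# Minimal sketch (Python) of the evidence-fusion idea: a discrete Bayesian
# update over the driving-lane index. The sensor model below is assumed and
# illustrative only; it is not the paper's Bayesian network.

def lane_posterior(total_lanes, left_color, right_color):
    """Return P(lane index | color evidence) for lanes 1..total_lanes.

    Assumed convention: lane 1 is the innermost lane, next to the yellow
    center line; lane `total_lanes` borders the white edge line.
    """
    # Uniform prior over the lanes reported by the navigation database.
    prior = [1.0 / total_lanes] * total_lanes

    # Assumed likelihoods: probability of the observed marking color given the lane.
    def p_left(color, lane):
        # A yellow marking on the left is strong evidence for lane 1.
        if lane == 1:
            return 0.8 if color == "yellow" else 0.2
        return 0.1 if color == "yellow" else 0.9

    def p_right(color, lane):
        # A solid white edge line on the right suggests the outermost lane.
        if lane == total_lanes:
            return 0.7 if color == "white_edge" else 0.3
        return 0.2 if color == "white_edge" else 0.8

    unnormalized = [
        prior[i] * p_left(left_color, i + 1) * p_right(right_color, i + 1)
        for i in range(total_lanes)
    ]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

if __name__ == "__main__":
    # Example: a four-lane road (lane count from the navigation database), with the
    # vision module reporting a yellow marking on the left and a dashed white one on the right.
    posterior = lane_posterior(4, left_color="yellow", right_color="white_dashed")
    for lane, p in enumerate(posterior, start=1):
        print(f"P(lane {lane}) = {p:.2f}")

With these assumed values, the yellow left marking concentrates most of the posterior mass on lane 1, which is the kind of disambiguation that fusing map data with vision evidence is meant to provide.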

Original language: English
Pages (from-to): 13-26
Number of pages: 14
Journal: Journal of Intelligent Transportation Systems: Technology, Planning, and Operations
Volume: 14
Issue number: 1
DOI: 10.1080/15472450903386005
Publication status: Published - 2010 Jan 1


Keywords

  • Bayesian network
  • Driving-lane recognition
  • Lane detection
  • Lane-color recognition
  • Support vector machines
  • Video-based navigation system

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Automotive Engineering
  • Aerospace Engineering
  • Control and Systems Engineering
  • Applied Mathematics
  • Computer Science Applications

Cite this

Choi, K. H., Park, S. Y., Kim, S. H., Lee, K., Park, J. H., Cho, S. I., & Park, J. H. (2010). Methods to detect road features for video-based in-vehicle navigation systems. Journal of Intelligent Transportation Systems: Technology, Planning, and Operations, 14(1), 13-26. https://doi.org/10.1080/15472450903386005

@article{114612bfc18c4ecfbd6b27bc4cfd1dd6,
title = "Methods to detect road features for video-based in-vehicle navigation systems",
abstract = "Understanding road features such as position and color of lane markings in a live video captured from a moving vehicle is essential in building video-based car navigation systems. In this article, the authors present a framework to detect road features in 2 difficult situations: (a) ambiguous road surface conditions (i.e., damaged roads and occluded lane markings caused by the presence of other vehicles on the road) and (b) poor illumination conditions (e.g., backlight, during sunset). Furthermore, to understand the lane number that a driver is driving on, the authors present a Bayesian network (BN) model, which is necessary to support more sophisticated navigation services for drivers such as recommending lane change at an appropriate time before turning left or right at the next intersection. In the proposed BN approach, evidence from (1) a computer vision engine (e.g., lane-color detection) and (2) a navigation database (e.g., the total number of lanes) was fused to more accurately decide the lane number. Extensive simulation results indicated that the proposed methods are both robust and effective in detecting road features for a video-based car navigation system.",
keywords = "Bayesian network, Driving-lane recognition, Lane detection, Lane-color recognition, Support vector machines, Video-based navigation system",
author = "Choi, {Kyoung Ho} and Park, {Soon Young} and Kim, {Seong Hoon} and Kisung Lee and Park, {Jeong Ho} and Cho, {Seong Ik} and Park, {Jong Hyun}",
year = "2010",
month = "1",
day = "1",
doi = "10.1080/15472450903386005",
language = "English",
volume = "14",
pages = "13--26",
journal = "Journal of Intelligent Transportation Systems",
issn = "1547-2450",
publisher = "Taylor and Francis Ltd.",
number = "1",

}
