FLSNet: Robust Facial Landmark Semantic Segmentation

Hyungjoon Kim, Hyeonwoo Kim, Jehyeok Rew, Eenjun Hwang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


The human face is one of the most viewed visual objects in a person's life and is used for identifying a person through facial landmarks, which include the eyes, nose, mouth, and ears that make up a face. It is also possible to communicate nonverbally through the movements of facial landmarks, that is, through changes of facial expression. Thus, facial landmarks play a crucial role in human-related image analysis. Automatic facial landmark detection is a challenging problem in the field of computer vision, and various studies are underway. The emergence of deep neural networks has played an important role in solving difficult problems in computer vision. Semantic segmentation, in which images are classified at the pixel level, has also developed rapidly by incorporating deep learning. In this paper, we propose a method for accurately extracting facial landmarks using semantic segmentation. First, we introduce a semantic segmentation architecture for sophisticated landmark detection and datasets composed of facial images paired with ground truth. Then, we show how to improve the performance of pixel classification by adjusting for the imbalance in the number of pixels across facial landmark classes. Through extensive experiments, we evaluated our approach using the pixel accuracy and intersection over union metrics.
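The abstract's two evaluation metrics, pixel accuracy and intersection over union (IoU), and its pixel-imbalance adjustment can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are our own, and the inverse-frequency weighting shown here is only one common way to counteract the fact that background pixels vastly outnumber small landmark regions such as the eyes.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    # Fraction of pixels whose predicted class label matches the ground truth.
    return float(np.mean(pred == gt))

def per_class_iou(pred, gt, num_classes):
    # Intersection over union for each landmark class, computed per class
    # so that small regions (eyes, mouth) are scored independently of background.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

def inverse_frequency_weights(gt, num_classes):
    # Weight each class inversely to its pixel frequency, one illustrative way
    # to adjust for pixel imbalance when weighting a per-pixel loss.
    counts = np.bincount(gt.ravel(), minlength=num_classes).astype(float)
    return counts.sum() / (num_classes * np.maximum(counts, 1))

# Toy 2x2 example: one of four pixels is misclassified.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(pred, gt))        # 0.75
print(per_class_iou(pred, gt, 2))      # [0.5, 0.666...]
```

The per-class weights would typically be passed to a weighted cross-entropy loss during training, so that rare landmark classes contribute proportionally more to the gradient.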

Original language: English
Article number: 9123397
Pages (from-to): 116163-116175
Number of pages: 13
Journal: IEEE Access
Publication status: Published - 2020

Keywords

  • Facial landmark
  • deep neural networks
  • network architecture
  • pixel unbalance
  • semantic segmentation
  • weighted feature map

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
