Multi-channel lexicon integrated CNN-BILSTM models for sentiment analysis

Joosung Yoon, Hyeoncheol Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

We improved a sentiment classifier for predicting document-level sentiment on Twitter by using multi-channel lexicon embeddings. The core of the architecture is a CNN-BiLSTM that can capture high-level features and long-term dependencies in documents. We also applied a multi-channel method to the lexicon to improve lexicon features. The macro-averaged F1 score of our model outperformed the other classifiers in this paper by 1-4%. Our model achieved an F1 score of 64% on the SemEval Task 4 (2013-2016) datasets when multi-channel lexicon embedding was applied with 100-dimensional word embeddings.
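To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of a CNN-BiLSTM classifier with an added lexicon-embedding channel. It is an illustration of the general technique, not the authors' implementation: all hyperparameters (vocabulary size, the 15-dimensional lexicon channel, filter counts, hidden size, three output classes) are assumptions, except the 100-dimensional word embedding mentioned in the abstract.

```python
import torch
import torch.nn as nn

class MultiChannelLexiconCNNBiLSTM(nn.Module):
    """Sketch: word embeddings and lexicon-derived embeddings are
    concatenated as channels, a 1-D CNN extracts local n-gram features,
    and a BiLSTM models long-term dependencies over those features.
    Sizes other than embed_dim=100 are illustrative assumptions."""

    def __init__(self, vocab_size=5000, embed_dim=100, lexicon_dim=15,
                 n_filters=64, kernel_size=3, hidden=128, n_classes=3):
        super().__init__()
        # Channel 1: ordinary word embeddings (100-d, as in the paper).
        self.word_emb = nn.Embedding(vocab_size, embed_dim)
        # Channel 2: lexicon embeddings (e.g. per-word sentiment scores).
        self.lex_emb = nn.Embedding(vocab_size, lexicon_dim)
        # CNN over the concatenated channels captures high-level local features.
        self.conv = nn.Conv1d(embed_dim + lexicon_dim, n_filters,
                              kernel_size, padding=kernel_size // 2)
        # BiLSTM over the CNN features captures long-term dependencies.
        self.bilstm = nn.LSTM(n_filters, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        x = torch.cat([self.word_emb(tokens), self.lex_emb(tokens)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.bilstm(x)   # (batch, seq_len, 2 * hidden)
        doc = out.mean(dim=1)     # average-pool over time for a document vector
        return self.fc(doc)       # sentiment-class logits

model = MultiChannelLexiconCNNBiLSTM()
logits = model(torch.randint(0, 5000, (2, 20)))  # batch of 2 tweets, 20 tokens
print(logits.shape)  # torch.Size([2, 3])
```

Mean-pooling the BiLSTM outputs is one simple document-level readout; the same skeleton works with max-pooling or a final-hidden-state readout instead.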

Original language: English
Title of host publication: Proceedings of the 29th Conference on Computational Linguistics and Speech Processing, ROCLING 2017
Editors: Lun-Wei Ku, Yu Tsao, Chi-Chun Lee, Cheng-Zen Yang, Hung-Yi Lee, Richard T.-H. Tsai, Wen-Hsiang Lu, Shih-Hung Wu
Publisher: The Association for Computational Linguistics and Chinese Language Processing (ACLCLP)
Pages: 244-253
Number of pages: 10
ISBN (Electronic): 9789869576901
Publication status: Published - 2017 Nov 1
Event: 29th Conference on Computational Linguistics and Speech Processing, ROCLING 2017 - Taipei, Taiwan, Province of China
Duration: 2017 Nov 27 - 2017 Nov 28

Publication series

Name: Proceedings of the 29th Conference on Computational Linguistics and Speech Processing, ROCLING 2017

Conference

Conference: 29th Conference on Computational Linguistics and Speech Processing, ROCLING 2017
Country: Taiwan, Province of China
City: Taipei
Period: 17/11/27 - 17/11/28

Keywords

  • CNN-BiLSTM
  • Deep Learning
  • Lexicon
  • Multi-Channel
  • Sentiment analysis

ASJC Scopus subject areas

  • Language and Linguistics
  • Speech and Hearing

