Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach

Ramasamy Saravanakumar, Hyung Soo Kang, Choon Ki Ahn, Xiaojie Su, Hamid Reza Karimi

Research output: Contribution to journal › Article

8 Citations (Scopus)


This paper examines the robust stabilization problem of continuous-time delayed neural networks via a dissipativity-learning approach. A new learning algorithm is established that guarantees asymptotic stability as well as (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses several existing results, such as H∞ and passivity performance, in a unified framework. By introducing a Lyapunov-Krasovskii functional together with Legendre polynomials, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Illustrative examples demonstrate the usefulness of the established learning algorithm.
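To make the setting concrete, the following is a minimal sketch of the kind of system the abstract describes: a scalar continuous-time delayed neural network x'(t) = -a·x(t) + w·tanh(x(t - τ)) + u(t), stabilized here by plain linear state feedback u = -k·x and integrated with forward Euler. All parameter values (a, w, τ, k) are hypothetical, and this simple controller stands in for, but is not, the paper's dissipativity-learning algorithm or its LMI condition.

```python
from collections import deque
import math

def simulate_delayed_nn(a=1.0, w=0.5, tau=0.5, k=1.0,
                        dt=0.01, T=10.0, x0=1.0):
    """Forward-Euler simulation of a scalar delayed neural network
    x'(t) = -a*x(t) + w*tanh(x(t - tau)) + u(t), with u = -k*x.
    Returns the state at time T. Parameters are illustrative only."""
    n_delay = int(round(tau / dt))
    # Constant initial history on [-tau, 0]; deque acts as a delay line.
    history = deque([x0] * (n_delay + 1), maxlen=n_delay + 1)
    x = x0
    for _ in range(int(round(T / dt))):
        x_delayed = history[0]          # x(t - tau)
        u = -k * x                      # simple stabilizing feedback
        dx = -a * x + w * math.tanh(x_delayed) + u
        x += dt * dx
        history.append(x)               # maxlen drops the oldest sample
    return x
```

Since |w| < a + k, the delayed nonlinearity cannot overcome the linear decay, so the closed-loop state converges to the origin from the constant initial history; with the feedback removed (k = 0) stability here still holds only because |w| < a.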

Original language: English
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publication status: Accepted/In press - 2018 Aug 1


Keywords

  • Dissipativity learning
  • Legendre polynomial
  • neural networks
  • robust stabilization

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

