Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach

Ramasamy Saravanakumar, Hyung Soo Kang, Choon Ki Ahn, Xiaojie Su, Hamid Reza Karimi

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
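For readers unfamiliar with the performance index named in the abstract, the standard strict (Q,S,R)-α-dissipativity condition is given by the following supply-rate inequality (a textbook definition, not reproduced from the paper itself):

```latex
\int_0^{t_f} \bigl( y^{\top}(t)\,Q\,y(t) + 2\,y^{\top}(t)\,S\,u(t) + u^{\top}(t)\,R\,u(t) \bigr)\,dt
\;\ge\; \alpha \int_0^{t_f} u^{\top}(t)\,u(t)\,dt, \qquad \forall\, t_f \ge 0,
```

where u is the disturbance input and y the output. The commonly cited special cases are Q = -I, S = 0, R = γ²I, which recovers the H∞ (L2-gain) performance, and Q = 0, S = I, R = 0, which recovers passivity; this is why the abstract describes the result as unifying both.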

Original language: English
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOI: 10.1109/TNNLS.2018.2852807
Publication status: Accepted/In press - 2018 Aug 1
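The abstract mentions that the Lyapunov-Krasovskii functional is built with Legendre polynomials. As a purely illustrative sketch (not taken from the paper), the snippet below uses NumPy to verify the orthogonality property of the Legendre basis, which is the feature such delay-dependent constructions exploit:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_inner(j: int, k: int, n_quad: int = 10) -> float:
    """Inner product <P_j, P_k> on [-1, 1] via Gauss-Legendre quadrature.

    With n_quad nodes the quadrature is exact for polynomials of degree
    up to 2*n_quad - 1, so the result is exact for small j, k.
    """
    x, w = leggauss(n_quad)
    return float(np.sum(w * Legendre.basis(j)(x) * Legendre.basis(k)(x)))

# Orthogonality: <P_j, P_k> = 2 / (2k + 1) if j == k, and 0 otherwise.
print(legendre_inner(2, 2))  # ≈ 2/5
print(legendre_inner(1, 2))  # ≈ 0
```

The function name `legendre_inner` is a hypothetical helper for this sketch; the paper's actual construction applies such polynomials over the delay interval inside the functional.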

Keywords

  • Dissipativity learning
  • Legendre polynomial
  • neural networks
  • robust stabilization

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach. / Saravanakumar, Ramasamy; Kang, Hyung Soo; Ahn, Choon Ki; Su, Xiaojie; Karimi, Hamid Reza.

In: IEEE Transactions on Neural Networks and Learning Systems, 01.08.2018.

Research output: Contribution to journal › Article

@article{d4ecc9440cf3445ca7144bf9f8f7eb1d,
  title = "Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach",
  abstract = "This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.",
  keywords = "Dissipativity learning, Legendre polynomial, neural networks, robust stabilization",
  author = "Saravanakumar, {Ramasamy} and Kang, {Hyung Soo} and Ahn, {Choon Ki} and Su, {Xiaojie} and Karimi, {Hamid Reza}",
  year = "2018",
  month = "8",
  day = "1",
  doi = "10.1109/TNNLS.2018.2852807",
  language = "English",
  journal = "IEEE Transactions on Neural Networks and Learning Systems",
  issn = "2162-237X",
  publisher = "IEEE Computational Intelligence Society",
}

TY - JOUR
T1 - Robust Stabilization of Delayed Neural Networks
T2 - Dissipativity-Learning Approach
AU - Saravanakumar, Ramasamy
AU - Kang, Hyung Soo
AU - Ahn, Choon Ki
AU - Su, Xiaojie
AU - Karimi, Hamid Reza
PY - 2018/8/1
Y1 - 2018/8/1
N2 - This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
AB - This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
KW - Dissipativity learning
KW - Legendre polynomial
KW - neural networks
KW - robust stabilization
UR - http://www.scopus.com/inward/record.url?scp=85050997468&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85050997468&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2018.2852807
DO - 10.1109/TNNLS.2018.2852807
M3 - Article
C2 - 30072342
AN - SCOPUS:85050997468
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
SN - 2162-237X
ER -