Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: A matrix inequality approach

Choon Ki Ahn, Moon Kyou Song

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

This paper proposes a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models subject to disturbance. Dynamic neural network models trained with the proposed P2PEDLL, which is based on a matrix inequality formulation, are exponentially stable with a guaranteed exponential peak-to-peak norm performance. The P2PEDLL is obtained by solving two matrix inequalities with a fixed parameter, whose feasibility can be checked efficiently using existing standard numerical algorithms. A numerical example demonstrates the validity of the proposed direct learning law.
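
The abstract notes that the learning law reduces to solving matrix inequalities whose feasibility can be checked with standard numerical algorithms. The paper's actual P2PEDLL conditions are not reproduced here; as a hedged illustration of that generic step, the sketch below checks a textbook Lyapunov-type inequality AᵀP + PA + 2αP ≺ 0 (exponential stability with decay rate α) using SciPy. The system matrix A, rate alpha, and right-hand side Q are arbitrary illustrative choices, not values from the paper.

```python
# Minimal sketch (assumed example, not the paper's P2PEDLL condition):
# verify feasibility of the matrix inequality A^T P + P A + 2*alpha*P < 0
# by solving the shifted Lyapunov equation and testing definiteness of P.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])   # example system matrix (Hurwitz by construction)
alpha = 0.5                    # desired exponential decay rate
Q = np.eye(2)                  # any positive-definite right-hand side

# Solving (A + alpha*I)^T P + P (A + alpha*I) = -Q gives a candidate P;
# if P is positive definite, the strict inequality above is feasible.
A_shift = A + alpha * np.eye(2)
P = solve_continuous_lyapunov(A_shift.T, -Q)

eig_P = np.linalg.eigvalsh(P)
print("P positive definite:", bool(np.all(eig_P > 0)))
```

Full LMI conditions with multiple coupled inequalities, such as those in the paper, are typically handed to a semidefinite-programming solver instead; the Lyapunov-equation route above covers only the single-inequality case.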

Original language: English
Article number: 68
Journal: Journal of Inequalities and Applications
Volume: 2013
DOI: 10.1186/1029-242X-2013-68
Publication status: Published - 2013 Dec 1


Keywords

  • Disturbance
  • Dynamic neural network models
  • Exponential peak-to-peak norm performance
  • Matrix inequality
  • Training law

ASJC Scopus subject areas

  • Analysis
  • Applied Mathematics
  • Discrete Mathematics and Combinatorics

Cite this

@article{c4523ed1a86b445e8db10f956727a8dd,
  title = "Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: A matrix inequality approach",
  abstract = "The purpose of this paper is to propose a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models with disturbance. Dynamic neural network models trained by the proposed P2PEDLL based on matrix inequality formulation are exponentially stable, with a guaranteed exponential peak-to-peak norm performance. The proposed P2PEDLL can be determined by solving two matrix inequalities with a fixed parameter, which can be efficiently checked using existing standard numerical algorithms. We use a numerical example to demonstrate the validity of the proposed direct learning law.",
  keywords = "Disturbance, Dynamic neural network models, Exponential peak-to-peak norm performance, Matrix inequality, Training law",
  author = "Ahn, {Choon Ki} and Song, {Moon Kyou}",
  year = "2013",
  month = "12",
  day = "1",
  doi = "10.1186/1029-242X-2013-68",
  language = "English",
  volume = "2013",
  journal = "Journal of Inequalities and Applications",
  issn = "1025-5834",
  publisher = "Springer Publishing Company",
}

TY - JOUR
T1 - Peak-to-peak exponential direct learning of continuous-time recurrent neural network models
T2 - A matrix inequality approach
AU - Ahn, Choon Ki
AU - Song, Moon Kyou
PY - 2013/12/1
Y1 - 2013/12/1
N2 - The purpose of this paper is to propose a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models with disturbance. Dynamic neural network models trained by the proposed P2PEDLL based on matrix inequality formulation are exponentially stable, with a guaranteed exponential peak-to-peak norm performance. The proposed P2PEDLL can be determined by solving two matrix inequalities with a fixed parameter, which can be efficiently checked using existing standard numerical algorithms. We use a numerical example to demonstrate the validity of the proposed direct learning law.
AB - The purpose of this paper is to propose a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models with disturbance. Dynamic neural network models trained by the proposed P2PEDLL based on matrix inequality formulation are exponentially stable, with a guaranteed exponential peak-to-peak norm performance. The proposed P2PEDLL can be determined by solving two matrix inequalities with a fixed parameter, which can be efficiently checked using existing standard numerical algorithms. We use a numerical example to demonstrate the validity of the proposed direct learning law.
KW - Disturbance
KW - Dynamic neural network models
KW - Exponential peak-to-peak norm performance
KW - Matrix inequality
KW - Training law
UR - http://www.scopus.com/inward/record.url?scp=84892851642&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84892851642&partnerID=8YFLogxK
U2 - 10.1186/1029-242X-2013-68
DO - 10.1186/1029-242X-2013-68
M3 - Article
AN - SCOPUS:84892851642
VL - 2013
JO - Journal of Inequalities and Applications
JF - Journal of Inequalities and Applications
SN - 1025-5834
M1 - 68
ER -