A Meta-Cognitive Learning Algorithm for an Extreme Learning Machine Classifier

R. Savitha, S. Suresh, Hyong Joong Kim

Research output: Contribution to journal › Article

47 Citations (Scopus)

Abstract

This paper presents an efficient, fast-learning classifier based on the Nelson and Narens model of human meta-cognition, namely the 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component of McELM is a three-layered extreme learning machine (ELM) classifier. The neurons in the hidden layer of the cognitive component employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component of McELM has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn in a meta-cognitive framework. As the samples in the training set are presented one by one, the meta-cognitive component receives monitoring signals from the cognitive component and chooses a suitable learning strategy for each sample: it either deletes the sample, uses the sample to add a new neuron, updates the output weights based on the sample, or reserves the sample for future use. Therefore, unlike the conventional ELM, the architecture of McELM is not fixed a priori; instead, the network is built during the training process. When adding a neuron, McELM chooses its center based on the sample, while the width of the Gaussian function is chosen randomly. The output weights are estimated using a least-squares estimate based on the hinge-loss error function. The hinge-loss error function facilitates prediction of posterior probabilities better than the mean-square error and is hence preferred in developing the McELM classifier. When updating the network parameters, the output weights are updated using a recursive least-squares estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository. The performance study results highlight that meta-cognition in the ELM framework significantly enhances the decision-making ability of ELM.
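The self-regulatory mechanism described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the threshold names (`delete_thr`, `add_thr`, `learn_thr`) and the scalar `novelty` measure are placeholders for the paper's actual meta-cognitive criteria, and the q-Gaussian form assumes the standard definition (which reduces to a Gaussian as q → 1).

```python
import numpy as np

def q_gaussian(x, center, width, q=1.5):
    """q-Gaussian activation for a hidden neuron (standard definition,
    assumed here; reduces to a Gaussian as q -> 1). Returns a value in
    (0, 1], equal to 1 when x coincides with the center."""
    d2 = np.sum((np.asarray(x) - np.asarray(center)) ** 2)
    base = 1.0 - (1.0 - q) * d2 / (width ** 2)
    return max(base, 0.0) ** (1.0 / (1.0 - q))

def choose_strategy(pred_error, novelty,
                    delete_thr=0.05, add_thr=0.8, learn_thr=0.3):
    """Meta-cognitive decision for one training sample, choosing among
    the four strategies named in the abstract. Thresholds are
    illustrative placeholders."""
    if pred_error < delete_thr:
        return "delete"            # sample carries no new information
    if pred_error > add_thr and novelty > add_thr:
        return "add-neuron"        # grow the network on this sample
    if pred_error > learn_thr:
        return "update-weights"    # recursive least-squares update
    return "reserve"               # set the sample aside for later
```

Samples are then streamed through `choose_strategy` one by one, so the network architecture emerges during training rather than being fixed a priori.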

Original language: English
Pages (from-to): 253-263
Number of pages: 11
Journal: Cognitive Computation
Volume: 6
Issue number: 2
DOIs: 10.1007/s12559-013-9223-2
Publication status: Published - 2014 Jan 1

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

A Meta-Cognitive Learning Algorithm for an Extreme Learning Machine Classifier. / Savitha, R.; Suresh, S.; Kim, Hyong Joong.

In: Cognitive Computation, Vol. 6, No. 2, 01.01.2014, p. 253-263.

Research output: Contribution to journal › Article

@article{e6b261729cc046aca281007fea4e1b5a,
title = "A Meta-Cognitive Learning Algorithm for an Extreme Learning Machine Classifier",
abstract = "This paper presents an efficient fast learning classifier based on the Nelson and Narens model of human meta-cognition, namely 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component of McELM is a three-layered extreme learning machine (ELM) classifier. The neurons in the hidden layer of the cognitive component employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component of McELM has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn in a meta-cognitive framework. As the samples in the training set are presented one-by-one, the meta-cognitive component receives the monitory signals from the cognitive component and chooses suitable learning strategies for the sample. Thus, it either deletes the sample, uses the sample to add a new neuron, or updates the output weights based on the sample, or reserves the sample for future use. Therefore, unlike the conventional ELM, the architecture of McELM is not fixed a priori, instead, the network is built during the training process. While adding a neuron, McELM chooses the centers based on the sample, and the width of the Gaussian function is chosen randomly. The output weights are estimated using the least square estimate based on the hinge-loss error function. The hinge-loss error function facilitates prediction of posterior probabilities better than the mean-square error and is hence preferred to develop the McELM classifier. While updating the network parameters, the output weights are updated using a recursive least square estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository. Performance study results highlight that meta-cognition in ELM framework enhances the decision-making ability of ELM significantly.",
keywords = "Classification, Extreme learning machine, Hinge-loss error function, Meta-cognition, Self-regulatory learning mechanism",
author = "R. Savitha and S. Suresh and Kim, {Hyong Joong}",
year = "2014",
month = "1",
day = "1",
doi = "10.1007/s12559-013-9223-2",
language = "English",
volume = "6",
pages = "253--263",
journal = "Cognitive Computation",
issn = "1866-9956",
publisher = "Springer New York",
number = "2",

}

TY - JOUR

T1 - A Meta-Cognitive Learning Algorithm for an Extreme Learning Machine Classifier

AU - Savitha, R.

AU - Suresh, S.

AU - Kim, Hyong Joong

PY - 2014/1/1

Y1 - 2014/1/1

N2 - This paper presents an efficient fast learning classifier based on the Nelson and Narens model of human meta-cognition, namely 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component of McELM is a three-layered extreme learning machine (ELM) classifier. The neurons in the hidden layer of the cognitive component employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component of McELM has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn in a meta-cognitive framework. As the samples in the training set are presented one-by-one, the meta-cognitive component receives the monitory signals from the cognitive component and chooses suitable learning strategies for the sample. Thus, it either deletes the sample, uses the sample to add a new neuron, or updates the output weights based on the sample, or reserves the sample for future use. Therefore, unlike the conventional ELM, the architecture of McELM is not fixed a priori, instead, the network is built during the training process. While adding a neuron, McELM chooses the centers based on the sample, and the width of the Gaussian function is chosen randomly. The output weights are estimated using the least square estimate based on the hinge-loss error function. The hinge-loss error function facilitates prediction of posterior probabilities better than the mean-square error and is hence preferred to develop the McELM classifier. While updating the network parameters, the output weights are updated using a recursive least square estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository. Performance study results highlight that meta-cognition in ELM framework enhances the decision-making ability of ELM significantly.

AB - This paper presents an efficient fast learning classifier based on the Nelson and Narens model of human meta-cognition, namely 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component of McELM is a three-layered extreme learning machine (ELM) classifier. The neurons in the hidden layer of the cognitive component employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component of McELM has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn in a meta-cognitive framework. As the samples in the training set are presented one-by-one, the meta-cognitive component receives the monitory signals from the cognitive component and chooses suitable learning strategies for the sample. Thus, it either deletes the sample, uses the sample to add a new neuron, or updates the output weights based on the sample, or reserves the sample for future use. Therefore, unlike the conventional ELM, the architecture of McELM is not fixed a priori, instead, the network is built during the training process. While adding a neuron, McELM chooses the centers based on the sample, and the width of the Gaussian function is chosen randomly. The output weights are estimated using the least square estimate based on the hinge-loss error function. The hinge-loss error function facilitates prediction of posterior probabilities better than the mean-square error and is hence preferred to develop the McELM classifier. While updating the network parameters, the output weights are updated using a recursive least square estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository. Performance study results highlight that meta-cognition in ELM framework enhances the decision-making ability of ELM significantly.

KW - Classification

KW - Extreme learning machine

KW - Hinge-loss error function

KW - Meta-cognition

KW - Self-regulatory learning mechanism

UR - http://www.scopus.com/inward/record.url?scp=84901200630&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84901200630&partnerID=8YFLogxK

U2 - 10.1007/s12559-013-9223-2

DO - 10.1007/s12559-013-9223-2

M3 - Article

VL - 6

SP - 253

EP - 263

JO - Cognitive Computation

JF - Cognitive Computation

SN - 1866-9956

IS - 2

ER -