
Enhanced conjugate gradient methods for training MLP-networks

Izzeldin, H. and Asirvadam, Vijanth Sagayan and Saad, Nordin (2010) Enhanced conjugate gradient methods for training MLP-networks. In: 2010 IEEE Student Conference on Research and Development (SCOReD), 13-14 December 2010, Putrajaya.

PDF (Published IEEE paper) - Published Version, 704 KB
Restricted to Registered users only

Official URL: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arn...

Abstract

The paper investigates enhancements to various conjugate gradient training algorithms applied to a multilayer perceptron (MLP) neural network architecture. Seven conjugate gradient algorithms proposed by different researchers between 1952 and 2005 are compared against the classical batch back-propagation algorithm and the full-memory and memory-less BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithms. These algorithms are tested on predicting fluid height in two different control tank benchmark problems. Simulation results show that full-memory BFGS achieves the best overall performance, i.e. the lowest prediction error, but at the cost of higher memory usage and longer computation time than the conjugate gradient methods.
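As a rough illustration of the comparison the abstract describes (not the authors' code, and without the seven specific conjugate gradient variants or the tank-level data set), the following minimal Python sketch trains a tiny one-hidden-layer MLP on synthetic regression data with SciPy's generic nonlinear conjugate gradient, full-memory BFGS, and limited-memory BFGS optimizers; the network size, data, and loss are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression data standing in for the tank-level benchmark (assumption).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

n_in, n_hid = X.shape[1], 6  # small MLP: 2 inputs, 6 tanh hidden units, 1 output

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def sse(w):
    """Sum-of-squared-errors loss of the MLP prediction over the batch."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)   # hidden layer
    pred = h @ W2 + b2         # linear output unit
    return np.sum((pred - y) ** 2)

n_params = n_in * n_hid + n_hid + n_hid + 1
w0 = rng.normal(scale=0.1, size=n_params)

# 'CG' is SciPy's Polak-Ribiere nonlinear conjugate gradient; 'BFGS' keeps a
# full inverse-Hessian approximation; 'L-BFGS-B' is the limited-memory variant.
for method in ("CG", "BFGS", "L-BFGS-B"):
    res = minimize(sse, w0, method=method, options={"maxiter": 200})
    print(f"{method:8s}  final SSE = {res.fun:.4f}  iterations = {res.nit}")

The trade-off the paper reports shows up in this framing as well: full-memory BFGS stores an n-by-n inverse-Hessian approximation over all n network weights, whereas conjugate gradient and limited-memory BFGS keep only a few vectors of length n.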

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: BFGS; Broyden, Fletcher, Goldfarb and Shanno; MLP; MLP networks; conjugate gradient methods enhancement; fluid height prediction; gradient training algorithms; multilayer perceptron; neural network architecture; tank benchmark problems; gradient methods; learning (artificial intelligence); multilayer perceptrons
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments / MOR / COE: Centre of Excellence > Centre for Automotive Research
Departments > Electrical & Electronic Engineering
ID Code: 4632
Deposited By: Dr Vijanth Sagayan Asirvadam
Deposited On: 05 Dec 2011 03:01
Last Modified: 19 Jan 2017 08:23
