Two modifications of Levenberg-Marquardt's method for fast batch neural network training

Viktor Sobetskyy, Stanisław Grzegórski

Abstract


This article addresses the problem of training artificial neural networks and its parallelisation. It compares the Levenberg-Marquardt method (LMM) with two of its modifications for training artificial neural networks: JWM (the Jacobian matrix is formed at each step) and BKM (the Jacobian is calculated only in the first step). These algorithms have two properties: 1) the calculations are simpler; 2) they are partly parallelised. The experiments confirmed their efficiency: a neural network trained with them needs a similar number of epochs to the LMM but less training time.
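For context, the core computation that the two modifications vary is the standard Levenberg-Marquardt weight update, Δw = (JᵀJ + λI)⁻¹Jᵀe, where J is the Jacobian of the network errors and λ is the damping parameter. The sketch below is an illustrative NumPy implementation of this textbook update step, not the paper's exact algorithm; the function name `lm_step` and the training loop structure are assumptions for illustration.

```python
import numpy as np

def lm_step(J, e, lam):
    """One Levenberg-Marquardt update: dw = (J^T J + lam*I)^-1 J^T e.

    J   -- Jacobian of the error vector w.r.t. the weights (n_samples x n_weights)
    e   -- error vector (n_samples,)
    lam -- damping parameter blending Gauss-Newton (lam -> 0) and gradient descent
    """
    JtJ = J.T @ J
    g = J.T @ e
    # Solve the damped normal equations rather than forming an explicit inverse.
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), g)
```

In these terms, JWM would recompute J before every call to `lm_step`, while BKM would compute J once at the first step and reuse it afterwards, trading some accuracy of the update direction for cheaper iterations.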



DOI: http://dx.doi.org/10.17951/ai.2004.2.1.31-36
Date of publication: 2015-01-04 00:00:00
Date of submission: 2016-04-27 10:11:05





Copyright (c) 2015 Annales UMCS Sectio AI Informatica

This work is licensed under a Creative Commons Attribution 4.0 International License.