Extreme Learning Machine (ELM) and OP-ELM
Abstract: Neural networks (NN) and support vector machines (SVM) have
played key roles in machine learning and data analysis over the past
two to three decades. However, these popular learning techniques face
challenging issues such as intensive human intervention, slow learning
speed, and poor learning scalability. A "new" learning technique
referred to as the Extreme Learning Machine (ELM) addresses these
problems. ELM not only learns up to tens of thousands of times faster
than NNs and SVMs, but also provides a unified implementation for
regression, binary, and multi-class applications.
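The core idea behind ELM's speed can be sketched in a few lines: the hidden-layer weights are drawn at random and never trained, so fitting reduces to a single least-squares solve for the output weights. The following is a minimal illustrative sketch (not the speakers' implementation); the function names, the sigmoid activation, and the toy data are assumptions for illustration, with NumPy as the only dependency.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Fit a basic ELM: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T                 # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression example: approximate y = x^2 on [0, 1].
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
T = X ** 2
W, b, beta = elm_train(X, T, n_hidden=20)
mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Because training is one pseudo-inverse computation rather than iterative gradient descent, the cost is essentially that of a single linear solve, which is what makes ELM orders of magnitude faster than backpropagation-trained networks.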
In this seminar, the basic ELM, the optimally pruned extreme learning
machine (OP-ELM) methodology, and the TROP-ELM are presented. Both
OP-ELM and TROP-ELM are based on the original extreme learning machine
(ELM) algorithm, with additional steps that make it more robust and
generic. Both methodologies are presented in detail and then applied
to several regression and classification problems. Results for both
computational time and accuracy (mean square error) are compared to
the original ELM and to three other widely used methodologies: the
multilayer perceptron (MLP), the support vector machine (SVM), and
the Gaussian process (GP). As the experiments for both regression and
classification illustrate, the proposed OP-ELM and TROP-ELM
methodologies run several orders of magnitude faster than the other
algorithms, with the exception of the original ELM. Despite their
simplicity and speed, OP-ELM and TROP-ELM still maintain an accuracy
comparable to that of the SVM.
This talk will take place in room C20-13, 20th floor, Université
Paris 1, Centre Pierre Mendès-France, 90 rue de Tolbiac, 75013 Paris
(métro: Olympiades).