Elsy Gómez-Ramos y Francisco Venegas-Martínez
Revista de Análisis Estadístico
Journal of Statistical Analysis
2. In the second group we can locate the Recurrent Networks (RCNs), which are characterized by the dynamism of their connectivity: these networks store information that will be used later. The networks that share this feature are: the Elman Network (ELN) (Kuan and Liu, 1995; Selvaratnam and Kirley, 2006; Sitte, 2002; Yumlu et al., 2005); modifications to the Elman network (Kodogiannis and Lolis, 2002; Perez-Rodriguez et al., 2005); Partially Recurrent Networks (PRN) (Kodogiannis and Lolis, 2002; Perez-Rodriguez et al., 2005); and Autoregressive Networks (ARN) (Kodogiannis and Lolis, 2002).
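The defining feature of these recurrent networks, a context layer that stores the previous hidden state, can be sketched in a few lines. This is a minimal illustration in the spirit of the ELN, not an implementation of any cited model; all layer sizes and weights are arbitrary assumptions:

```python
import numpy as np

# Minimal Elman-style forward pass (illustrative sketch; weights and
# sizes are arbitrary assumptions, not taken from the cited papers).
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4

W_in  = rng.normal(scale=0.5, size=(n_hid, n_in))   # input  -> hidden
W_ctx = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden
W_out = rng.normal(scale=0.5, size=n_hid)           # hidden -> single output

def elman_forward(series):
    """Feed a 1-D series through the net; the context layer keeps a copy
    of the previous hidden state, which acts as the network's memory."""
    context = np.zeros(n_hid)
    outputs = []
    for x in series:
        hidden = np.tanh(W_in @ np.array([x]) + W_ctx @ context)
        outputs.append(float(W_out @ hidden))
        context = hidden  # store the hidden state for the next time step
    return outputs

preds = elman_forward([0.1, 0.2, 0.3])
```

Because the context layer carries history, feeding the same values in a different order yields a different final output, which is precisely the "dynamism of connectivity" described above.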
3. In the third group we can find the Polynomial Networks (PLNs), which typically offer efficient processing of polynomial (higher-order) input terms; training comparable models with sigmoidal or Gaussian activation functions would instead be exhaustive. The networks that share this feature are: pi-sigma networks such as the Ridge Polynomial Network (RPN) and its dynamic version (Ghazali et al., 2007, 2009 and 2011), as well as the Function Link Network (FLN) (Hussain et al., 2008).
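The pi-sigma building block underlying these polynomial networks combines trainable linear summing ("sigma") units through a fixed product ("pi") unit, yielding a polynomial of the inputs without enumerating every higher-order term. A minimal sketch (weights, sizes, and the final squashing function are illustrative assumptions):

```python
import numpy as np

# Pi-sigma unit sketch: trainable linear summing units whose outputs are
# multiplied together, giving a degree-3 polynomial of the inputs without
# an exhaustive list of higher-order terms. All values are illustrative.
rng = np.random.default_rng(1)
n_in, degree = 2, 3          # 3 summing units -> polynomial of degree 3

W = rng.normal(size=(degree, n_in))
b = rng.normal(size=degree)

def pi_sigma(x):
    sums = W @ x + b                       # sigma: trainable linear sums
    return float(np.tanh(np.prod(sums)))   # pi: fixed product, then squash

y = pi_sigma(np.array([0.5, -0.2]))
```

Only the summing-layer weights `W` and `b` would be trained; the product layer has no weights, which is what keeps the training cost low relative to a fully higher-order network.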
4. In the fourth group are the Modular Networks (MNs), which consist of various modules (networks) that solve tasks separately and then combine the answers in a logical manner. One possibility is to use different network architectures (Zhang and Berardi, 2001); another alternative is to apply different initialization weights while keeping the same network architecture (Adeodato et al., 2011; Zhang and Berardi, 2001).
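The second alternative, identical architectures differing only in their random initialization, can be sketched as follows. The architecture, the averaging rule, and the input are illustrative assumptions, not the cited authors' designs:

```python
import numpy as np

# Modular-network sketch: several small MLPs that share one architecture
# but differ in their random initialization; their answers are combined
# here by a simple average. All choices are illustrative assumptions.
def make_mlp(seed, n_in=2, n_hid=3):
    r = np.random.default_rng(seed)
    W1, b1 = r.normal(size=(n_hid, n_in)), r.normal(size=n_hid)
    W2, b2 = r.normal(size=n_hid), r.normal()
    def mlp(x):
        return float(W2 @ np.tanh(W1 @ x + b1) + b2)
    return mlp

modules = [make_mlp(seed) for seed in range(5)]

def combine(x):
    # "logical combination" of the module answers: here, their average
    return sum(m(x) for m in modules) / len(modules)

x = np.array([0.1, 0.4])
ensemble_out = combine(x)
```

Averaging is only one combination rule; a vote or a trained gating network would fit the same modular template.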
5. In the fifth group we can find the Support Vector Machine (SVM); this network belongs to the kernel (nucleus) based models. The idea is to construct a hyperplane as a decision surface that maximizes the margin of separation (Carpinteiro et al., 2011; Kara et al., 2011; Shen et al., 2011).
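The margin-maximization idea can be illustrated with a linear SVM trained by sub-gradient descent on the hinge loss. The toy data, learning rate, regularization constant, and epoch count are all illustrative assumptions; this is a sketch of the principle, not the cited authors' kernel formulations:

```python
import numpy as np

# Linear SVM sketch: minimize hinge loss + L2 regularizer by
# sub-gradient descent, which pushes the separating hyperplane toward
# the maximum-margin solution. Data and hyperparameters are toy values.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for epoch in range(200):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) < 1:        # point inside the margin
            w += lr * (yi * xi - lam * w)
            b += lr * yi
        else:                            # correct with margin: only shrink w
            w -= lr * lam * w

preds = np.sign(X @ w + b)
```

A kernel SVM replaces the inner products with a kernel function, which is what makes the decision surface nonlinear in the original inputs.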
[Figura 1: diagram omitted; labeled network families include MLP, ELN, MDNs, DRPNN.]
Figura 1. Some neural networks applied in stock market and exchange rate forecasting. Source: author elaboration.
4 A Comparative Analysis of ANNs
This section presents the main characteristics of the ANNs. All these characteristics (or properties) take the MLP as a reference point:
a) Because the MLP does not have a dynamic structure, RCNs are proposed as an alternative. Therefore, the ELN could perform better than the MLP (Perez-Rodriguez et al., 2005; Selvaratnam and Kirley, 2006; Sitte, 2002). However, in the ELN all nodes are connected to the other nodes, which can make training difficult; another proposal is the PRN (Kodogiannis and Lolis, 2002; Perez-Rodriguez et al., 2005) or the ARN, for which a more efficient training is expected (Kiani and Kastens, 2008; Kodogiannis and Lolis, 2002) because the nodes are connected to themselves.
b) Finding the best network is usually based on trial-and-error criteria, so this kind of method wastes information and time (which is usually reflected in unstable forecasts); using MDNs with different network sizes would avoid this selection process (Adeodato et al., 2011; Zhang and Berardi, 2001). Another proposal is to apply the GMDHN, which increases in size during training (Pham and Liu, 1995), or the DNN, which increases the number of hidden layers dynamically (Guresen, Kayakutlu and Daim, 2010).
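The contrast between trial-and-error size selection and a network that grows during training can be sketched with a toy experiment. The fitting routine below (random hidden layer, least-squares output weights) is a stand-in assumption for illustration only, not the GMDHN or DNN algorithms of the cited papers:

```python
import numpy as np

# Sketch of evaluating candidate hidden-layer sizes, the step a growing
# network (GMDHN/DNN spirit) automates. The fit routine is a stand-in:
# random hidden weights with a least-squares output layer. Toy data.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 1))
y_all = np.sin(3 * X[:, 0])
X_tr, y_tr = X[:30], y_all[:30]
X_val, y_val = X[30:], y_all[30:]

def fit_random_mlp(n_hid, tries=20):
    """Return the best validation MSE over several random hidden layers."""
    best = None
    for t in range(tries):
        r = np.random.default_rng(t)
        W, b = r.normal(size=(n_hid, 1)), r.normal(size=n_hid)
        H = np.tanh(X_tr @ W.T + b)                      # hidden activations
        beta, *_ = np.linalg.lstsq(H, y_tr, rcond=None)  # output weights
        err = float(np.mean((np.tanh(X_val @ W.T + b) @ beta - y_val) ** 2))
        if best is None or err < best:
            best = err
    return best

sizes = [1, 2, 4, 8]
errors = [fit_random_mlp(n) for n in sizes]  # manual trial-and-error sweep
```

A growing network would walk through such sizes automatically, stopping when the validation error no longer improves, instead of requiring this manual sweep.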
Analítika, Revista de análisis estadístico, 3 (2013), Vol. 6(2): 7-15