Neural Networks

Artificial Neural Networks (ANNs) can solve both supervised and unsupervised problems, such as clustering and the modelling of qualitative responses (classification). An ANN mimics the action of a biological network of neurons, where each neuron accepts signals from neighbouring neurons and can give an output signal. The function that calculates the output vector from the input vector is composed of two parts: the first evaluates the net input as a linear combination of the input variables, each multiplied by a coefficient called a weight; the second transfers the net input in a non-linear manner to the output. Artificial neural networks can be composed of different numbers of neurons, placed in one or more layers.
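The two-part output function described above can be sketched in a few lines of Python. The sigmoid transfer function and the function name are illustrative choices here; the text does not prescribe a particular non-linearity:

```python
import math

def neuron_output(x, weights, bias=0.0):
    # Part 1: net input -- a linear combination of the input
    # variables, each multiplied by its weight.
    net = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Part 2: non-linear transfer of the net input to the output
    # (a sigmoid, chosen here purely for illustration).
    return 1.0 / (1.0 + math.exp(-net))
```

With zero net input the sigmoid returns 0.5; larger positive net inputs push the output towards 1.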

Among ANN learning strategies, Kohonen Maps and Counterpropagation Artificial Neural Networks are two of the most popular approaches.


Kohonen maps

Kohonen Maps are self-organising systems applied to unsupervised problems (cluster analysis and data structure analysis). In Kohonen maps, similar input objects are linked to topologically close neurons in the network. The neurons have as many weights as the number of input variables and learn to identify the location in the ANN that is most similar to the input vectors; the weights of the net are updated on the basis of the input object, i.e. the network is modified each time an object is introduced, and all the objects are introduced a certain number of times (epochs). An example of the structure of a Kohonen map with dimension 5x5, built for a dataset described by *p* variables, is shown in the following picture.

The Kohonen map is usually characterized by being a squared (or hexagonal) toroidal space, which consists of a grid of *N*^{2} neurons, where *N* is the number of neurons on each side of the squared space. Each neuron contains as many elements (weights) as the number of input variables. The weights of each neuron are randomly initialised between 0 and 1 and updated on the basis of the input vectors (i.e. samples) for a certain number of times (called training epochs). Both the number of neurons and the number of epochs used to train the map must be defined by the user. Kohonen maps can be trained by means of sequential or batch training algorithms.
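Because the map is a toroidal space, topological distances wrap around the edges of the grid. Below is a minimal sketch, assuming square rings of neighbours (Chebyshev distance); the function name is hypothetical:

```python
def toroidal_distance(r, s, N):
    """Topological distance between neurons r and s, given as
    (row, col) pairs, on an N x N toroidal (wrap-around) grid."""
    dr = abs(r[0] - s[0])
    dc = abs(r[1] - s[1])
    dr = min(dr, N - dr)  # wrap vertically
    dc = min(dc, N - dc)  # wrap horizontally
    # Chebyshev distance, so neurons at distance 1 form the first
    # square ring around the winning neuron.
    return max(dr, dc)
```

On a 5x5 torus, for example, the corner neurons (0, 0) and (4, 4) are direct neighbours.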

When the *sequential training* is adopted, in each training step samples are presented to the network one at a time, and the weights are updated on the basis of the winning neuron. For each sample (**x**_{i}), the most similar neuron (i.e. the winning neuron) is selected on the basis of the Euclidean distance. Then, the weights of the r-th neuron (**w**_{r}) are changed as a function of the difference between their values and the values of the sample; this correction (Δ**w**_{r}) is scaled according to the topological distance from the winning neuron (*d*_{r}):

Δ**w**_{r} = *η* · [1 − *d*_{r} / (*d*_{max} + 1)] · (**x**_{i} − **w**_{r})

where *η* is the learning rate and *d*_{max} the size of the considered neighbourhood, which decreases during the training phase. The topological distance *d*_{r} is defined as the number of neurons between the considered neuron *r* and the winning neuron. The learning rate *η* changes during the training phase, as follows:

*η* = (*η*^{start} − *η*^{final}) · (1 − *t* / *t*_{tot}) + *η*^{final}

where *t* is the number of the current training epoch, *t*_{tot} is the total number of training epochs, and *η*^{start} and *η*^{final} are the learning rates at the beginning and at the end of the training, respectively.
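The sequential update described above can be sketched as follows. This is a minimal Python illustration, not the toolbox code: it assumes the common triangular neighbourhood scaling η·(1 − d_r/(d_max + 1)) and a linear decrease of the learning rate between η_start and η_final, whose default values are placeholders:

```python
def learning_rate(t, t_tot, eta_start=0.5, eta_final=0.01):
    # Linear decrease of the learning rate over the training epochs.
    return (eta_start - eta_final) * (1.0 - t / t_tot) + eta_final

def sequential_step(weights, x, winner, t, t_tot, d_max, dist):
    """One sequential training step: every neuron r within d_max of
    the winning neuron is moved towards sample x, with a correction
    scaled by the topological distance d_r.

    weights: dict mapping neuron id -> list of weights
    dist(r, winner): topological distance d_r between two neurons
    """
    eta = learning_rate(t, t_tot)
    for r, w in weights.items():
        d_r = dist(r, winner)
        if d_r <= d_max:
            scale = eta * (1.0 - d_r / (d_max + 1.0))
            for j in range(len(w)):
                w[j] += scale * (x[j] - w[j])  # the correction delta w_r
    return weights
```

A single step with one neuron and one variable moves the weight halfway towards the sample when the effective learning rate is 0.5.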

When the *batch training* is used, the whole set of samples is presented to the network and the winning neurons are found; after this, the map weights are updated with the effect of all the samples:

**w**_{r} = ( Σ_{i=1}^{I} *u*_{i} · **x**_{i} ) / ( Σ_{i=1}^{I} *u*_{i} )

where **w**_{r} are the updated weights of the r-th neuron, **x**_{i} is the i-th sample, *I* is the total number of samples and *u*_{i} is the weighting factor of the winning neuron related to sample *i* with respect to neuron *r*:

*u*_{i} = *η* · [1 − *d*_{r} / (*d*_{max} + 1)]

where *η*, *d*_{max} and *d*_{r} are defined as before.
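A batch pass can be sketched as a weighted average of all samples per neuron. Again, this is a minimal illustration under the same assumed neighbourhood weighting, with hypothetical names:

```python
def batch_pass(samples, winners, neurons, d_max, eta, dist):
    """One batch training pass: each neuron's new weight vector is
    the average of all I samples, weighted by the factors u_i, which
    depend on the topological distance between neuron r and the
    winning neuron of sample i."""
    new_weights = {}
    for r in neurons:
        num = [0.0] * len(samples[0])
        den = 0.0
        for x, win in zip(samples, winners):
            d_r = dist(r, win)
            if d_r > d_max:
                continue  # outside the neighbourhood: u_i = 0
            u = eta * (1.0 - d_r / (d_max + 1.0))  # weighting factor u_i
            den += u
            for j in range(len(x)):
                num[j] += u * x[j]
        if den > 0.0:
            new_weights[r] = [v / den for v in num]
    return new_weights
```

Unlike the sequential scheme, the order in which samples are presented has no effect on the result of a batch pass.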

At the end of the network training, samples are placed in the most similar neurons of the Kohonen map; in this way, the data structure can be visualised, and the role of the experimental variables in defining the data structure can be elucidated by looking at the Kohonen weights.

The **Kohonen and CP-ANN toolbox** builds Kohonen maps in the same way as described in the following paper:

Zupan J, Novic M, Ruisánchez I (**1997**) Kohonen and counterpropagation artificial neural networks in analytical chemistry. *Chemometrics and Intelligent Laboratory Systems*, **38**, 1-23.

In order to use Kohonen maps, read how to build them by means of the **Kohonen and CP-ANN toolbox**.


Counterpropagation Artificial Neural Networks

Counterpropagation Artificial Neural Networks (CP-ANNs) are very similar to Kohonen Maps and are essentially based on the Kohonen approach, but combine characteristics of both supervised and unsupervised learning, i.e. CP-ANNs can be used to build both regression and classification models. The CP-ANNs of the **Kohonen and CP-ANN toolbox** build just classification models, where classification consists in finding a mathematical model able to recognize the membership of each object (sample) to its proper class on the basis of a series of measurements (the classes must be defined a priori). To do so, an output layer is added to the Kohonen ANN.

Counterpropagation Artificial Neural Networks can be considered as an extension of Kohonen maps. A CP-ANN consists of two layers, a Kohonen layer and an output layer (also called the Grossberg layer). When dealing with supervised classification, the class vector is unfolded into a matrix **C** with *I* rows and *G* columns (the unfolded class information), where *I* is the number of samples and *G* the total number of classes; each entry *c*_{ig} of **C** represents the membership of the i-th object to the g-th class, expressed with a binary code (0 or 1). Then, the weights of the r-th neuron in the output layer (**y**_{r}) are updated in a supervised manner on the basis of the winning neuron selected in the Kohonen layer. Considering the class of each sample *i*, the update is calculated as follows:

Δ**y**_{r} = *η* · [1 − *d*_{r} / (*d*_{max} + 1)] · (*c*_{i} − **y**_{r})

where *d*_{r} is the topological distance between the considered neuron *r* and the winning neuron selected in the Kohonen layer; *c*_{i} is the i-th row of the unfolded class matrix **C**, that is, a G-dimensional binary vector representing the class membership of the i-th sample.
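Unfolding the class vector into the binary matrix **C** is straightforward; a short sketch, assuming integer class labels 1..G:

```python
def unfold_classes(labels, G):
    """Unfold a class vector (integer labels 1..G) into the I x G
    binary matrix C, where c_ig = 1 if sample i belongs to class g."""
    return [[1 if label == g else 0 for g in range(1, G + 1)]
            for label in labels]
```

Each output-layer neuron then carries G weights, which are pulled towards these binary rows during training.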

At the end of the network training, each neuron of the Kohonen layer can be assigned to a class on the basis of the output weights and all the samples placed in that neuron are automatically assigned to the corresponding class. As a consequence, CP-ANNs are also able to recognize samples belonging to none of the class spaces. This happens when samples are placed in neurons whose output weights are similar, that is, the neuron cannot be assigned to a specific class.

The **Kohonen and CP-ANN toolbox** builds CP-ANNs in the same way as described in the following paper:

Zupan J, Novic M, Ruisánchez I (**1997**) Kohonen and counterpropagation artificial neural networks in analytical chemistry. *Chemometrics and Intelligent Laboratory Systems*, **38**, 1-23.

In order to build classification models by means of CP-ANNs, read how to do that with the **Kohonen and CP-ANN toolbox**.


Supervised Kohonen networks (SKN)

Supervised Kohonen networks (SKNs) are supervised methods for building classification models. In SKNs, the input map and the output map are 'glued' together, forming a combined input-output map that is updated according to the Kohonen map training scheme. Each input vector X (x1, x2, …, xp) and its corresponding output vector Y (y1, y2, …) are linked together to serve as input for the shared Kohonen network. In order to achieve a model with good predictive performance, the input and output variables in the training set should be scaled properly; a scaling coefficient (a scalar) for the output vector Y is used here to tune the influence of the output variables on building the classification model.
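The 'gluing' of input and output vectors can be sketched as a simple concatenation, with the scalar coefficient tuning the weight of the output block; names are illustrative:

```python
def skn_input(x, y, scale_y=1.0):
    """Combined input-output vector for a supervised Kohonen network:
    x and the scaled output vector y are concatenated and fed to the
    shared Kohonen map. scale_y tunes how strongly the output
    variables influence the model."""
    return list(x) + [scale_y * v for v in y]
```

With scale_y close to 0 the network behaves like an unsupervised Kohonen map; larger values let the class information drive the selection of the winning neuron.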

A useful paper on Supervised Kohonen networks:

Melssen W, Wehrens R, Buydens L (**2006**) Supervised Kohonen networks for classification problems. *Chemometrics and Intelligent Laboratory Systems*, **83**, 99-113.

The **Kohonen and CP-ANN toolbox** builds Supervised Kohonen networks in the same way as described in this paper. In order to build classification models by means of Supervised Kohonen networks, read how to do that with the **Kohonen and CP-ANN toolbox**.


XY-fused networks (XYF)

XY-fused networks (XYFs) are supervised methods for building classification models. In XYFs, the similarity of the input vector to the input map (Sx) and the similarity of the output vector to the output map (Sy) are calculated separately and then fused together to form a fused similarity (Sfused), which is then used to find the winning neuron. The influence of Sx on Sfused decreases linearly over the training epochs, while the effect of Sy on Sfused increases linearly. Accordingly, at the initial stage of the training, the similarity between the input objects X and the neurons in the input map dominates the generation of the top map (the determination of the winner); at the final stage of training, the similarity between the output vector and the output map controls the top map generation. In this way, both the input and the output similarities contribute to the training of the network.
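The fusion of the two similarities can be sketched as a linear blend whose balance shifts with the epoch; the linear schedule below is an assumption consistent with the description above, and the names are hypothetical:

```python
def fused_similarity(s_x, s_y, t, t_tot):
    """Fused similarity for an XY-fused network: the weight of the
    input-map similarity S_x decreases linearly with the epoch t,
    while the weight of the output-map similarity S_y grows
    accordingly. The winning neuron maximises this fused value."""
    alpha = 1.0 - t / t_tot  # influence of S_x at epoch t
    return alpha * s_x + (1.0 - alpha) * s_y
```

At the first epoch the winner is chosen purely on Sx; by the last epoch it is chosen purely on Sy.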

A useful paper on XY-fused networks:

Melssen W, Wehrens R, Buydens L (**2006**) Supervised Kohonen networks for classification problems. *Chemometrics and Intelligent Laboratory Systems*, **83**, 99-113.

The **Kohonen and CP-ANN toolbox** builds XY-fused networks in the same way as described in this paper. In order to build classification models by means of XY-fused networks, read how to do that with the **Kohonen and CP-ANN toolbox**.
