Self Organising Maps: An Interesting Tool for Exploratory Data Analysis
Haadvitha Nimmagadda1*, Sindhu Sukhavasi2, S. S. Babu3
1Dept of EEE, RVR and JCCE Engineering College, Guntur, Andhra Pradesh.
2Dept of ECE, Nagarjuna University, Guntur, Andhra Pradesh.
3Bapatla College, Bapatla, Andhra Pradesh.
*Corresponding Author: haadvitha.nimmagadda@yahoo.com
ABSTRACT:
The Self-Organizing Map (SOM) is an unsupervised neural network based on competitive learning. It projects high-dimensional input data onto a low-dimensional (usually two-dimensional) space. Because it preserves the neighborhood relations of the input data, the SOM is a topology-preserving technique. A topographic map is a two-dimensional, nonlinear approximation of a potentially high-dimensional data manifold, which makes it an appealing instrument for visualizing and exploring high-dimensional data. The SOM is the most widely used topographic map algorithm, and it has led to thousands of applications in very diverse areas. In this study, we introduce the SOM algorithm, discuss its properties and applications, and also discuss some of its extensions and new types of topographic map formation, such as the ones that can be used for processing categorical data, time series and tree-structured data.
KEY WORDS: Self-Organizing Map, SOM, SOM ALGORITHM, Topology.
INTRODUCTION:
The Self-Organizing Map (SOM) projects high-dimensional input data onto a low-dimensional (usually two-dimensional) space. Because it preserves the neighborhood relations of the input data, the SOM is a topology-preserving technique. Nowadays, the SOM is often used as a statistical tool for multivariate analysis, because it is both a projection method that maps high-dimensional data to a low-dimensional space and a clustering and classification method that orders similar data patterns onto neighboring SOM units.1, 2
One of the most prominent features of the mammalian brain is the topographical organization of its sensory cortex: neighboring nerve cells (neurons) can be driven by stimuli originating from neighboring positions in the sensory input space, and neighboring neurons in a given brain area project to neighboring neurons in the next area. In other words, the connections establish a so-called neighborhood-preserving or topology-preserving map, or topographic map for short.
In the visual cortex, we call this a retinotopic map; in the somatosensory cortex, a somatotopic map (a map of the body surface); and in the auditory cortex, a tonotopic map (of the spectrum of possible sounds).
The study of topographic map formation started with basically two types of self-organizing processes, gradient-based learning and competitive learning, and two types of network architectures. In the first architecture, commonly referred to as the Willshaw-von der Malsburg model, there are two sets of neurons, arranged in two (one- or two-dimensional) layers or lattices (Fig. 1.1A). Topographic map formation is concerned with learning a mapping for which neighboring neurons in the input lattice are connected to neighboring neurons in the output lattice.3, 4
The second architecture has been studied far more widely. Here the inputs are continuously valued, taken from the input space Rd, or from a data manifold V ⊆ Rd, which need not be rectangular or have the same dimensionality as the lattice to which it projects (Fig. 1.1B). To every neuron i of the lattice A corresponds a reference position in the input space, called the weight vector wi = [wij] ∈ Rd. All neurons receive the same input vector v = [v1, ..., vd] ∈ V. Topographic map formation is concerned with learning a map ΨA of the data manifold V (grey shaded area in Fig. 1.2), in such a way that neighboring lattice neurons, i, j, with lattice positions ri, rj, code for neighboring positions, wi, wj, in the input space (cf. the inverse mapping, Ψ). The forward mapping, F, from the input space to the lattice, is not necessarily topology-preserving (neighboring weights do not necessarily correspond to neighboring lattice neurons), even after learning the map, due to a possible mismatch in the dimensionalities of the input space and the lattice (see e.g. Fig. 1.3). In practice, the map is represented in the input space in terms of neuron weights that are connected by straight lines if the corresponding neurons are nearest neighbors in the lattice (e.g., see the left panel of Fig. 1.2 or Fig. 1.3). When the map is topology preserving, it can be used for visualizing the data distribution by projecting the original data points onto the map. The advantage of having a flexible map, compared to e.g. a plane specified by principal components analysis (PCA), is demonstrated in Fig. 1.4: we observe that the three classes are better separated with a topographic map than with PCA. The most popular learning algorithm for this architecture is the Self-Organizing Map (SOM), whence this architecture is often referred to as Kohonen's model.5, 6
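To make the second architecture concrete, the following minimal sketch in Python (our own illustration, not code from the SOM literature; the function name and the uniform initialization are assumptions) sets up a rectangular lattice A in which each neuron i has a lattice position ri and a weight vector wi in the input space. The later sketches in this study reuse this representation.

```python
import numpy as np

def make_lattice(n_rows, n_cols, input_dim, rng=None):
    """Create a rectangular SOM lattice.

    Returns
    -------
    positions : (N, 2) array of lattice coordinates r_i (row, col).
    weights   : (N, input_dim) array of weight vectors w_i in the input
                space, here initialized uniformly in [-1, 1]^d.
    """
    rng = np.random.default_rng() if rng is None else rng
    positions = np.array([(r, c) for r in range(n_rows)
                          for c in range(n_cols)], dtype=float)
    weights = rng.uniform(-1.0, 1.0, size=(n_rows * n_cols, input_dim))
    return positions, weights
```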
Fig. 1.1 (A) Willshaw-von der Malsburg model.
Two isomorphic, rectangular lattices of neurons are shown: one represents the input layer and the other the output layer. Neurons are represented by circles: filled circles denote active neurons (“winning” neurons); open circles denote inactive neurons. As a result of the weighted connections from the input to the output layer, the output neurons receive different inputs from the input layer. Two input neurons are labeled (i, j) as well as their corresponding output layer neurons (i′, j′). Neurons i and i′ are the only active neurons in their respective layers. (B) Kohonen model. The common input all neurons receive is directly represented in the input space, v ϵ V ⊆ Rd . The “winning” neuron is labeled as i*: its weight (vector) is the one that best matches the current input (vector).
Fig. 1.2 Topographic mapping in the Kohonen architecture. In the left panel, the topology-preserving map ΨA of the data manifold V ⊆ Rd (grey shaded area) is shown. The neuron weights wi, wj are connected by a straight line since the corresponding neurons i, j in the lattice A (right panel), with lattice coordinates ri, rj, are nearest neighbors. The forward mapping F is from the input space to the lattice; the backward mapping Φ is from the lattice to the input space. The learning algorithm tries to make neighboring lattice neurons, i, j, code for neighboring positions, wi, wj, in the input space.
Fig. 1.3 Example of a one-dimensional lattice consisting of four neurons i, j, k, l in a two-dimensional rectangular space. The distance between the weight vectors of neurons i and j, dij, is larger than that between the weight vectors of neurons i and l, dil. This means that, at least in this example, neighboring neuron weights do not necessarily correspond to neighboring neurons in the lattice.
Fig. 1.4 Oil flow data set visualized using PCA (left panel) and a topographic map (right panel).
The latter was obtained with the GTM algorithm. Since the GTM performs a nonlinear mapping, it is better able to separate the three types of flow configurations: laminar (red crosses), homogeneous (blue plusses) and annular (green circles).
1. SELF ORGANISING MAPS ALGORITHM:
The SOM algorithm distinguishes two stages: the competitive stage and the cooperative stage. In the first stage, the best matching neuron is selected, i.e., the “winner”, and in the second stage, the weights of the winner are adapted as well as those of its immediate lattice neighbors. We consider the minimum Euclidean distance version of the SOM algorithm.7
Competitive stage:
For each input v ∈ V, we select the neuron whose weight vector has the smallest Euclidean distance to the input (“Winner-Takes-All”, WTA), which we call the “winner”:

i∗ = arg min_i ‖wi − v‖.
By virtue of the minimum Euclidean distance rule, we obtain a Voronoi tessellation of the input space: to each neuron corresponds a region in the input space, the boundaries of which are perpendicular bisector planes of lines joining pairs of weight vectors (the grey shaded area in Fig. 1.5 is the Voronoi region of neuron j). Remember that the neuron weights are connected by straight lines (links or edges): they indicate which neurons are nearest neighbors in the lattice. These links are important for verifying whether the map is topology preserving.
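As an illustration of the competitive stage, the following sketch (our own, assuming the lattice representation introduced above) selects the winner by the minimum Euclidean distance rule. Each neuron's Voronoi region is then simply the set of inputs for which it would be returned as the winner.

```python
import numpy as np

def find_winner(weights, v):
    """Winner-Takes-All: return the index of the neuron whose weight
    vector lies closest (in Euclidean distance) to the input v."""
    distances = np.linalg.norm(weights - v, axis=1)
    return int(np.argmin(distances))
```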
Fig. 1.5 Definition of quantization region in the Self-Organizing Map (SOM).
Portion of a lattice (thick lines) plotted in terms of the weight vectors of neurons a, …, k in the two-dimensional input space, i.e., wa, …, wk.
Cooperative stage:
It is now crucial to the formation of topographically-ordered maps that the neuron weights are not modified independently of each other, but as topologically-related subsets on which similar kinds of weight updates are performed. During learning, not only the weight vector of the winning neuron is updated, but also those of its lattice neighbors, which thus end up responding to similar inputs. This is achieved with the neighborhood function, which is centered at the winning neuron and decreases with the lattice distance to the winning neuron.8
The weight update rule in incremental mode (i.e., with an update after each input presentation) is given by:9

Δwi = η Λ(i, i∗, σΛ(t)) (v − wi),   ∀i ∈ A,

with Λ the neighborhood function, i.e., a scalar-valued function of the lattice coordinates of neurons i and i∗, ri and ri∗, mostly a Gaussian:

Λ(i, i∗) = exp(−‖ri − ri∗‖² / 2σΛ²),

with range σΛ (i.e., the standard deviation). (We further drop the parameter σΛ(t) from the neighborhood function to simplify our notation.)
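The cooperative stage can be sketched as follows (again our own illustration; eta stands for the learning rate η and sigma for the range σΛ):

```python
import numpy as np

def som_update(weights, positions, v, winner, eta, sigma):
    """One incremental SOM update: every weight vector is moved towards
    the input v, scaled by a Gaussian neighborhood function centred on
    the winner in lattice space."""
    # Lattice distances ||r_i - r_i*|| between each neuron and the winner.
    lattice_dist = np.linalg.norm(positions - positions[winner], axis=1)
    # Gaussian neighborhood function Lambda(i, i*).
    neighborhood = np.exp(-lattice_dist**2 / (2.0 * sigma**2))
    # Weight update: Delta w_i = eta * Lambda(i, i*) * (v - w_i).
    weights += eta * neighborhood[:, None] * (v - weights)
    return weights
```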
Fig. 1.6 The effect of the neighborhood function in the SOM algorithm. Starting from a perfect arrangement of the weights of a square lattice (full lines), the weights nearest to the current input (indicated with the cross) receive the largest updates, those further away smaller updates, resulting in the updated lattice (dashed lines).
The positions ri are usually taken to be the nodes of a discrete lattice with a regular topology, usually a two-dimensional square or rectangular lattice. An example of the effect of the neighborhood function on the weight updates is shown in Fig. 1.6 for a 4×4 lattice. The parameter σΛ, and usually also the learning rate η, are gradually decreased over time. When the neighborhood range vanishes, the previous learning rule reverts to standard unsupervised competitive learning (note that the latter is unable to form topology-preserving maps, pointing to the importance of the neighborhood function).
As an example, we train a 10×10 square lattice with the SOM algorithm on a uniform distribution over the square [−1,1]², using a Gaussian neighborhood function of which the range σΛ(t) is decreased as follows:

σΛ(t) = σΛ0 exp(−2 σΛ0 t / tmax),
with t the present time step, tmax the maximum number of time steps, and σΛ0 the range spanned by the neighborhood function at t = 0. We take tmax = 100,000, σΛ0 = 5 and the learning rate η = 0.01. The initial weights (i.e., for t = 0) are chosen randomly from the same square distribution. Snapshots of the evolution of the lattice are shown in Fig. 1.7. We observe that the lattice is initially tangled, then contracts, unfolds, and expands so as to span the input distribution. This two-phased convergence process is an important property of the SOM algorithm and it has been thoroughly studied from a mathematical viewpoint in the following terms: 1) the topographic ordering of the weights and, thus, the formation of topology-preserving mappings, and 2) the convergence of these weights (energy function minimization). Both topics will be discussed next. Finally, the astute reader will have noticed that at the end of the learning phase the lattice is smooth, but then suddenly becomes more erratic. This is an example of a phase transition, and it has been widely studied for the SOM algorithm.10
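The uniform-square example above can be reproduced, up to random initialization, with a short training loop that combines the two stages. This is a sketch using the helper functions introduced earlier and the stated parameter values (σΛ0 = 5, η = 0.01, tmax = 100,000); plotting the weights, with lines between lattice neighbors, at intermediate values of t should reproduce the contraction and unfolding behaviour illustrated in Fig. 1.7.

```python
import numpy as np

def train_som(n_rows=10, n_cols=10, t_max=100_000, sigma0=5.0, eta=0.01, seed=0):
    """Train a square SOM on the uniform distribution over [-1, 1]^2."""
    rng = np.random.default_rng(seed)
    positions, weights = make_lattice(n_rows, n_cols, input_dim=2, rng=rng)
    for t in range(t_max):
        v = rng.uniform(-1.0, 1.0, size=2)                   # random input sample
        sigma = sigma0 * np.exp(-2.0 * sigma0 * t / t_max)   # shrinking neighborhood range
        winner = find_winner(weights, v)                     # competitive stage
        weights = som_update(weights, positions, v, winner,
                             eta, sigma)                     # cooperative stage
    return positions, weights
```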
Finally, since the speed of convergence depends on the learning rate, a version without one has also been developed, called the batch map; it leads to a faster convergence of the map.
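As a sketch of this idea (our own illustration, assuming a homoscedastic Gaussian neighborhood and the lattice representation used above), one batch-map step replaces every weight vector by a neighborhood-weighted average of all inputs, with the averaging weights determined by each input's winner; the step is iterated, together with a decreasing neighborhood range, until convergence.

```python
import numpy as np

def batch_som_step(weights, positions, data, sigma):
    """One batch-map step (no learning rate): w_i = sum_t h_it v_t / sum_t h_it,
    where h_it is the Gaussian neighborhood between neuron i and the winner of v_t."""
    # Winner for every data point (competitive stage, vectorized).
    dist = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    winners = np.argmin(dist, axis=1)
    # Neighborhood of every neuron around each data point's winner.
    lattice_dist = np.linalg.norm(positions[:, None, :] - positions[None, winners, :], axis=2)
    h = np.exp(-lattice_dist**2 / (2.0 * sigma**2))   # shape (n_neurons, n_data)
    # Fixed-point update of all weight vectors at once.
    return (h @ data) / h.sum(axis=1, keepdims=True)
```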
1.1. Topographic ordering:
In the example of Fig. 1.7, we have used a two-dimensional square lattice for mapping a two-dimensional uniform, square distribution. We can also use the same lattice for mapping a non-square distribution, so that there is a topological mismatch, for example, a circular or an L-shaped distribution (Fig. 1.8A, B). We use the same lattice and simulation set-up as before, but now we show only the final results. Consider first the circular distribution: the weight distribution is now somewhat nonuniform. For the L-shaped distribution, we see in addition that there are several neurons outside the support of the distribution, and some of them even have a zero (or very low) probability of being active: hence, they are often called “dead” units. It is hard to find a better solution for these neurons without clustering them near the inside corner of the L-shape. We can also explore the effect of a mismatch in lattice dimensionality. For example, we can develop a one-dimensional lattice (“chain”) in the same two-dimensional square distribution as before. (Note that it is now impossible to preserve all of the topology.) We see that the chain tries to fill the available space as much as possible (Fig. 1.8C): the resulting map approximates the so-called space-filling Peano curve.11
Fig. 1.7 Evolution of a 10×10 lattice with a rectangular topology as a function of time.
The outer squares outline the uniform input distribution. The values given below the squares represent time.
Fig. 1.8 Mapping of a 10×10 neuron lattice onto a circular (A) and an L-shaped (B) uniform distribution, and of a 40-neuron one-dimensional lattice onto a square uniform distribution (C).
1.2. Topological defects:
As said before, the neighborhood function plays an important role in producing topographically-ordered lattices; however, this does not imply that we are guaranteed to obtain one. Indeed, if we decrease the neighborhood range too fast, topological defects can arise.12, 13 Such defects are difficult, if not impossible, to iron out once the neighborhood range has vanished. In the case of a chain, we can obtain a so-called kink (Fig. 1.9).
Consider, as a simulation example, a rectangular lattice of N = 24×24 neurons, with the input samples taken randomly from a two-dimensional uniform distribution p(v) within the square [0,1]². The initial weight vectors are randomly drawn from this distribution. We now perform incremental learning and decrease the neighborhood range according to the same exponential schedule as before,
Fig. 1.9 Example of a topological defect (“kink”) in a chain consisting of four neurons i, j, k, l in a two dimensional rectangular space.
but now with tmax = 275,000. For the learning rate, we take η = 0.015. The evolution is shown in Fig. 1.10. The neighborhood range was decreased too rapidly: the lattice is twisted and, even if we were to continue the simulation with zero neighborhood range, the twist would not be removed.
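Such defects can also be quantified. One commonly used diagnostic is the topographic error, the fraction of inputs for which the best- and second-best-matching neurons are not adjacent on the lattice; the sketch below (our own implementation, using the lattice representation from earlier) computes it.

```python
import numpy as np

def topographic_error(weights, positions, data, neighbor_radius=1.0):
    """Fraction of data points whose two best-matching neurons are not
    lattice neighbors (larger values suggest topological defects)."""
    errors = 0
    for v in data:
        dist = np.linalg.norm(weights - v, axis=1)
        best, second = np.argsort(dist)[:2]
        # Adjacent on a rectangular lattice: lattice distance <= neighbor_radius.
        if np.linalg.norm(positions[best] - positions[second]) > neighbor_radius:
            errors += 1
    return errors / len(data)
```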
2. Recurrent Topographic Maps:
Time series:
Many data sources such as speech have a temporal characteristic (e.g., a correlation structure) that cannot be sufficiently captured when ignoring the order in which the data points arrive, as in the original SOM algorithm. Several self-organizing map algorithms have been developed for dealing with sequential data, such as the ones using:
a. Fixed-length windows, e.g., the time-delayed SOM.14
b. Specific sequence metrics.15
c. Statistical modeling incorporating appropriate generative models for sequences.16
d. Mapping of temporal dependencies to spatial correlation, e.g., as in traveling wave signals or potentially trained, temporally activated lateral interactions.17
e. Recurrent processing of time signals and recurrent winning neuron computation based on the current input and the previous map activation, such as with the Temporal Kohonen Map (TKM)18, the recurrent SOM (RSOM)19, the recursive SOM (RecSOM)20, the SOM for structured data (SOMSD)21, and the Merge SOM (MSOM)22.
Several of these algorithms have been proposed recently, which shows the increased interest in representing time series with topographic maps. The recurrent algorithms essentially differ in the context, i.e., the way by which sequences are internally represented.
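As an illustration of how such a context can enter the winner computation, the sketch below follows the spirit of the Temporal Kohonen Map: each neuron leakily integrates its (negative, halved, squared) quantization error over the sequence, and the winner is the neuron with the largest integrated activity. The decay parameter d and the zero initialization of the activities are our own illustrative choices, not prescriptions from the cited papers. Feeding a sequence of inputs through this rule makes the winner depend on the recent history rather than on the current sample alone, which is precisely the context dependence the recurrent algorithms above introduce.

```python
import numpy as np

def tkm_winner(weights, v, activity, d=0.5):
    """Leaky-integrator (Temporal-Kohonen-Map-style) winner selection.

    activity : per-neuron integrator carried over from the previous time
               step (use zeros before the first input of a sequence).
    Returns the updated activity and the index of the winning neuron.
    """
    sq_error = np.sum((weights - v) ** 2, axis=1)
    activity = d * activity - 0.5 * sq_error   # leaky integration of -0.5 * error
    return activity, int(np.argmax(activity))
```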
Fig. 1.10 Example of the formation of a topological defect called “twist” in a 24×24 lattice. The evolution is shown for different time instances (values below the squares).
Tree structures:
Binary trees, and also trees with limited fan-out k, have been successfully processed with the SOMSD and the MSOM by extending the neuron's single context vector to several context vectors (one for each subtree). Starting from the leaves of a tree, the integrated distance ID of a tree with a given label and the k subtrees can be determined, and the context defined. The usual learning can then be applied to the weights and contexts. As a result of learning, a topographic mapping of trees according to their structure and labels arises. Up to now, only preliminary results on the capabilities of these algorithms for tree structures have been obtained.
3. Kernel Topographic Maps:
Rather than developing topographic maps with disjoint and uniform activation regions, such as in the case of the SOM algorithm (Fig. 1.5) and its adapted versions, algorithms have been introduced that can accommodate neurons with overlapping activation regions, usually in the form of kernel functions, such as Gaussians (Fig. 1.11).
For these kernel-based topographic maps, or kernel topographic maps, as they are called (they are also sometimes called probabilistic topographic maps since they model the input density with a kernel mixture), several learning principles have been proposed.23 One motivation to use kernels is to improve, besides the biological relevance, the density estimation properties of topographic maps. In this way, we can combine the unique visualization properties of topographic maps with an improved modeling of clusters in the data. Usually, homoscedastic (equal-variance) Gaussian kernels are used, but also heteroscedastic (differing variances) Gaussian kernels and other kernel types have been adopted. In the next sections, we will review the kernel-based topographic map formation algorithms and mention a number of applications. The diversity in algorithms reflects the differences in strategies behind them. As a result, these algorithms have their specific strengths (and weaknesses) and, thus, their own application types.
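For instance, with homoscedastic Gaussian kernels, a trained kernel topographic map can be read as a density model of the input. The sketch below (our own illustration; the equal mixture weights and the single shared range sigma are simplifying assumptions) evaluates such a kernel mixture at an input v.

```python
import numpy as np

def kernel_map_density(weights, v, sigma):
    """Estimate p(v) as an equal-weight mixture of isotropic Gaussian
    kernels, one centred on each neuron weight vector."""
    d = weights.shape[1]
    sq_dist = np.sum((weights - v) ** 2, axis=1)
    norm = (2.0 * np.pi * sigma**2) ** (d / 2.0)
    return float(np.mean(np.exp(-sq_dist / (2.0 * sigma**2)) / norm))
```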
Fig. 1.11 Kernel-based topographic map.
Example of a 2×2 map (cf. the rectangle with thick lines in v-space) for which each neuron has a Gaussian kernel as output function. Normally, a more condensed representation is used in which, for each neuron, a circle is drawn with the neuron weight vector as its center and the kernel range as its radius.
4. Alternatives to SOMs:
• The generative topographic map (GTM) is a potential alternative to SOMs. In the sense that a GTM explicitly requires a smooth and continuous mapping from the input space to the map space, it is topology preserving. In a practical sense, however, this measure of topological preservation is lacking.24
• The time adaptive self-organizing map (TASOM) network is an extension of the basic SOM. The TASOM employs adaptive learning rates and neighborhood functions. It also includes a scaling parameter to make the network invariant to scaling, translation and rotation of the input space. The TASOM and its variants have been used in several applications including adaptive clustering, multilevel thresholding, input space approximation, and active contour modeling.25 Moreover, a binary tree TASOM (BTASOM), resembling a natural binary tree with nodes composed of TASOM networks, has been proposed, in which the number of levels and the number of nodes adapt to the environment.26
• The growing self-organizing map (GSOM) is a growing variant of the self-organizing map. The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary based on a heuristic. By using a value called the spread factor, the data analyst has the ability to control the growth of the GSOM.
• The elastic maps approach borrows from spline interpolation the idea of minimizing an elastic energy. During learning, it minimizes the sum of the quadratic bending and stretching energies and the least-squares approximation error.
6. Extensions of SOM:
Although the original SOM algorithm has all the necessary ingredients for developing topographic maps, many adapted versions have emerged over the years. For some of these, the underlying motivation was to improve the original algorithm, or to extend its range of applications, while for others the SOM algorithm has served as a source of inspiration for developing new ways to perform topographic map formation.
One motivation was spurred by the need to develop a learning rule that performs gradient descent on an energy function. Another was to avoid the occurrence of dead units, since they do not contribute to the representation of the input space (or the data manifold). Several researchers were inspired by Grossberg's idea27 of adding a “conscience” to frequently winning neurons, so that they feel “guilty” and reduce their winning rates (a minimal sketch of such a mechanism is given after this list).
· A different strategy is to apply a competitive learning rule that minimizes the mean absolute error (MAE).
· Another evolution is the growing topographic map algorithms. In contrast with the original SOM algorithm, these growing map variants have a dynamically-defined topology, and they are believed to better capture the fine structure of the input distribution (one such variant, the GSOM, was discussed above).
· Many input sources have a temporal characteristic, which is not captured by the original SOM algorithm. Several algorithms have been developed based on a recurrent processing of time signals and a recurrent winning neuron computation. Tree-structured data can also be represented with such topographic maps. We discussed these recurrent topographic maps earlier in this study.
· Another important evolution is the kernel-based topographic map: rather than Voronoi regions, the neurons are equipped with overlapping activation regions, usually in the form of kernel functions, such as Gaussians (Fig. 1.11). Also for this case several algorithms have been developed, and we discussed a number of them in this study.
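As announced above, the following sketch illustrates the “conscience” idea in its simplest frequency-sensitive form (our own illustration; the bias constants are hypothetical and would need tuning): neurons that win often accumulate a handicap, so that under-used (“dead”) units eventually get a chance to win. Starting from win_freq = np.full(n, 1.0/n), the winner found this way replaces the plain WTA winner in the update rule.

```python
import numpy as np

def conscience_winner(weights, v, win_freq, bias_strength=10.0, freq_rate=1e-4):
    """Frequency-sensitive winner selection: frequently winning neurons
    are handicapped in proportion to their estimated winning frequency."""
    n = len(weights)
    sq_dist = np.sum((weights - v) ** 2, axis=1)
    bias = bias_strength * (win_freq - 1.0 / n)   # positive for over-used neurons
    winner = int(np.argmin(sq_dist + bias))
    # Update the running estimate of each neuron's winning frequency.
    one_hot = np.zeros(n)
    one_hot[winner] = 1.0
    win_freq = win_freq + freq_rate * (one_hot - win_freq)
    return winner, win_freq
```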
7. APPLICATIONS OF SOM:
1. Self-organizing map applications in meteorology.28, 29
a. Sea level pressure and geopotential height data
b. Air temperature, humidity, and wind data
c. Evaporation, precipitation and cloud data
2. Self-organizing map applications in oceanography.30, 31
a. Satellite ocean color and chlorophyll
b. In situ biological and geochemical data
c. Satellite sea surface temperature data
d. Satellite sea surface height data
e. Ocean current data from in situ observations and numerical models
f. Other oceanographic data, such as wind stress, sea floor shape, tsunami and salinity data.
3. Vector quantization.32
4. Regression.33, 34
5. Clustering.35
8. Future developments:
An expected development is to go beyond the limitation of the current kernel-based topographic maps that the inputs need to be vectors. In the area of structural pattern recognition, more powerful data structures can be processed, such as strings, trees and graphs. The SOM algorithm has already been extended towards strings and graphs, which include strings and trees as special cases. The integration of kernels for these new data types into kernel-based topographic maps is yet to be done, but it could turn out to be a promising evolution for biochemical applications, such as visualizing and clustering sets of structure-based molecule descriptions.
9. CONCLUSION:
In this study we have introduced the Self-Organizing Map (SOM) algorithm, discussed its properties, limitations and application types, and reviewed a number of extensions and other types of topographic map formation algorithms, such as the recurrent and the kernel topographic maps. We have also indicated how recent developments in topographic maps enable us to consider categorical data, time series and tree-structured data, widening the application field further towards micro-array data analysis, document analysis and retrieval, exploratory analysis of web navigation sequences, and the visualization of protein structures and long DNA sequences.
10. REFERENCES:
1. Kohonen, T. Self-Organization and Associative Memory, Springer-Verlag, New York, Berlin, Heidelberg (1988).
2. Kohonen, T. Self-Organizing Maps. Springer-Verlag, New York, Berlin, Heidelberg. 2001.
3. Van Hulle, M.M. Faithful representations and topographic maps: From distortion- to information-based self-organization. New York: Wiley. 2000.
4. Von der Malsburg, C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik. 14; 1973: 85-100.
5. Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern.43; 1982: 59-69.
6. Kohonen, T. Self-organization and associative memory. Heidelberg: Springer. 1984.
7. Kohonen, T. Self-organizing maps. Heidelberg: Springer. 1995.
8. Ahalt, S.C., Krishnamurthy, A.K., Chen, P., and Melton, D.E. Competitive learning algorithms for vector quantization. Neural Networks. 3; 1990: 277-290.
9. Alahakoon, D., Halgamuge, S.K., and Srinivasan, B. Dynamic Self Organising Maps with Controlled Growth for Knowledge Discovery. IEEE Transactions on Neural Networks (Special Issue on Knowledge Discovery and Data Mining). 11(3); 2000: 601-614.
10. Der, R., and Herrmann, M. Phase transitions in self-organizing feature maps. Proc. ICANN’93 (Amsterdam, The Netherlands), New York: Springer.1993: 597-600.
11. Kohonen, T. Self-organizing maps. Heidelberg: Springer. 1997.
12. Geszti, T. Physical models of neural networks. Singapore: World Scientific Press. 1990.
13. Heskes, T.M., and Kappen, B. Error potentials for self-organization. Proc. IEEE Int. Conf. on Neural Networks (San Francisco, 1993). 1993: 1219-1223.
14. Kangas, J. Time-delayed self-organizing maps. In Proc. IEEE/INNS Int. Joint Conf. on Neural Networks. 2; 1990: 331-336.
15. Somervuo, P.J. Online algorithm for the self-organizing map of symbol strings. Neural Networks. 17(8-9); 2004: 1231-1240.
16. Bishop, C.M., Hinton, G.E., and Strachan, I.G.D. In Proceedings IEE Fifth International Conference on Artificial Neural Networks, Cambridge (U.K.). 1997: 111-116.
17. Euliano, N.R., and Principe, J.C. A spatiotemporal memory based on SOMs with activity diffusion. In Kohonen Maps, E. Oja and S. Kaski (eds.), Elsevier. 1999: 253-266.
18. Chappell, G. and Taylor, J. The temporal Kohonen map. Neural Networks. 6; 1993: 441-445.
19. Koskela, T., Varsta, M., Heikkonen, J., and Kaski, K. Recurrent SOM with local linear models in time series prediction. In Proc. 6th European Symposium on Artificial Neural Networks (ESANN 1998). 1998: 167-172.
20. Voegtlin, T. Recursive self-organizing maps. Neural Networks. 15(8-9); 2002: 979-992.
21. Hagenbuchner, M., Sperduti, A., and Tsoi, A.C. A Self-Organizing Map for Adaptive Processing of Structured Data. IEEE Transactions on Neural Networks. 14(3); 2003: 491-505.
22. Strickert, M., and Hammer, B. Merge SOM for temporal data. Neurocomputing. 64; 2005: 39-72.
23. Van Hulle,M.M. Kernel-based topographic maps: Theory and applications. Encyclopedia of Computer Science and Engineering, Benjamin W. Wah (Ed.), in press. 2009.
24. Kaski, S. Data exploration using self-organizing maps. Acta Polytechnica Scandinavica, Mathematics, Computing and Management in Engineering. 82; 1997: 57.
25. Hamed Shah-Hosseini and Reza Safabakhsh. TASOM: A New Time Adaptive Self-Organizing Map, IEEE Transactions on Systems, Man, And Cybernetics—Part B: Cybernetics. Vol. 33, No. 2; 2003: 271-282.
26. Hamed Shah-Hosseini. Binary tree time adaptive self-organizing map. Neurocomputing. Vol. 74, No. 11; 2011: 1823-1839.
27. Grossberg, S. Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biol. Cybern. 23; 1976: 121-134.
28. Hewitson, B. C. and Crane, R. G. Neural Nets: Applications in Geography, Springer, New York. 1994.
29. Hewitson, B. C. and Crane, R. G. Self-organizing maps: applications to synoptic climatology. Climate Research.Vol. 22, No. 1; 2002: 13-26.
30. Richardson, A. J.; Risien, C. and Shillington, F. A. Using self-organizing maps to identify patterns in satellite imagery. Progress in Oceanography.Vol. 59, No. 2-3; 2003: 223- 239.
31. Kropp, J. and Klenke, T. Phenomenological pattern recognition in the dynamical structures of tidal sediments from the German Wadden Sea. Ecological Modelling. Vol. 103, No. 2-3; 1997: 151-170.
32. Gersho, A., and Gray, R.M. Vector quantization and signal compression. Boston, Dordrecht, London: Kluwer.1991.
33. Ritter, H., Martinetz, T., and Schulten, K. Neural computation and self-organizing maps: An introduction. Reading, MA: Addison-Wesley.1992.
34. Mulier, F., and Cherkassky, V. Self-organization as an iterative kernel smoothing process. Neural Computat.7; 1995:1165-1177.
35. Ritter, H., and Kohonen, T. Self-organizing semantic maps. Biological Cybernetics. 61; 1989: 241-254.
Received on 25.06.2012 Accepted on 30.07.2012