

Introduction to Neural Networks


Abstract
The coupling of computer science with theoretical bases such as nonlinear dynamics and chaos theory allows the creation of ‘intelligent’ agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs can reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on an individual basis rather than as average trends. These tools offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable by the traditional ‘cause and effect’ philosophy. Eur J Gastroenterol Hepatol 19:1046–1054
© 2007 Wolters Kluwer Health | Lippincott Williams & Wilkins.


Author Information: M Lavanya
Issue No: 3
Volume No: 6
Issue Publish Date: 05 Mar 2024
Issue Pages: 88-101


References
• of special interest
•• of outstanding interest
1 McCulloch WS, Pitts WH. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943; 5:115–133.
•• The first mathematical model of the logical functioning of the brain cortex (the formal neuron) is presented in this famous work by McCulloch and Pitts.
2 McClelland JL, Rumelhart DE, editors. Explorations in parallel distributed processing. Cambridge, Massachusetts: MIT Press; 1986.
•• This book is the historical reference text on the origins of neurocomputing, containing a comprehensive compilation of neural network theories and research.
3 Anderson JA, Rosenfeld E, editors. Neurocomputing: foundations of research. Cambridge, Massachusetts: MIT Press; 1988.
• An interesting book that collects the most influential works on the development of neural network theory.
4 Hebb DO. The organization of behavior. New York: Wiley; 1949.
•• This landmark book develops the concept of the ‘cell assembly’ and explains how the strengthening of synapses might be a mechanism of learning.
5 Marr D. Approaches to biological information processing. Science 1975; 190:875–876.
• In this article, Marr, writing about his theoretical studies on neural networks, expanded on these original hypotheses.
6 Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 1958; 65:386–408.
•• The first neural network that learns from its own errors is developed in this historical work.
7 Widrow B, Hoff ME. Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Show & Convention, Convention Record. 1960; part 4:96–104.
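The error-correction learning introduced in refs 6 and 7 can be sketched in a few lines of Python. This is an illustrative toy, not code from any of the cited works; the AND task, learning rate, and epoch count are invented for the example:

```python
# Illustrative toy: a Rosenblatt-style perceptron trained on logical AND.
# Weights change only when the prediction is wrong, i.e. the network
# "learns from its own errors".
def train_perceptron(samples, labels, lr=1, epochs=20):
    """Train a single threshold unit with the perceptron rule."""
    w = [0] * len(samples[0])
    b = 0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - predicted          # non-zero only on a mistake
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]                     # logical AND, linearly separable
w, b = train_perceptron(X, y)
# After training, predict(w, b, x) reproduces AND for every input pair.
```

Because AND is linearly separable, the rule is guaranteed to converge; a single perceptron cannot solve non-separable tasks such as XOR, which motivated the multilayer networks of ref 8.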
8 Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL, editors. Parallel distributed processing. Vol. I. Cambridge, Massachusetts: MIT Press; 1986. pp. 318–362.
9 Personnaz L, Guyon I, Dreyfus G. Collective computational properties of neural networks: new learning mechanisms. Phys Rev A 1986; 34: 4217–4228.
10 Gallant SI. Perceptron-based learning algorithms. IEEE Transactions on Neural Networks 1990; 1:179–192.
• An important paper that describes the main learning rules for training neural network models.
11 Wasserman PD. Neural computing: theory and practice. New York: Van Nostrand; 1989.
12 Aleksander I, Morton H. An introduction to neural computing. London: Chapman & Hall; 1990.
Two books containing systematic expositions of several neural network models.
13 Fahlman SE. An empirical study of learning speed in back-propagation networks. Technical report CMU-CS-88-162. Pittsburgh: Carnegie-Mellon University; 1988.
14 Le Cun Y. Generalization and network design strategies. In: Pfeifer R, Schreter Z, Fogelman-Soulie F, Steels L, editors. Connectionism in perspective. Amsterdam: North-Holland; 1989. pp. 143–156.
• The performance and possible generalizations of the back-propagation algorithm are described by Fahlman and Le Cun.
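The back-propagation algorithm of refs 8, 13, and 14 can be illustrated with a toy two-layer network. The 2-2-1 architecture, sigmoid units, starting weights, and learning rate below are invented for this sketch and do not come from the cited works:

```python
import math

# Illustrative toy: back-propagation on a tiny 2-2-1 sigmoid network.
# The error signal at the output is propagated backwards to compute
# gradients for the hidden-layer weights.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """Return hidden activations and network output for input pair x."""
    h = [sigmoid(w["w1"][j][0] * x[0] + w["w1"][j][1] * x[1] + w["b1"][j])
         for j in range(2)]
    o = sigmoid(w["w2"][0] * h[0] + w["w2"][1] * h[1] + w["b2"])
    return h, o

def train_step(w, x, target, lr=0.5):
    """One gradient-descent step on squared error, gradients via backprop."""
    h, o = forward(w, x)
    delta_o = (o - target) * o * (1 - o)           # output-layer error signal
    for j in range(2):
        delta_h = delta_o * w["w2"][j] * h[j] * (1 - h[j])  # propagated error
        w["w2"][j] -= lr * delta_o * h[j]
        w["w1"][j][0] -= lr * delta_h * x[0]
        w["w1"][j][1] -= lr * delta_h * x[1]
        w["b1"][j] -= lr * delta_h
    w["b2"] -= lr * delta_o

# Fixed asymmetric starting weights so the run is reproducible.
w = {"w1": [[0.5, -0.4], [0.3, 0.8]], "b1": [0.1, -0.2],
     "w2": [0.7, -0.6], "b2": 0.05}
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def total_error(w):
    return sum((forward(w, x)[1] - t) ** 2 for x, t in data)

before = total_error(w)
for _ in range(5000):
    for x, t in data:
        train_step(w, x, t)
after = total_error(w)     # training reduces the squared error on XOR
```

Unlike the single perceptron, this multilayer network can represent XOR, because the hidden units re-encode the inputs into a linearly separable form.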
15 Hinton GE. How neural networks learn from experience. Sci Am 1992; 267:144–151.
This article gives a brief and more accessible introduction to connectionism.
16 Matsumoto G. Neurocomputing. Neurons as microcomputers. Future Gener Comput Syst 1988; 4:39–51.
• The Matsumoto article is a concise and interesting review of neural networks.
17 NeuralWare. Neural computing. Pittsburgh, Pennsylvania: NeuralWare Inc.; 1993.
18 CLEMENTINE user manual. Integral Solutions Limited; 1997.
19 Von der Malsburg C. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 1973; 14:85–100.
20 Willshaw DJ, Von der Malsburg C. How patterned neural connections can be set up by self-organization. Proc R Soc Lond B 1976; 194:431–445.
• An early network model that performs self-organization is presented in these papers by Von der Malsburg and Willshaw.
21 Kohonen T. Self-organization and associative memory. Berlin-Heidelberg-New York: Springer; 1984.
22 Kohonen T. The self-organizing map. Proceedings of the IEEE 1990; 78:1464–1480.
•• The most well-known and simplest self-organizing network model was proposed by T. Kohonen.
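Kohonen's self-organizing map (refs 21, 22), in its simplest one-dimensional form, can be sketched as follows. The unit count, learning rate, and neighbourhood radius are illustrative choices, not values from the cited works:

```python
# Illustrative toy: a one-dimensional Kohonen self-organizing map.
# A line of units competes for each scalar input; the winning unit and
# its neighbours move toward the input, so nearby units come to respond
# to similar inputs (topological ordering).
def train_som(inputs, n_units=5, epochs=50, lr=0.3, radius=1):
    weights = [i / (n_units - 1) for i in range(n_units)]  # spread on [0, 1]
    for _ in range(epochs):
        for x in inputs:
            # competition: find the best-matching unit for this input
            winner = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # cooperation: winner and neighbours within the radius adapt
            for i in range(n_units):
                if abs(i - winner) <= radius:
                    weights[i] += lr * (x - weights[i])
    return weights

weights = train_som([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])
```

Because the initial weights are ordered and neighbouring units adapt together, the trained map stays topologically ordered: adjacent units end up representing adjacent regions of the input range.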
23 Carpenter GA, Grossberg S. The ART of adaptive pattern recognition by a self-organizing neural network. Computer 1988; 21:77–88. 

24 Carpenter GA, Grossberg S. A massively parallel architecture for a self-organizing neural pattern recognition machine. In: Carpenter GA, Grossberg S, editors. Pattern recognition by self-organizing neural networks. Cambridge, MA: MIT Press; 1991.
• These works by Grossberg and Carpenter are very interesting contributions on the competitive learning paradigm.
25 Dietterich TG. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput 1998; 10:1895–1923.
•• This article describes one of the most popular validation protocols, the 5 × 2 cross-validation.
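The 5 × 2 cross-validation protocol of ref 25 can be sketched as follows: the data set is randomly halved five times, and each half serves once as training set and once as test set, giving ten performance estimates. The classifier below is a trivial majority-class placeholder invented for this sketch; any train/predict pair could be substituted:

```python
import random

# Illustrative toy: 5 x 2 cross-validation with a majority-class classifier.
def majority_train(labels):
    """Placeholder 'model': always predict the most frequent training label."""
    return max(set(labels), key=labels.count)

def five_by_two_cv(samples, labels, seed=0):
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    scores = []
    for _ in range(5):                       # five independent replications
        rng.shuffle(indices)
        half = len(indices) // 2
        fold_a, fold_b = indices[:half], indices[half:]
        # each half is used once for training and once for testing
        for train_idx, test_idx in ((fold_a, fold_b), (fold_b, fold_a)):
            model = majority_train([labels[i] for i in train_idx])
            correct = sum(1 for i in test_idx if labels[i] == model)
            scores.append(correct / len(test_idx))
    return scores                            # ten accuracy estimates

scores = five_by_two_cv(list(range(20)), [0] * 12 + [1] * 8)
```

The ten scores feed Dietterich's 5 × 2 cv paired t-test for comparing two learning algorithms on the same data.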
26 Buscema M. Genetic doping algorithm (GenD): theory and applications. Exp Syst 2004; 21:63–79.
• A seminal paper on the theory of evolutionary algorithms.
27 Buscema M, Grossi E, Intraligi M, Garbagna N, Andriulli A, Breda M. An optimized experimental protocol based on neuro-evolutionary algorithms: application to the classification of dyspeptic patients and to the prediction of the effectiveness of their treatment. Artificial Intelligence Med 2005; 34:279–305.
•• A complex work that used advanced neuro-evolutionary systems (NESs), such as the genetic doping algorithm (GenD), input selection (IS), and training and testing (T&T) systems, to discriminate between functional and organic dyspepsia and to predict the outcome in dyspeptic patients undergoing Helicobacter pylori eradication therapy.
28 Andriulli A, Grossi E, Buscema M, Festa V, Intraligi M, Dominici PR, et al. Contribution of artificial neural networks to the classification and treatment of patients with uninvestigated dyspepsia. Digest Liver Dis 2003; 35: 222–231.
• A paper assessing the efficacy of neural networks in the diagnosis of gastro-oesophageal reflux disease (GORD). The best-performing ANN reached an accuracy of 100% in identifying the correct diagnosis; this kind of data-processing technique seems a promising approach for developing non-invasive diagnostic methods in patients suffering from GORD symptoms.
29 Pagano N, Buscema M, Grossi E, Intraligi M, Massini G, Salacone P, et al. Artificial neural networks for the prediction of diabetes mellitus occurrence in patients affected by chronic pancreatitis. J Pancreas 2004; 5 (Suppl 5):405–453.
• In this work, several research protocols based on supervised neural networks are used to identify the variables related to diabetes mellitus in patients affected by chronic pancreatitis; the presence of diabetes was predicted with an accuracy higher than 92% in individual patients.
30 Sato F, Shimada Y, Selaru FM, Shibata D, Maeda M, Watanabe G, et al. Prediction of survival in patients with esophageal carcinoma using artificial neural networks. Cancer 2005; 103:1596–1605.