Wednesday, 3 October 2007

ARTIFICIAL NEURAL NETWORK BASED DEVANAGARI NUMERAL RECOGNITION

ABSTRACT:

This paper proposes a self-organizing map (SOM) based system for recognizing handwritten Devanagari numerals. The SOM was trained on one hundred handwritten Devanagari numerals, and the network was tested under different parameters such as the number of output nodes, the neighborhood size, and the number of training cycles. The training process was repeated a number of times.
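To make the training procedure concrete, the following is a minimal sketch of a one-dimensional SOM trained on flattened numeral images. It is not the paper's implementation: the function names, parameter defaults, and the assumption of fixed-size binary images flattened into vectors are all illustrative. It does, however, expose the three parameters the abstract mentions: the number of output nodes, the neighborhood size, and the number of training cycles.

```python
import numpy as np

def train_som(samples, n_outputs=10, neighborhood=3, n_cycles=100,
              lr_start=0.5):
    """Train a 1-D self-organizing map on rows of `samples`."""
    dim = samples.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.random((n_outputs, dim))  # random initial weight vectors

    for cycle in range(n_cycles):
        # Learning rate and neighborhood radius shrink as training proceeds.
        lr = lr_start * (1.0 - cycle / n_cycles)
        radius = max(1, int(neighborhood * (1.0 - cycle / n_cycles)))
        for x in samples:
            # Winner: the output node whose weight vector is closest to x.
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Move the winner and its grid neighbors toward the input.
            lo = max(0, winner - radius)
            hi = min(n_outputs, winner + radius + 1)
            weights[lo:hi] += lr * (x - weights[lo:hi])
    return weights

def classify(weights, x):
    """Return the index of the best-matching output node for input x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, each output node can be assigned the numeral label that the majority of its winning training samples carry, so that `classify` maps a new image to a numeral.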

INTRODUCTION:

We begin by considering an artificial neural network architecture in which every node is connected to every other node, and these connections are either excitatory, inhibitory, or irrelevant.
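As a small illustration (not from the paper), the three connection types can be encoded in the sign of each entry of a weight matrix: a positive weight is excitatory, a negative weight is inhibitory, and a zero weight is irrelevant, i.e. effectively no connection. The node count and weight values below are hypothetical.

```python
import numpy as np

# Hypothetical fully connected 4-node network: w[i, j] is the weight of
# the connection from node j to node i. Sign encodes the connection type:
# positive = excitatory, negative = inhibitory, zero = irrelevant.
w = np.array([
    [ 0.0,  0.8, -0.5,  0.0],
    [ 0.3,  0.0,  0.0, -0.2],
    [-0.6,  0.0,  0.0,  0.9],
    [ 0.0,  0.4, -0.7,  0.0],
])

activations = np.array([1.0, 0.0, 1.0, 0.5])

# Each node's net input is the weighted sum of all node activations;
# inhibitory (negative) weights reduce it, and irrelevant (zero) weights
# contribute nothing.
net_input = w @ activations
print(net_input)
```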

A single node is insufficient for many practical problems, and a large number of nodes is frequently used. The way nodes are connected determines how computations proceed and constitutes an important early design decision by a neural network developer. A brief discussion of biological neural networks is relevant before examining artificial neural network architectures.

Different parts of the central nervous system are structured differently, so it is incorrect to claim that a single architecture models all neural processing. The cerebral cortex, where most processing is believed to occur, consists of five to seven layers of neurons, with each layer supplying inputs to the next. However, layer boundaries are not strict, and connections that cross layers are known to exist. Feedback pathways are also known to exist, e.g., between the visual cortex and the lateral geniculate nucleus. Each neuron is connected with many, but not all, of the neighboring neurons, and some "veto" neurons have the overwhelming power of neutralizing the effects of a large number of excitatory inputs to a neuron. Some amount of indirect self-excitation also occurs: one node's activation excites its neighbor, which excites the first again. In the following subsections, we discuss artificial neural network architectures, some of which derive inspiration from biological neural networks.
