nmODE: neural memory ordinary differential equation
Article written by: Yi, Zhang
Abstract: Brain neural networks are regarded as dynamical systems in neural science, in which memories are interpreted as attractors of the systems. Mathematically, ordinary differential equations (ODEs) can be utilized to describe dynamical systems. Any ODE that is employed to describe the dynamics of a neural network can be called a neuralODE. Inspired by rethinking the nonlinear representation ability of existing artificial neural networks together with the functions of columns in the neocortex, this paper proposes a theory of memory-based neuralODEs, which is composed of two novel artificial neural network models: nmODE and -net, and two learning algorithms: nmLA and -LA. The nmODE (neural memory Ordinary Differential Equation) is designed with a special structure that separates learning neurons from memory neurons, making its dynamics clear. Given any external input, the nmODE possesses the global attractor property and is thus embedded with a memory mechanism. The nmODE establishes a nonlinear mapping from the external input to its associated attractor and does not suffer from the problem of learning features homeomorphic to the input data space, as occurs frequently in most existing neuralODEs. The nmLA (neural memory Learning Algorithm) is developed by proposing an interesting three-dimensional inverse ODE (invODE) and has advantages in memory and parameter efficiency. The proposed -net is a discrete version of the nmODE, which is particularly feasible for digital computing. The proposed -LA (learning algorithm) requires no prior knowledge of the number of network layers. Neither nmLA nor -LA suffers from gradient vanishing. Experimental results show that the proposed theory is comparable to state-of-the-art methods.
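To make the memory mechanism concrete, the sketch below integrates a small nmODE-style system to its attractor. It is an illustrative assumption, not the paper's implementation: it assumes memory-neuron dynamics of the form dy/dt = -y + sin²(y + γ(x)) with a linear learning map γ(x) = Wx; the matrix `W`, the Euler integrator, and all sizes here are hypothetical choices. Because the nonlinearity is bounded and the -y term is leaky, the state y converges to a fixed point (the attractor) that depends nonlinearly on the input x.

```python
import numpy as np

def nmode_step(y, x, W, dt=0.01):
    # One Euler step of the assumed dynamics dy/dt = -y + sin^2(y + W @ x).
    # Learnable parameters live only in W (the "learning neurons");
    # y holds the state of the "memory neurons".
    gamma = W @ x                        # learning part: maps external input
    dy = -y + np.sin(y + gamma) ** 2     # memory part: leaky bounded dynamics
    return y + dt * dy

def run_to_attractor(x, W, steps=20000, dt=0.01):
    # Integrate from y(0) = 0; for a fixed input x the trajectory settles
    # onto an attractor y*, which acts as the memory/feature of x.
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        y = nmode_step(y, x, W, dt)
    return y

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.5        # hypothetical learned weights
x = rng.normal(size=3)                   # hypothetical external input
y_star = run_to_attractor(x, W)          # attractor associated with x
```

Note that different inputs x generally yield different attractors y*, so the map x ↦ y* plays the role of a learned nonlinear feature extractor, while the attractor itself stays inside [0, 1] because the fixed point satisfies y* = sin²(y* + Wx).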
Language: English