
Gaussian adaptation as a model of a brain

To see a possible connection between Gaussian adaptation (GA) and a brain, we may first look at a two-dimensional computer simulation of the process according to the figure below. It relies on the assumption that neuron kernels may add, synapses may multiply and axons may delay signal values (in accordance with the theory of digital filters). Independent Gaussian-distributed signal values g1 and g2 are generated in the STEM box on the left. These values are multiplied by the w-coefficients in the triangular synapses and summed in the neuron kernels (the squares). Finally, the signal mean values, m, are added and a test for feasibility is made in the CORTEX box on the right.
http://picasaweb.google.com/gregor744/GA_figures02?authkey=Gv1sRgCNLYgpOK2ZH_sQE#5392026263456382658
The process may be extended to any number of dimensions. If a signal pattern is feasible, the mean m and the moment matrix M (indirectly, via the w-coefficients) are updated according to matrix formula (1) below, which makes it possible to fulfill the condition of optimality (maximum average information) of the process according to theorem 6c in the blog “Gaussian adaptation as a model of evolution”.
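As a concrete illustration, here is a minimal sketch in Python of one such forward pass. The circular feasibility region, the starting values of m and W, and the use of NumPy are assumptions made for the illustration, not part of the original model:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 2                        # number of dimensions
    m = np.zeros(n)              # signal mean values (centre of gravity)
    W = 0.1 * np.eye(n)          # w-coefficients of the synapses; M = W W'

    def feasible(x):
        # Stand-in for the CORTEX feasibility test; the circular
        # region used here is a hypothetical example.
        return np.linalg.norm(x - np.array([0.5, 0.5])) < 1.0

    g = rng.standard_normal(n)   # independent Gaussian signals from the STEM box
    y = W @ g                    # multiplied by w-coefficients, summed in kernels
    x = m + y                    # mean values added before the CORTEX test
    print(feasible(x))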

The adaptation of the centre of gravity m may be done one sample (individual signal pattern) at a time, for example
m(i+1) = (1 – a) m(i) + ax
where x = (x1, x2, …, xn) is an acceptable pattern and a < 1 is a suitable constant chosen so that 1/a represents the number of individuals in the population (represented by the Gaussian distribution).
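For example, a minimal sketch of this update in Python, where the numbers are assumed for illustration (with a = 0.01, the mean behaves like a running average over about 100 individuals):

    import numpy as np

    a = 0.01                      # 1/a = 100 plays the role of the population size
    m = np.zeros(2)               # current centre of gravity
    x = np.array([0.4, 0.6])      # an accepted signal pattern (assumed)
    m = (1 - a) * m + a * x       # m(i+1) = (1 - a) m(i) + a x
    print(m)                      # [0.004 0.006]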

M may in principle be updated after every step y leading to a feasible signal pattern x = m + y according to:
M(i+1) = (1 – 2b) M(i) + 2byy’, where y’ is the transpose of y,
and where b < 1 is used to increase average information at a suitable rate. But M will never be used in the calculations. Instead the matrix W, defined by WW’ = M, is used.
Thus, we have y = Wg, where g is Gaussian distributed with the moment matrix μU (U being the unit matrix and μ a scalar). W and W’ may be updated by the formulas:
W = (1 – b)W + byg’ and W’ = (1 – b)W’ + bgy’
because multiplication gives
M = (1 – 2b)M + 2byy’,
where Wg = y and g’W’ = y’ have been used and terms including b^2 have been neglected. Thus, M will be indirectly adapted with good approximation. In practice it will suffice to update only W:
W(i+1) = (1 – b)W(i) + byg’. (1)
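The following sketch checks formula (1) numerically: after the update, WW’ agrees with (1 – 2b)M + 2byy’ up to terms of order b^2. The starting W, the value of b and the random seed are assumptions made for the illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    b = 0.005
    W = 0.1 * np.eye(2)            # current w-coefficients; M = W W'
    g = rng.standard_normal(2)     # Gaussian with moment matrix proportional to U
    y = W @ g                      # step leading to a feasible pattern (assumed)

    M_old = W @ W.T
    W = (1 - b) * W + b * np.outer(y, g)      # W(i+1) = (1 - b) W(i) + b y g'
    M_new = W @ W.T
    M_approx = (1 - 2 * b) * M_old + 2 * b * np.outer(y, y)
    print(np.max(np.abs(M_new - M_approx)))   # of order b^2, i.e. very small

Note that only W is stored and adapted; M is never formed in the adaptation itself, and the line computing M_new above is there only to verify the approximation.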

This formula is used in the GA neural network model of the evolution of signal patterns in a brain. A closer look at the formula reveals that GA fairly well satisfies the Hebbian rule of associative learning, which states that synaptic transmission between neurons is strengthened (the w-coefficients are increased) if the neurons are simultaneously active while the system is in a state of well-being; otherwise the transmission may be weakened. See also Kjellström, 1999, in the references:
http://en.wikipedia.org/wiki/gaussian_adaptation#references
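Putting the pieces together, a minimal end-to-end sketch of the adaptation loop might look as follows. The feasibility region, the constants a and b, the number of iterations and the two-dimensional setting are all assumptions made for the illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, a, b = 2, 0.01, 0.005
    m, W = np.zeros(n), 0.1 * np.eye(n)

    def feasible(x):
        # Hypothetical CORTEX test: accept patterns inside a circle.
        return np.linalg.norm(x - np.array([0.5, 0.5])) < 1.0

    for _ in range(20000):
        g = rng.standard_normal(n)                # STEM box
        y = W @ g                                 # synapses and neuron kernels
        x = m + y
        if feasible(x):                           # CORTEX accepts the pattern
            m = (1 - a) * m + a * x               # adapt the centre of gravity
            W = (1 - b) * W + b * np.outer(y, g)  # formula (1), Hebbian-like

    print(m)        # settles among the accepted patterns
    print(W @ W.T)  # the moment matrix M, adapted indirectly via W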

Gkm



