A General Mechanism for Tuning: Gain Control Circuits and Synapses Underlie Tuning of Cortical Neurons

Minjoon Kouh and Tomaso Poggio

Tuning to an optimal stimulus is a widespread property of neurons in cortex. We propose that such tuning is a consequence of normalization or gain control circuits. We also present a biologically plausible neural circuitry of tuning.
Massachusetts Institute of Technology, 2004

This report describes research done at the Center for Biological & Computational Learning, which is in the McGovern Institute for Brain Research at MIT, as well as in the Dept. of Brain & Cognitive Sciences, and which is affiliated with the Computer Sciences & Artificial Intelligence Laboratory (CSAIL).

This research was sponsored by grants from: Office of Naval Research (DARPA) Contract No. MDA972-04-1-0037, Office of Naval Research (DARPA) Contract No. N00014-02-1-0915, National Science Foundation (ITR/IM) Contract No. IIS-0085836, National Science Foundation (ITR/SYS) Contract No. IIS-0112991, National Science Foundation (ITR) Contract No. IIS-0209289, National Science Foundation-NIH (CRCNS) Contract No. EIA-0218693, National Science Foundation-NIH (CRCNS) Contract No. EIA-0218506, and National Institutes of Health (Conte) Contract No. 1 P20 MH66239-01A1.

Additional support was provided by: Central Research Institute of Electric Power Industry, Center for e-Business (MIT), DaimlerChrysler AG, Compaq/Digital Equipment Corporation, Eastman Kodak Company, Honda R&D Co., Ltd., ITRI, Komatsu Ltd., Eugene McDermott Foundation, Merrill-Lynch, Mitsubishi Corporation, NEC Fund, Nippon Telegraph & Telephone, Oxygen, Siemens Corporate Research, Inc., Sony MOU, Sumitomo Metal Industries, Toyota Motor Corporation, and WatchVision Co., Ltd.
Introduction

Across the cortex, especially in the sensory areas, many neurons respond strongly to some stimuli, but weakly to others, as if they were tuned to some optimal features or to particular input patterns. For example, neurons in primary visual cortex show Gaussian-like tuning in multiple dimensions, such as orientation, spatial frequency, direction, and velocity. Moving further along the ventral pathway of primate cortex, V4 neurons show tuned responses to different types of gratings or contour features [8, 15], and some IT neurons are responsive to a particular view of a face or other objects [13].

In other sensory modalities, neural tuning is also common. Olfactory neurons in the fly respond to particular mixtures of molecules, or odors [25]. Auditory neurons of a song bird can be tuned to sound patterns, or motifs [9]. In the case of the motor system, the activity of a spinal cord neuron is related to a particular pattern of force fields or limb movements [16]. The tuning of a neuron may be sharp and sparse in some cases, or distributed and general in other cases [12], but despite these qualitative differences, such tuning behavior seems to be one of the major computational strategies for representing and encoding information in cortex.
Consequently, tuning in cortex is often characterized and approximated with a multidimensional Gaussian function in many models. In [15], contour feature tuning in V4 is fitted with a Gaussian function in curvature and angular position space. In [2], a similar Gaussian function is used to characterize the response of the afferent cells to the IT neurons. In the model of visual object recognition by Riesenhuber and Poggio, which attempts to describe quantitatively the first few hundred milliseconds of visual recognition, the Gaussian function is one of the two key operations for providing selectivity [19].

Even though Gaussian-like tuning behavior in cortex is widely acknowledged, it remains a major puzzle in neuroscience: how could such multidimensional tuning be implemented by neurons? The underlying biophysical mechanism is not understood. In Hubel and Wiesel's model of V1, the tuning properties of simple and complex cells are explained in terms of the geometry of the afferents: for simple cells, the alignment of several non-oriented LGN afferents would give rise to the orientation selectivity (see [7] for a review, and [21] for a quantitative model). Although attractively simple and intuitive, this explanation is challenged by a competing theory that maintains orientation selectivity is enforced, if not created, by the recurrent neural circuitry within V1 [1, 6, 23]. The tuning along non-spatial dimensions such as velocity or color, however, cannot rely on the geometric arrangements alone. Furthermore, tuning in other sensory modalities (e.g., auditory or olfactory neurons) and in higher visual areas where the tuning seems to be of a more abstract nature (e.g., the complex shape tuning in IT) would require a more general mechanism.
While many neurophysiological experiments have found tuning behaviors in cortex, theoretical studies [16, 17] also indicate that a network based on radial basis functions (like Gaussians) is indeed a plausible computational scheme capable of learning. Here, learning is defined as a capacity for generalizing an input-output mapping from a finite number of data. In a learning neural network with radial basis functions, the "hidden" units show Gaussian-like tuning behavior to the input. More concretely, a computational model with a network of Gaussian template-matching units has been shown to be capable of performing object recognition, while reproducing the shape selectivity and invariance properties of IT neurons [19, 20].

In this paper, we propose a biophysically plausible solution to the puzzle of Gaussian-like tuning.

A general mechanism for cortical tuning

As mentioned in the introduction, many neurons show tuning, which is often described in terms of a multidimensional Gaussian function:

\[ R_{Gauss} = e^{-\|x - w\|^2 / 2\sigma^2}. \tag{1} \]

The key operation in Eqn. 1 is the computation of \(\|x - w\|^2\), the similarity between two vectors, which determines a tuned response around a target vector w. However, neurons do not have any obvious neural circuitry or biophysical mechanism for such an operation. How, then, could Gaussian-like tuning arise in cortex?

One possible answer to this puzzle is hinted at by the following mathematical identity, which relates the Euclidean distance measure, which appears in the Gaussian function, with the normalized scalar product:

\[ \|x - w\|^2 = 1 - 2\, x \cdot w + \|w\|^2, \quad \text{if } \|x\| = 1. \tag{2} \]

In other words, the similarity between two normalized vectors x and w can be measured with a Euclidean distance as well as a scalar product, or the angle between the two vectors. Hence, Eqn. 2 suggests that Gaussian-like tuning can arise from a normalized scalar product operation.¹

The advantage of considering the normalized scalar product as a tuning operation is its biophysical plausibility. Unlike the computation of a Euclidean distance or a Gaussian function, both normalization and scalar product operations can be readily implemented with a network of neurons. The scalar product or the weighted sum can be computed by the dendritic inputs to a cell with different synaptic weights. The normalization across the inputs can be achieved by a divisive gain control mechanism involving inhibitory interactions [3, 4, 10, 18]. The neural response may be subject to extra nonlinearities, such as a sigmoid or rectification, in the soma or the axon. Together, the normalized scalar product with a nonlinear transfer function can give rise to a Gaussian-like tuning function, as shown in the next section.

¹This relationship was pointed out by Maruyama, Girosi and Poggio in [14], where the connection between the multilayer perceptron and the neural network with radial basis functions is explored. Their analysis is based on the exact form of this identity (i.e., the input x to the Euclidean distance is normalized as well as the input to the scalar product). In this paper, we examine a looser connection between the Euclidean distance and the normalized scalar product (i.e., the input to the Euclidean distance is not normalized, but the input to the scalar product is).
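Eqn. 2 is easy to verify numerically. The following minimal Python sketch (with arbitrary example vectors, not data from our simulations) confirms that, once the input is normalized, the squared Euclidean distance to a template is an affine function of the scalar product:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)                  # tuning template (need not be normalized)
x = rng.normal(size=5)
x /= np.linalg.norm(x)                  # enforce ||x|| = 1, the premise of Eqn. 2

lhs = np.linalg.norm(x - w) ** 2        # squared Euclidean distance
rhs = 1 - 2 * np.dot(x, w) + np.linalg.norm(w) ** 2
print(np.isclose(lhs, rhs))             # True: Eqn. 2 holds
```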
One plausible neural circuit for tuning

Eqn. 2 suggests that a crucial element for tuning is the normalization, which can be expressed mathematically as

\[ \hat{x}_i = \frac{x_i}{\sqrt{\sum_j x_j^2}}. \tag{3} \]

Eqn. 3 can be implemented by a pool of neurons N whose individual responses are divisively normalized by the collective response across the pool, giving rise to the following two important properties.

1. Individual neural response is normalized: the response of each neuron is divided by the total response of the pool, which includes the other neurons as well as itself. The normalization factor is always greater than the numerator in Eqn. 3. Hence, the neural response is upper bounded and operates within a well-defined dynamic range (i.e., R_i ∈ [0, 1]).

2. Collective response across the pool is normalized: the summed response within the normalization pool is also normalized (i.e., ‖R‖ = 1, where R_i can be thought of as the i-th component of a normalized vector R). This aspect of normalization received less attention in the past, but it may be the underlying mechanism for cortical tuning, which is the focus of this paper.
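Both properties follow directly from dividing by the Euclidean norm of the pooled activity, as the following minimal Python check (with arbitrary non-negative example inputs) illustrates:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=8)       # non-negative pool inputs
R = x / np.sqrt(np.sum(x ** 2))         # divisive normalization, Eqn. 3

# Property 1: each individual response is bounded within [0, 1].
print(np.all((R >= 0) & (R <= 1)))      # True

# Property 2: the pooled response vector has unit norm.
print(np.isclose(np.linalg.norm(R), 1)) # True
```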
How would a network of neurons accomplish such divisive normalization across the pool? In the past, several plausible neural circuits for gain control mechanisms have been proposed and explored in various contexts. [18] considered forward and recurrent shunting inhibition circuits for gain control within the fly's visual system. Many researchers have used the normalization mechanism to explain the contrast-dependent, saturating neural responses in primary visual cortex [3, 4, 10] and center-surround effects within the receptive field [5]. In [22], a similar divisive normalization scheme was shown to increase the independence of neural signals, despite the dependencies in image statistics.

Fig. 1a presents one simple and plausible neural circuitry for divisive normalization. This circuit is based on Heeger's model of gain control in simple cells, where the inhibitory (possibly of shunting type) feedback connections perform the pool normalization [10]. With a certain choice of nonlinearities, this model has a steady-state solution that is close to Eqn. 3. The normalization is "close enough" in the sense that the denominator may contain a constant (related to the strength of shunting inhibition) or the nonlinearity may not exactly be the square root of summed squares (see Appendix A).

Another crucial operation for tuning according to Eqn. 2 is the scalar product, which can be directly accomplished by the synapses (neglecting dynamics). In x · w, w corresponds to a vector of synaptic weights, and x to the presynaptic inputs, as shown in Fig. 1b.

Combined together, the circuits in Fig. 1 are the basic elements for a network that can compute the normalized scalar product, which in turn would produce tuning behavior in a general multidimensional input space.

Figure 1: Simple neural circuits for: (a) Divisive normalization, y_i = x_i / (c + √(Σ_j x_j²)), based on [10]. This circuit is just one possibility. Divisive normalization may be computed alternatively by feedforward (instead of feedback) inhibition at the dendrites as in [18]. (b) Scalar product, y = Σ_i x_i w_i. (c) Normalized scalar product. This circuit can produce Gaussian-like tuning.

Comparison between the Gaussian function and the normalized scalar product

In this section, two different representations of tuning are compared. One is the Gaussian function, based on the Euclidean distance measure, and the other is based on the normalized scalar product (NSP). They are related to each other by Eqn. 2, and we show that both forms of tuning are qualitatively equivalent and can be made quantitatively similar.

Mathematically, the Gaussian tuning function and the normalized scalar product with a sigmoid nonlinearity are represented as

\[ R_{Gauss} = e^{-\|x - w\|^2 / 2\sigma^2} \tag{4} \]

\[ R_{NSP} = \frac{1}{1 + e^{-\alpha \left( \frac{x \cdot w}{c + \|x\|} - \beta \right)}} \tag{5} \]

The sigmoid is a commonly used transfer function for modeling the relationship between the presynaptic and postsynaptic activations or membrane depolarizations in neurons. It sharpens the tuning behavior created by the normalized scalar product and allows a better approximation of the Gaussian function, as the parameters α and β are adjusted.†

†In our simulation, a nonlinear fitting routine (nlinfit in Matlab) was used to find the best α and β with fixed c = 0.1 in R_NSP for a given R_Gauss.
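Our fits used Matlab's nlinfit; the sketch below is an equivalent illustration in Python for the one-dimensional case of Fig. 2a, using scipy.optimize.curve_fit. The dummy input x_d and weight w_d follow the construction of Appendix B; the grid and the starting values for the fit are arbitrary choices:

```python
import numpy as np
from scipy.optimize import curve_fit

c, sigma, w, xd = 0.1, 0.2, 1.0, 1.0    # constant, Gaussian width, template, dummy input
wd = xd + c * np.sqrt(w**2 + xd**2) / xd  # dummy weight, per Appendix B

def r_gauss(x):
    # Gaussian tuning, Eqn. 4, centered at w.
    return np.exp(-(x - w)**2 / (2 * sigma**2))

def r_nsp(x, alpha, beta):
    # Normalized scalar product with dummy input, squashed by a sigmoid (Eqn. 5).
    nsp = (w * x + wd * xd) / (c + np.sqrt(x**2 + xd**2))
    return 1.0 / (1.0 + np.exp(-alpha * (nsp - beta)))

x = np.linspace(0.0, 2.0, 400)
(alpha, beta), _ = curve_fit(r_nsp, x, r_gauss(x), p0=[100.0, 1.4])
err = np.max(np.abs(r_nsp(x, alpha, beta) - r_gauss(x)))
print(f"alpha={alpha:.1f}, beta={beta:.3f}, max deviation={err:.3f}")
```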
In Eqn. 4, w specifies the center of the Gaussian function in a multidimensional space. The Gaussian width σ determines the sharpness or sensitivity of tuning (σ need not be the same along different dimensions). In Eqn. 5, w specifies the direction of the feature vector along which the response is maximal, and the parameters α and β determine the sharpness of the tuning. In both cases, the response is maximal if the input x is matched to the target w.

Fig. 2 shows a few direct comparisons between R_Gauss and R_NSP. Although not identical, R_NSP and R_Gauss exhibit comparable tuning behaviors. Because of the normalization, the dimensionality of R_NSP is one less than that of R_Gauss. With the same number of afferents n, the Gaussian tuning function may be centered at any point in R^n, whereas the normalized scalar product is tuned only to the direction of a vector, i.e., to a point on the unit sphere in R^n, an (n-1)-dimensional space. An obvious way of avoiding this limitation is to assume a constant dummy input and to increase the dimensionality of the input vector, which was the approach taken here, as in [14]. Then the normalized scalar product may be tuned to any arbitrary vector w, just like the Gaussian function (see Appendix B for more discussion of this issue).
Discussion

In this paper, we described how the normalized scalar product can account for the tuning of neural responses. We also sketched a plausible neural circuit.

The normalization for tuning provides some new insights and predictions. For example, along the ventral pathway of primate visual cortex, the receptive field size on average increases, and neurons show tuning to increasingly complex features [11]. In order to build a larger receptive field and to increase feature complexity, the neurons may be pooling from many afferents covering different parts of receptive fields. The afferent cells within the pool would interact via the normalization operation, and this interaction may appear as a center-surround effect, as observed in V1 [5]. If normalization is indeed a general mechanism for tuning, it would be present in other cortical areas, where similar center-surround or interference effects may be observable.

The effects of normalization may also appear whenever the response of one afferent in the normalization pool is modulated (for example, by an attentional mechanism through a feedback connection). A change in one neuron's response may affect not only the output of the network, but also the responses of the other afferent neurons in the normalization pool.

We also note that this scheme for cortical tuning has implications for learning and memory, which would be accomplished by adjusting the synaptic weights according to the activation patterns of the afferent cells.

Interestingly, similar neural circuits may be involved in increasing the invariance properties of neurons. It has been observed that IT neurons show a certain degree of translation and scale invariance [2, 13], and so do V4 neurons [15]. One way of producing invariance is the maximum operation, whose approximate implementation may involve a form of pool normalization [26]. A computational model [19, 20] has shown that Gaussian-like tuning and maximum operations are sufficient to capture object recognition processing in visual cortex. We claim here that similar inhibitory neural circuits with different nonlinearities (see Appendix C) may accomplish both operations.

In the past, various neural micro-circuits have been proposed to implement a normalization operation. The motivation was to account for gain control. We make here the new proposal that another role for normalizing local circuits in the brain is to provide the key step for multidimensional, Gaussian-like tuning. In fact, this may be the main reason for the widespread presence of gain control circuits in cortex, where tuning to optimal stimuli is a common property.


Figure 2: Comparison of R_Gauss and R_NSP in several dimensions. Note that in all cases, the Gaussian tuning function can be approximated by the normalized scalar product followed by a sigmoid nonlinearity. The parameters α and β in the sigmoid are found with nonlinear fitting, while c was fixed at 0.1. As pointed out in Appendix B, a dummy input was introduced to obtain tuning to an arbitrary w (i.e., R_NSP is defined on the unit sphere in R^(n+1)). (a) Comparison in 1 dimension: R_Gauss (black) with σ = 0.2 and R_NSP (red) are shown for w = 1 (left) and w = 0.5 (right). (b) Similar comparisons in 2 dimensions: w = (1, 1) (top) and w = (0.4, 0.6) (bottom). (c) Comparisons in higher dimensions (d = 5, 10, 20). Since the visualization of the entire function is difficult in high dimensions, 1000 random points are sampled from the space. The same nonlinear fitting routine was used to find the parameters in R_NSP. The width of the Gaussian is scaled according to σ = 0.2√d, where d is the dimensionality.
References

[1] R. Ben-Yishai, R. Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA, 92:3844-3848, 1995.

[2] S. Brincat and C. Connor. Underlying principles of visual shape selectivity in posterior inferotemporal cortex. Nature Neuroscience, 7:880-886, 2004.

[3] M. Carandini and D. Heeger. Summation and division by neurons in primate visual cortex. Science, 264:1333-1336, 1994.

[4] M. Carandini, D. Heeger, and J. Movshon. Linearity and normalization in simple cells of the Macaque primary visual cortex. Journal of Neuroscience, 17(21):8621-8644, 1997.

[5] J. Cavanaugh, W. Bair, and J. Movshon. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88(5):2530-2546, 2002.

[6] P. Dayan and L. Abbott. Theoretical Neuroscience. MIT Press, 2001.

[7] D. Ferster and K. Miller. Neural mechanisms of orientation selectivity in the visual cortex. Annual Review of Neuroscience, 23:441-471, 2000.

[8] J. Gallant, C. Connor, S. Rakshit, J. Lewis, and D. Van Essen. Neural responses to polar, hyperbolic, and Cartesian gratings in area V4 of the macaque monkey. Journal of Neurophysiology, 76:2718-2739, 1996.

[9] R. Hahnloser, A. Kozhevnikov, and M. Fee. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419:65-70, 2002.

[10] D. Heeger. Modeling simple-cell direction selectivity with normalized, half-squared, linear operators. Journal of Neurophysiology, 70:1885-1898, 1993.

[11] E. Kobatake and K. Tanaka. Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex. Journal of Neurophysiology, 71:856-867, 1994.

[12] G. Kreiman. Neural coding: computational and biophysical perspectives. Physics of Life Reviews, 2:71-102, 2004.

[13] N. Logothetis, J. Pauls, and T. Poggio. Shape representation in the inferior temporal cortex of monkeys. Current Biology, 5:552-563, 1995.

[14] M. Maruyama, F. Girosi, and T. Poggio. A connection between GRBF and MLP. MIT AI Memo, 1291, 1992.

[15] A. Pasupathy and C. Connor. Shape representation in area V4: Position-specific tuning for boundary conformation. Journal of Neurophysiology, 86:2505-2519, 2001.

[16] T. Poggio and E. Bizzi. Generalization in vision and motor control. Nature, 431:768-774, 2004.

[17] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978-982, 1990.

[18] W. Reichardt, T. Poggio, and K. Hausen. Figure-ground discrimination by relative movement in the visual system of the fly - II: Towards the neural circuitry. Biological Cybernetics, 46:1-30, 1983.

[19] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019-1025, 1999.

[20] M. Riesenhuber and T. Poggio. How visual cortex recognizes objects: The tale of the standard model. In L. Chalupa and J. Werner, editors, The Visual Neurosciences, pages 1640-1653. MIT Press, 2003.

[21] D. Ringach. Haphazard wiring of simple receptive fields and orientation columns in visual cortex. Journal of Neurophysiology, 92:468-476, 2004.

[22] O. Schwartz and E. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819-825, 2001.

[23] D. Somers, S. Nelson, and M. Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. Journal of Neuroscience, 15:5448-5465, 1995.

[24] V. Torre and T. Poggio. A synaptic mechanism possibly underlying directional selectivity to motion. Proc. R. Soc. Lond. B, 202:409-416, 1978.

[25] R. Wilson, G. Turner, and G. Laurent. Transformation of olfactory representations in the Drosophila antennal lobe. Science, 303:366-370, 2004.

[26] A. Yu, M. Giese, and T. Poggio. Biophysiologically plausible implementations of the maximum operation. Neural Computation, 14(12):2857-2881, 2002.

Appendix A: Normalization circuit
The model in Fig. 1 is based on [10], and the steady-state responses of the neurons (neglecting dynamics) are determined by the following:

\[ R_i = \frac{[x_i]^+}{G}, \tag{6} \]

\[ G = c + G \sqrt{\sum_j R_j^2}. \tag{7} \]

The inhibitory signal G depends on the pooled responses. The particular choice of nonlinearity (square root of summed squares) yields a mathematically convenient form of normalization. Other choices can produce tuned responses, although they are not as easy to track analytically. The response R_i is proportional to the input x_i, subject to an inhibitory signal operating multiplicatively. Such a multiplicative interaction may arise from inhibition of the shunting type, as noted in [24]. The rectification operation for ensuring a positive neural response is denoted by [ ]^+.

With a little algebra (substituting R_j = [x_j]^+ / G into Eqn. 7 gives G = c + √(Σ_j ([x_j]^+)²)),

\[ R_i = \frac{[x_i]^+}{c + \sqrt{\sum_j ([x_j]^+)^2}}, \tag{8} \]

which is the same as Eqn. 3, except for the positive constant c, related to the strength of inhibition. Because of c, the above equation is not a true normalization in a mathematically rigorous sense, but as shown in Appendix B, this approximate normalization is enough to create Gaussian-like tuning.

Finally, applying the synaptic weights w_i of Fig. 1b to the normalized responses and summing (for non-negative inputs, so that [x_i]^+ = x_i),

\[ \sum_i w_i R_i = \frac{x \cdot w}{c + \|x\|}, \tag{9} \]

resulting in the normalized scalar product, capable of producing tuning behavior.
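The fixed point of the feedback loop can be found by simple iteration. A minimal Python sketch (with arbitrary example inputs and c = 0.1) confirms that the steady state coincides with the closed form of Eqn. 8:

```python
import numpy as np

c = 0.1
x = np.array([0.3, 0.8, 0.5, 0.2])            # non-negative afferent inputs

# Iterate the feedback loop of Eqns. 6-7: R_i = [x_i]+ / G, G = c + G * ||R||.
G = 1.0
for _ in range(100):
    R = np.maximum(x, 0.0) / G                 # rectified, divisively inhibited responses
    G = c + G * np.linalg.norm(R)              # shunting feedback from the pool

print(R)
print(x / (c + np.linalg.norm(x)))             # closed form, Eqn. 8: identical
```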
Appendix B: Optimal templates in the normalized scalar product

Since the scalar product x · w measures the cosine of the angle between two vectors, the maximum occurs when those two vectors are parallel. Because it is also proportional to the length of the vector, a simple scalar product is not as flexible as a Gaussian function, which can have an arbitrary center. We may assume that both vectors x and w are normalized (x by pool normalization and w by Oja's rule [6], for example), so that only the direction within the input space is relevant. However, a more flexible, simple workaround is to assume a constant dummy input, which introduces an extra dimension and allows tuning for any w [14]. This constant may be the resting activity of a neuron.

Using the result of the derivation from the previous section and assuming such a dummy unit (indexed with d in w_d and x_d), the response of the normalizing neural circuit is given by

\[ y = \frac{\sum_{j=1}^{n} w_j x_j + w_d x_d}{c + \sqrt{\sum_{j=1}^{n} x_j^2 + x_d^2}}, \]

which can be viewed as a normalized scalar product in (n + 1) dimensions. Then, using elementary calculus, it is easy to verify that by choosing w_d and x_d appropriately, the maximum response occurs when x = w, for arbitrary w_i.

Let's take the partial derivative:

\[ \frac{\partial y}{\partial x_i} = \frac{1}{\left( c + \sqrt{\sum_j x_j^2 + x_d^2} \right)^2} \left[ w_i \left( c + \sqrt{\sum_j x_j^2 + x_d^2} \right) - \frac{x_i}{\sqrt{\sum_j x_j^2 + x_d^2}} \left( \sum_j w_j x_j + w_d x_d \right) \right]. \]

Setting ∂y/∂x_i = 0,

\[ w_i \left( c + \sqrt{\sum_j x_j^2 + x_d^2} \right) \sqrt{\sum_j x_j^2 + x_d^2} = x_i \left( \sum_j w_j x_j + w_d x_d \right). \]

Setting x_i = w_i, ∀i, and simplifying the expression,

\[ w_d = x_d + \frac{c}{x_d} \sqrt{\sum_j w_j^2 + x_d^2}. \]

As long as the above condition is met, any arbitrary w can serve as an optimal template, and since w_d and x_d can be freely chosen, the condition is easily satisfied. In particular, one may fix the dummy input x_d and choose w_d according to the expression above, as done in the simulations.
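The condition can be verified numerically. In the following minimal Python sketch (with an arbitrary template w and x_d = 1), w_d is set according to the expression above, and the response is indeed maximal at x = w:

```python
import numpy as np

rng = np.random.default_rng(2)
c, xd = 0.1, 1.0
w = rng.uniform(0.2, 1.0, size=3)                    # arbitrary target template
wd = xd + c * np.sqrt(np.sum(w**2) + xd**2) / xd     # dummy weight from the condition

def y(x):
    # Normalized scalar product with the constant dummy input appended.
    return (x @ w + wd * xd) / (c + np.sqrt(np.sum(x**2) + xd**2))

# Response at the template vs. random perturbations around it.
y_at_w = y(w)
perturbed = [y(w + 0.1 * rng.normal(size=3)) for _ in range(1000)]
print(y_at_w >= max(perturbed))                      # True: maximum at x = w
```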
Setting xi = wi, ∀i and simplifying the expression, As long as the above condition is met, any arbitrary w can serve as an optimal template, and since wd and xd can be freely chosen, it is easily satisfied. In particular, set x , as done in the simulations Appendix C: Maximum-like operation
With slightly different nonlinearities in normalization, similar gain control circuit could be used to performmaximum-like operation on the inputs to a neuron [26]. Consider the following divisive normalization (comparewith Eqn. 8): For sufficiently high q, iRi where wi = 1, the final output is the maximum of the inputs.
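A minimal Python sketch (with arbitrary inputs and exponents of our choosing) illustrates the limit; note that c must remain small relative to the pooled term for the approximation to hold:

```python
import numpy as np

c = 0.1
x = np.array([1.1, 2.5, 1.8, 0.9])                  # afferent inputs

def max_like_response(x, q):
    # Divisive normalization with power-law nonlinearities (compare Eqn. 8).
    R = x**q / (c + np.sum(x**(q - 1)))
    return np.sum(R)                                 # w_i = 1 for all afferents

for q in (2, 8, 32):
    print(q, max_like_response(x, q))                # approaches max(x) = 2.5 as q grows
```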
