1-5 of 5
Michael C. Mozer
Journal Articles
Neural Computation (2021) 33 (2): 376–397.
Published: 01 February 2021
Abstract
Our goal is to understand and optimize human concept learning by predicting the ease of learning of a particular exemplar or category. We propose a method for estimating ease values, quantitative measures of ease of learning, as an alternative to conducting costly empirical training studies. Our method combines a psychological embedding of domain exemplars with a pragmatic categorization model. The two components are integrated using a radial basis function network (RBFN) that predicts ease values. The free parameters of the RBFN are fit using human similarity judgments, circumventing the need to collect human training data to fit more complex models of human categorization. We conduct two category-training experiments to validate predictions of the RBFN. We demonstrate that an instance-based RBFN outperforms both a prototype-based RBFN and an empirical approach using the raw data. Although the human data were collected across diverse experimental conditions, the predicted ease values strongly correlate with human learning performance. Training can be sequenced by (predicted) ease, achieving what is known as fading in the psychology literature and curriculum learning in the machine-learning literature, both of which have been shown to facilitate learning.
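A minimal sketch of the instance-based RBFN idea in NumPy: every exemplar in a (here synthetic) psychological embedding serves as a basis-function center, and a linear readout maps basis activations to ease values. The least-squares fit on hypothetical ease targets is for illustration only; the paper instead fits the network's free parameters from human similarity judgments. All names and data below are assumptions, not the authors' code.

```python
# Instance-based radial basis function network (RBFN) sketch:
# embedding coordinates -> Gaussian basis activations -> ease values.
import numpy as np

def rbf_design(X, centers, gamma=1.0):
    """Gaussian RBF activations: one basis function per stored center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))        # hypothetical embedding of 50 exemplars
ease = rng.uniform(0, 1, size=50)   # hypothetical ease-of-learning targets

# Instance-based: every exemplar is a basis-function center (a
# prototype-based variant would use one center per category instead).
Phi = rbf_design(X, X)
w, *_ = np.linalg.lstsq(Phi, ease, rcond=None)

predicted = rbf_design(X, X) @ w    # predicted ease values for the exemplars
```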
Journal Articles
Neural Computation (2007) 19 (2): 371–403.
Published: 01 February 2007
Abstract
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing-dependent plasticity (STDP). We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's response variability to a given presynaptic input, causing the neuron's output to become more reliable in the face of noise. Using an objective function that minimizes response variability and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena, including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of unsupervised cortical adaptation.
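For reference, a sketch of the characteristic STDP window that such an account must reproduce: potentiation when the presynaptic spike leads the postsynaptic spike, depression when it lags. The double-exponential form and the constants below are the standard phenomenological description of the window, not quantities derived from the paper's variability-minimization objective.

```python
# Standard phenomenological STDP window as a function of spike-time lag.
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Synaptic weight change as a function of dt = t_post - t_pre (ms)."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),     # pre leads post: LTP
                    -a_minus * np.exp(dt / tau_minus))   # pre lags post: LTD

print(stdp_window([-20.0, -5.0, 5.0, 20.0]))
```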
Journal Articles
Neural Computation (2001) 13 (5): 1045–1064.
Published: 01 May 2001
Abstract
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion—cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
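A minimal sketch of localist attractor dynamics of the kind described, assuming a softmax-responsibility formulation: each attractor i is a locally stored point w_i with prior p_i, the state moves toward the responsibility-weighted mixture of attractors, and the basin width is annealed. The parameters and annealing schedule are illustrative assumptions.

```python
# Localist attractor dynamics: responsibilities pull the state toward a
# weighted mixture of locally stored attractor points.
import numpy as np

def localist_attractor(x, attractors, priors=None,
                       sigma0=2.0, decay=0.9, steps=50):
    W = np.asarray(attractors, float)
    p = np.full(len(W), 1.0 / len(W)) if priors is None else np.asarray(priors, float)
    y, sigma = np.asarray(x, float).copy(), sigma0
    for _ in range(steps):
        d2 = ((y - W) ** 2).sum(axis=1)
        q = p * np.exp(-d2 / (2 * sigma ** 2))  # responsibility of each attractor
        q /= q.sum()
        y = q @ W                               # move toward the weighted mixture
        sigma = max(sigma * decay, 1e-3)        # anneal the basin width
    return y

attractors = [[0.0, 0.0], [1.0, 1.0], [1.0, -1.0]]
print(localist_attractor([0.8, 0.7], attractors))  # converges near [1, 1]
```

In this sketch, priming can be mimicked by raising the prior of a recently visited attractor, which speeds convergence to it.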
Journal Articles
Neural Computation (1992) 4 (5): 650–665.
Published: 01 September 1992
Abstract
Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to the object to which they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it can also discover nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking among related features. MAGIC's training procedure is a generalization of recurrent backpropagation to complex-valued units.
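A toy demonstration of binding by phase locking in the spirit of MAGIC: each feature carries a complex-valued signal, and a relaxation step pulls each feature's phase toward those of related features, so features of the same object synchronize. The block-structured coupling matrix below is hand-set for illustration; MAGIC learns such interactions from presegmented examples.

```python
# Phase-locking relaxation over complex-valued feature units.
import numpy as np

rng = np.random.default_rng(1)
n = 6
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # unit amplitude, random phases

# Features 0-2 belong to one (hypothetical) object, features 3-5 to another.
W = np.zeros((n, n))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)

for _ in range(100):
    drive = W @ z                                  # complex vote from related features
    z = drive / np.maximum(np.abs(drive), 1e-12)   # keep amplitudes at 1

print(np.round(np.angle(z), 3))  # two internally phase-locked groups emerge
```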
Journal Articles
Neural Computation (1990) 2 (4): 447–457.
Published: 01 December 1990
Abstract
Consider a robot wandering around an unfamiliar environment, performing actions and observing the consequences. The robot's task is to construct an internal model of its environment, a model that will allow it to predict the effects of its actions and to determine what sequences of actions to take to reach particular goal states. Rivest and Schapire (1987a,b; Schapire 1988) have studied this problem and have designed a symbolic algorithm to strategically explore and infer the structure of "finite state" environments. The heart of this algorithm is a clever representation of the environment called an update graph. We have developed a connectionist implementation of the update graph using a highly specialized network architecture. With backpropagation learning and a trivial exploration strategy of choosing random actions, the connectionist network can outperform the Rivest and Schapire algorithm on simple problems. Our approach has additional virtues: the network can accommodate stochastic environments, and it suggests generalizations of the update graph representation that do not arise from a traditional, symbolic perspective.
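A toy version of the learning setup, not of the update graph itself: the agent takes random actions in a tiny finite-state environment and learns to predict the resulting percepts. A logistic model over the most recent actions stands in for the specialized backpropagation-trained network; the environment and every name here are illustrative assumptions.

```python
# Random exploration of a finite-state environment plus a learned
# predictor of action consequences.
import numpy as np

rng = np.random.default_rng(2)

# Environment: the robot faces left (0) or right (1); a light sits on the
# left wall, so the percept is 1 exactly when it faces left.
K = 4                          # feature window: the K most recent actions
actions, percepts, facing = [], [], 0
for _ in range(2000):
    a = int(rng.integers(2))   # trivial exploration strategy: random action
    facing = a                 # the action directly sets the facing state
    actions.append(a)
    percepts.append(1.0 - facing)

X = np.array([actions[t - K + 1: t + 1] for t in range(K - 1, len(actions))], float)
y = np.array(percepts[K - 1:])

# Fit a logistic predictor of the next percept by gradient descent.
w, b = np.zeros(K), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

print("prediction accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```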