Thomas Hannagan
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (5): 1261–1276.
Published: 01 May 2013
Abstract
Convolutional models of object recognition achieve invariance to spatial transformations largely because of the use of a suitably defined pooling operator. This operator typically takes the form of a max or average function defined across units tuned to the same feature. As a model of the brain's ventral pathway, where computations are carried out by weighted synaptic connections, such pooling can lead to spatial invariance only if the weights that connect similarly tuned units to a given pooling unit are of approximately equal strengths. How identical weights can be learned in the face of nonuniformly distributed data remains unclear. In this letter, we show how various versions of the trace learning rule can help solve this problem. This allows us in turn to explain previously published results and make recommendations as to the optimal rule for invariance learning.
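The equalizing effect described in this abstract can be illustrated with a minimal sketch of a Földiák-style trace rule (one assumed variant; the letter compares several). A postsynaptic trace carries activity across a temporal sequence in which the same feature appears at successive positions, so all position-specific connections are reinforced together and drift toward equal strengths; the weight normalization used here is an illustrative assumption, not taken from the letter.

```python
import numpy as np

def trace_update(w, x, y_bar, lr=0.1, eta=0.8):
    """One trace-rule step: Hebbian update gated by the activity trace."""
    y = float(w @ x)                      # postsynaptic response
    y_bar = eta * y_bar + (1 - eta) * y   # temporal trace of the response
    w = w + lr * y_bar * x                # Hebbian growth along the trace
    return w / np.linalg.norm(w), y_bar   # normalization (assumed for stability)

n = 4                                     # positions of the same feature
w = np.array([0.1, 0.2, 0.3, 0.4])        # deliberately unequal weights
w = w / np.linalg.norm(w)
y_bar = 0.0

# Sweep the feature across positions repeatedly, as in a smooth
# transformation sequence; the trace carries activity from one
# position to the next, so every connection grows when any position fires.
for _ in range(200):
    for pos in range(n):
        x = np.zeros(n)
        x[pos] = 1.0
        w, y_bar = trace_update(w, x, y_bar)

print(np.round(w, 3))   # weights end up approximately equal
```

Because the sweep is cyclic, the steady-state dynamics are symmetric across positions, and the initially fourfold disparity in weights shrinks toward a uniform vector, which is exactly the condition the abstract identifies for pooling-based spatial invariance.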
Neural Computation (2011) 23 (1): 251–283.
Published: 01 January 2011
Abstract
We studied the feedforward network proposed by Dandurand et al. (2010), which maps location-specific letter inputs to location-invariant word outputs, probing the hidden layer to determine the nature of the code. Hidden patterns for words were densely distributed, and K-means clustering on single letter patterns produced evidence that the network had formed semi-location-invariant letter representations during training. The possible confound with superseding bigram representations was ruled out, and linear regressions showed that any word pattern was well approximated by a linear combination of its constituent letter patterns. Emulating this code using overlapping holographic representations (Plate, 1995) uncovered a surprisingly acute and useful correspondence with the network, stemming from a broken symmetry in the connection weight matrix and related to the group-invariance theorem (Minsky & Papert, 1969). These results also explain how the network can reproduce relative and transposition priming effects found in humans.
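The holographic coding scheme referenced here can be sketched with Plate-style holographic reduced representations: letters are bound to position vectors by circular convolution and superposed into a word code, and a letter can be approximately recovered by unbinding with the involution of its position vector. The letter and position vectors below are random illustrative assumptions, not the article's actual codes.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024                               # dimensionality of the codes

def cconv(a, b):
    """Circular convolution (HRR binding), computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Plate's approximate inverse for unbinding: a*[i] = a[-i mod d]."""
    return np.concatenate(([a[0]], a[1:][::-1]))

def rand_vec():
    # i.i.d. N(0, 1/d) components give approximately unit-norm vectors
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

letters = {c: rand_vec() for c in "word"}
slots = [rand_vec() for _ in range(4)]  # position vectors (assumed scheme)

# A word is a superposition of letter-position bindings.
word = sum(cconv(slots[i], letters[c]) for i, c in enumerate("word"))

# Unbinding position 0 should yield a noisy copy of the letter 'w'.
probe = cconv(involution(slots[0]), word)
sims = {c: float(probe @ v) / (np.linalg.norm(probe) * np.linalg.norm(v))
        for c, v in letters.items()}
best = max(sims, key=sims.get)
print(best)
```

Because the bindings overlap in a single superposition, unbinding recovers each letter only approximately, with crosstalk from the other positions; this graded overlap is what allows such codes to produce relative and transposition priming effects.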