P. Tupper
Neural Computation (2022) 34 (8): 1756–1789.
Published: 14 July 2022
Abstract
Often in language and other areas of cognition, whether two components of an object are identical determines whether it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based algorithms (such as stochastic gradient descent or the Adam method), satisfies our criteria, depending on the encoding of the inputs. In some broader circumstances, we are able to provide adversarial examples that the network necessarily classifies incorrectly. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. This allows us to demonstrate effects similar to those predicted by the theory for more realistic methods that violate some of the conditions of our theoretical results.
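The abstract's claim that the encoding of inputs governs whether identity effects can generalize can be illustrated with a minimal sketch. The toy task, network, and all names below are my own illustration, not the paper's code: pairs of symbols are labeled well formed exactly when the two symbols are identical, and symbols are one-hot encoded. With one-hot inputs, the coordinates belonging to symbols never seen in training receive exactly zero gradient, so the weights attached to those symbols stay at their random initial values, and the network's behavior on novel symbols is uninformed by training.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def make_dataset(symbols, n):
    """Pairs (x, y) over an alphabet of size n; label 1 iff x == y
    (a toy "identity effect" task)."""
    X, Y = [], []
    for i in symbols:
        for j in symbols:
            X.append(np.concatenate([one_hot(i, n), one_hot(j, n)]))
            Y.append(1.0 if i == j else 0.0)
    return np.array(X), np.array(Y)

n = 6
train_syms = range(4)            # symbols 0..3 seen during training
Xtr, Ytr = make_dataset(train_syms, n)   # symbols 4, 5 never appear

# One-hidden-layer network trained by plain gradient descent on
# binary cross-entropy (a sketch, not the paper's exact setup).
W1 = rng.normal(0, 0.1, (2 * n, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, 8); b2 = 0.0
W1_init = W1.copy()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    H = np.tanh(Xtr @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    dlogits = (P - Ytr) / len(Ytr)          # dL/dlogit for BCE + sigmoid
    gW2 = H.T @ dlogits
    gb2 = dlogits.sum()
    dH = np.outer(dlogits, W2) * (1 - H ** 2)
    gW1 = Xtr.T @ dH                        # rows 4, 5, 10, 11 are all zero:
    gb1 = dH.sum(axis=0)                    # those input coordinates are 0 everywhere
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The weight rows for the unseen symbols are untouched by training.
unseen = [4, 5, n + 4, n + 5]
print(np.allclose(W1[unseen], W1_init[unseen]))  # → True
```

Whether a model can escape this depends on the input encoding: a distributed encoding that shares coordinates across symbols lets gradients reach the representation of novel symbols, which is the kind of contrast the computational experiments explore.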
Includes: Supplementary data