Ron van de Klundert
Journal Articles
Human Visual Cortex and Deep Convolutional Neural Network Care Deeply about Object Background
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2024) 36 (3): 551–566.
Published: 01 March 2024
Figures: 9
Abstract
Deep convolutional neural networks (DCNNs) partially predict brain activity during object categorization tasks, but the factors contributing to this predictive power are not fully understood. To investigate them, we compared the activity of four DCNN architectures with EEG recordings obtained from 62 human participants during an object categorization task. Previous physiological studies of object categorization have highlighted the importance of figure-ground segregation, the ability to distinguish objects from their backgrounds, so we asked whether figure-ground segregation could explain the predictive power of DCNNs. Using a stimulus set of identical target objects embedded in different backgrounds, we examined the influence of object background versus object category in both EEG and DCNN activity. Crucially, recombining naturalistic objects with experimentally controlled backgrounds creates a challenging, naturalistic task while retaining experimental control. Our results showed that early EEG activity (<100 msec) and early DCNN layers represent object background rather than object category. We also found that the ability of DCNNs to predict EEG activity is driven primarily by how both systems process object backgrounds rather than object categories. By contrasting the activations of trained and untrained (i.e., random-weight) DCNNs, we demonstrated that figure-ground segregation is a potential prerequisite for the recognition of object features. These findings suggest that both human visual cortex and DCNNs prioritize segregating target objects from their backgrounds in order to perform object categorization. Altogether, our study provides new insight into the mechanisms underlying object categorization: both human visual cortex and DCNNs care deeply about object background.
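The abstract's core comparison, relating DCNN layer activations to EEG activity patterns across a stimulus set, is commonly implemented with representational similarity analysis. The following is a minimal sketch of that idea with synthetic data; the array shapes, stimulus count, and dissimilarity metric are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: representational-similarity comparison between one DCNN
# layer and EEG patterns. All data here is random and for illustration only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 48  # e.g., target objects recombined with different backgrounds

layer_acts = rng.normal(size=(n_stimuli, 512))   # flattened DCNN layer activations
eeg_patterns = rng.normal(size=(n_stimuli, 64))  # EEG channel pattern at one time point

# Representational dissimilarity matrices (condensed form), using
# correlation distance (1 - Pearson r) between stimulus patterns.
rdm_dcnn = pdist(layer_acts, metric="correlation")
rdm_eeg = pdist(eeg_patterns, metric="correlation")

# Spearman correlation between the two RDMs quantifies how similarly the
# DCNN layer and the EEG time point represent the stimulus set.
rho, p = spearmanr(rdm_dcnn, rdm_eeg)
print(f"RDM correlation: rho={rho:.3f}")
```

Repeating this per EEG time point would yield a correlation time course, which is one way the early (<100 msec) background-driven correspondence described above could be quantified.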
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2022) 34 (12): 2390–2405.
Published: 01 November 2022
Figures: 8
Abstract
A Critical Test of Deep Convolutional Neural Networks' Ability to Capture Recurrent Processing in the Brain Using Visual Masking
Recurrent processing is a crucial feature of human visual processing, supporting perceptual grouping, figure-ground segmentation, and recognition under challenging conditions. There is a clear need to incorporate recurrent processing in deep convolutional neural networks, but the computations underlying it remain unclear. In this article, we tested whether a form of recurrence in deep residual networks (ResNets) can capture recurrent processing signals in the human brain. Although ResNets are feedforward networks, they approximate an excitatory additive form of recurrence: repeated excitatory activations in response to a static stimulus. Here, we used ResNets of varying depths (reflecting varying levels of recurrent processing) to explain EEG activity within a visual masking paradigm. Sixty-two humans and 50 artificial agents (10 ResNet models of depths 4, 6, 10, 18, and 34) completed an object categorization task. We show that deeper networks explained more variance in brain activity than shallower networks. Furthermore, all ResNets captured differences in brain activity between unmasked and masked trials, with differences emerging at ∼98 msec from stimulus onset. These early differences indicate that EEG activity reflects "pure" feedforward signals only briefly (up to ∼98 msec). After ∼98 msec, deeper networks showed a significant increase in explained variance, peaking at ∼200 msec, but only in unmasked trials, not masked trials. In summary, we provide clear evidence that excitatory additive recurrent processing in ResNets captures some of the recurrent processing in humans.
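The depth comparison in this abstract rests on how much variance in EEG activity each network's features explain. A minimal sketch of that quantity, in-sample R² from an ordinary-least-squares fit, is shown below on synthetic data; the feature counts per depth, trial count, and the use of plain OLS (rather than, e.g., cross-validated regularized regression) are illustrative assumptions only.

```python
# Hedged sketch: explained variance (R^2) of EEG activity from network
# features. Data and depth-to-feature mapping are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
eeg = rng.normal(size=n_trials)  # EEG amplitude at one time point, per trial

def explained_variance(features: np.ndarray, target: np.ndarray) -> float:
    """In-sample R^2 of an ordinary-least-squares fit of target from features."""
    X = np.column_stack([features, np.ones(len(target))])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1.0 - resid.var() / target.var()

# Compare explained variance across hypothetical network depths; deeper
# networks contribute more features in this toy setup.
for depth, n_feats in [(4, 8), (10, 16), (34, 32)]:
    feats = rng.normal(size=(n_trials, n_feats))
    print(f"ResNet-{depth:>2}: R^2 = {explained_variance(feats, eeg):.3f}")
```

Note that in-sample R² rises mechanically with feature count, which is why analyses of this kind are typically cross-validated before comparing models of different sizes.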