H. Steven Scholte
Journal of Cognitive Neuroscience (2024) 36 (3): 551–566.
Published: 01 March 2024
Deep convolutional neural networks (DCNNs) are able to partially predict brain activity during object categorization tasks, but factors contributing to this predictive power are not fully understood. Our study aimed to investigate the factors contributing to the predictive power of DCNNs in object categorization tasks. We compared the activity of four DCNN architectures with EEG recordings obtained from 62 human participants during an object categorization task. Previous physiological studies on object categorization have highlighted the importance of figure-ground segregation—the ability to distinguish objects from their backgrounds. Therefore, we investigated whether figure-ground segregation could explain the predictive power of DCNNs. Using a stimulus set consisting of identical target objects embedded in different backgrounds, we examined the influence of object background versus object category within both EEG and DCNN activity. Crucially, the recombination of naturalistic objects and experimentally controlled backgrounds creates a challenging and naturalistic task, while retaining experimental control. Our results showed that early EEG activity (< 100 msec) and early DCNN layers represent object background rather than object category. We also found that the ability of DCNNs to predict EEG activity is primarily influenced by how both systems process object backgrounds, rather than object categories. We demonstrated the role of figure-ground segregation as a potential prerequisite for recognition of object features, by contrasting the activations of trained and untrained (i.e., random weights) DCNNs. These findings suggest that both human visual cortex and DCNNs prioritize the segregation of object backgrounds and target objects to perform object categorization. Altogether, our study provides new insights into the mechanisms underlying object categorization as we demonstrated that both human visual cortex and DCNNs care deeply about object background.
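As a rough illustration of the kind of model-brain comparison described above, the sketch below correlates the representational geometry of one DCNN layer with that of EEG responses at a single time point. The array shapes, the correlation-distance RDMs, and the Spearman comparison are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of relating DCNN layer activations to EEG activity via
# representational similarity analysis (RSA). Shapes and distance choices
# are illustrative assumptions, not the authors' exact method.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 48                                     # e.g., objects x backgrounds
layer_acts = rng.normal(size=(n_stimuli, 4096))    # one DCNN layer, flattened units
eeg_100ms  = rng.normal(size=(n_stimuli, 64))      # EEG channels at one time point

def rdm(responses):
    """Representational dissimilarity matrix (condensed form), 1 - Pearson r."""
    return pdist(responses, metric="correlation")

# How well does the layer's representational geometry predict the EEG geometry?
rho, _ = spearmanr(rdm(layer_acts), rdm(eeg_100ms))
print(f"layer-EEG RDM correlation: {rho:.3f}")
```

Run across layers, time points, and stimulus splits that vary the background while holding the target object fixed (or vice versa), this kind of comparison is one way to ask whether shared variance is driven by object background or by object category.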
Journal of Cognitive Neuroscience (2022) 34 (12): 2390–2405.
Published: 01 November 2022
Recurrent processing is a crucial feature in human visual processing supporting perceptual grouping, figure-ground segmentation, and recognition under challenging conditions. There is a clear need to incorporate recurrent processing in deep convolutional neural networks, but the computations underlying recurrent processing remain unclear. In this article, we tested a form of recurrence in deep residual networks (ResNets) to capture recurrent processing signals in the human brain. Although ResNets are feedforward networks, they approximate an excitatory additive form of recurrence. Essentially, this form of recurrence consists of repeating excitatory activations in response to a static stimulus. Here, we used ResNets of varying depths (reflecting varying levels of recurrent processing) to explain EEG activity within a visual masking paradigm. Sixty-two humans and 50 artificial agents (10 ResNet models of depths 4, 6, 10, 18, and 34) completed an object categorization task. We show that deeper networks explained more variance in brain activity compared with shallower networks. Furthermore, all ResNets captured differences in brain activity between unmasked and masked trials, with differences starting at ∼98 msec (from stimulus onset). These early differences indicated that EEG activity reflected “pure” feedforward signals only briefly (up to ∼98 msec). After ∼98 msec, deeper networks showed a significant increase in explained variance, peaking at ∼200 msec, but only within unmasked trials, not masked trials. In summary, we provided clear evidence that excitatory additive recurrent processing in ResNets captures some of the recurrent processing in humans.
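To make the depth comparison concrete, here is a minimal encoding-model sketch in which features from torchvision's resnet18 and resnet34 (standing in for the paper's range of depths) are regressed onto EEG responses. The placeholder stimuli, EEG array, and ridge/cross-validation choices are assumptions, not the authors' analysis.

```python
# Sketch of an encoding-model comparison: do features from a deeper ResNet
# explain more variance in EEG than features from a shallower one?
import torch
import numpy as np
from torchvision.models import resnet18, resnet34
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stim = 60
images = torch.randn(n_stim, 3, 224, 224)    # placeholder stimuli
eeg = rng.normal(size=(n_stim, 64))          # channels at one latency (e.g., ~200 msec)

def penultimate_features(model):
    model.fc = torch.nn.Identity()           # drop the classification head
    model.eval()
    with torch.no_grad():
        return model(images).numpy()

# weights=None keeps the sketch download-free; trained weights would be used in practice
for name, net in [("resnet18", resnet18(weights=None)),
                  ("resnet34", resnet34(weights=None))]:
    X = penultimate_features(net)
    r2 = cross_val_score(RidgeCV(alphas=np.logspace(-2, 4, 7)), X, eeg,
                         cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```

Repeating the regression at each EEG time point yields an explained-variance time course per depth, which is the form of comparison the abstract reports.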
Journal of Cognitive Neuroscience (2022) 34 (4): 655–674.
Published: 05 March 2022
Spatial attention enhances sensory processing of goal-relevant information and improves perceptual sensitivity. Yet, the specific neural mechanisms underlying the effects of spatial attention on performance are still contested. Here, we examine different attention mechanisms in spiking deep convolutional neural networks. We directly contrast effects of precision (internal noise suppression) and two different gain modulation mechanisms on performance on a visual search task with complex real-world images. Unlike standard artificial neurons, biological neurons have saturating activation functions, permitting implementation of attentional gain as gain on a neuron's input or on its outgoing connection. We show that modulating the connection is most effective in selectively enhancing information processing by redistributing spiking activity and by introducing additional task-relevant information, as shown by representational similarity analyses. Precision only produced minor attentional effects in performance. Our results, which mirror empirical findings, show that it is possible to adjudicate between attention mechanisms using more biologically realistic models and natural stimuli.
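The contrast between the two gain mechanisms can be illustrated with a toy saturating neuron: gain on the input shifts how quickly the activation saturates, whereas gain on the outgoing connection rescales what is transmitted downstream. The logistic nonlinearity and the numbers below are illustrative assumptions, not the spiking model used in the study.

```python
# Toy illustration of input gain versus output (connection) gain
# for a neuron with a saturating activation function.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

drive = np.linspace(-4, 8, 7)         # feedforward input to an attended neuron
gain = 2.0

baseline    = sigmoid(drive)          # no attention
input_gain  = sigmoid(gain * drive)   # amplify the input: sharpens, then saturates
output_gain = gain * sigmoid(drive)   # amplify the outgoing connection: scales what is passed on

for d, b, i, o in zip(drive, baseline, input_gain, output_gain):
    print(f"drive={d:5.1f}  baseline={b:.2f}  input-gain={i:.2f}  output-gain={o:.2f}")
```

Because the activation saturates, input gain cannot push a strongly driven neuron much higher, while connection gain still changes what downstream neurons receive; that asymmetry is one intuition for why the two mechanisms can produce different attentional effects.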
Michael Rojek-Giffin, Mael Lebreton, H. Steven Scholte, Frans van Winden, K. Richard Ridderinkhof ...
Journal of Cognitive Neuroscience (2020) 32 (7): 1276–1288.
Published: 01 July 2020
Competitions are part and parcel of daily life and require people to invest time and energy to gain advantage over others and to avoid (the risk of) falling behind. Whereas the behavioral mechanisms underlying competition are well documented, its neurocognitive underpinnings remain poorly understood. We addressed this using neuroimaging and computational modeling of individual investment decisions aimed at exploiting one's counterpart (“attack”) or at protecting against exploitation by one's counterpart (“defense”). Analyses revealed that during attack relative to defense (i) individuals invest less and are less successful; (ii) computations of expected reward are strategically more sophisticated (reasoning level k = 4 vs. k = 3 during defense); (iii) ventral striatum activity tracks reward prediction errors; (iv) risk prediction errors were not correlated with neural activity in either ROI or whole-brain analyses; and (v) successful exploitation correlated with neural activity in the bilateral ventral striatum, left OFC, left anterior insula, left TPJ, and lateral occipital cortex. We conclude that, in economic contests, coming out ahead (vs. not falling behind) involves sophisticated strategic reasoning that engages both reward and value computation areas and areas associated with theory of mind.
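For readers unfamiliar with level-k reasoning, the sketch below shows the basic recursion: a level-0 player invests at random and a level-k player best-responds to a level-(k-1) opponent. The endowment, payoff rule, and investment grid are stylized assumptions for illustration, not the task or model reported in the paper.

```python
# Stylized level-k sketch for an attack-defense contest.
import numpy as np

ENDOWMENT = 10
GRID = np.arange(ENDOWMENT + 1)          # possible investments 0..10

def payoff(own, other, role):
    """Assumed toy rule: the attacker wins the defender's leftover endowment
    if own > other; the defender keeps their leftover endowment if own >= other."""
    if role == "attack":
        return ENDOWMENT - own + (ENDOWMENT - other if own > other else 0)
    return ENDOWMENT - own if own >= other else 0

def level_k_choice(k, role):
    other_role = "defense" if role == "attack" else "attack"
    if k == 0:
        return GRID.copy(), np.full(len(GRID), 1 / len(GRID))   # uniform level-0
    opp_actions, opp_probs = level_k_choice(k - 1, other_role)
    expected = [sum(p * payoff(own, a, role) for a, p in zip(opp_actions, opp_probs))
                for own in GRID]
    best = GRID[int(np.argmax(expected))]
    return np.array([best]), np.array([1.0])                    # best response, played for sure

print("level-4 attacker invests:", level_k_choice(4, "attack")[0][0])
print("level-3 defender invests:", level_k_choice(3, "defense")[0][0])
```

In this scheme, a level-4 attacker best-responds to a model of a level-3 defender, which is the kind of asymmetry in strategic depth the abstract describes.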
Journal of Cognitive Neuroscience (2017) 29 (7): 1239–1252.
Published: 01 July 2017
Perception is inherently subjective, and individual differences in phenomenology are well illustrated by the phenomenon of synesthesia (highly specific, consistent, and automatic cross-modal experiences, in which the external stimulus corresponding to the additional sensation is absent). It is unknown why some people develop synesthesia and others do not. In the current study, we tested whether neural markers related to having synesthesia in the family were evident in brain function and structure. Relatives of synesthetes (who did not have any type of synesthesia themselves) and matched controls read specially prepared books with colored letters for several weeks and were scanned before and after reading using magnetic resonance imaging. Effects of acquired letter–color associations were evident in brain activation. Training-related activation (while viewing black letters) in the right angular gyrus of the parietal lobe was directly related to the strength of the learned letter–color associations (behavioral Stroop effect). Within this obtained angular gyrus ROI, the familial trait of synesthesia related to brain activation differences while participants viewed both black and colored letters. Finally, we compared brain structure using voxel-based morphometry and diffusion tensor imaging to test for group differences and training effects. One cluster in the left superior parietal lobe had significantly more coherent white matter in the relatives compared with controls. No evidence for experience-dependent plasticity was obtained. For the first time, we present evidence suggesting that the (nonsynesthete) relatives of grapheme–color synesthetes show atypical grapheme processing as well as increased brain connectivity.
Journal of Cognitive Neuroscience (2015) 27 (7): 1344–1359.
Published: 01 July 2015
Action selection often requires the transformation of visual information into motor plans. Preventing premature responses may entail the suppression of visual input and/or of prepared muscle activity. This study examined how the quality of visual information affects frontobasal ganglia (BG) routes associated with response selection and inhibition. Human fMRI data were collected from a stop task with visually degraded or intact face stimuli. During go trials, degraded spatial frequency information reduced the speed of information accumulation and response cautiousness. Effective connectivity analysis of the fMRI data showed action selection to emerge through the classic direct and indirect BG pathways, with inputs deriving from both prefrontal and visual regions. When stimuli were degraded, visual and prefrontal regions processing the stimulus information increased connectivity strengths toward BG, whereas regions evaluating visual scene content or response strategies reduced connectivity toward BG. Response inhibition during stop trials recruited the indirect and hyperdirect BG pathways, with input from visual and prefrontal regions. Importantly, when stimuli were nondegraded and processed fast, the optimal stop model contained additional connections from prefrontal to visual cortex. Individual differences analysis revealed that stronger prefrontal-to-visual connectivity covaried with faster inhibition times. Therefore, prefrontal-to-visual cortex connections appear to suppress the fast flow of visual input for the go task, such that the inhibition process can finish before the selection process. These results indicate that response selection and inhibition within the BG emerge through the interplay of top–down adjustments from prefrontal cortex and bottom–up input from sensory cortex.
Journal of Cognitive Neuroscience (2014) 26 (2): 365–379.
Published: 01 February 2014
The visual system has been commonly subdivided into two segregated visual processing streams: The dorsal pathway processes mainly spatial information, and the ventral pathway specializes in object perception. Recent findings, however, indicate that different forms of interaction (cross-talk) exist between the dorsal and the ventral stream. Here, we used TMS and concurrent EEG recordings to explore these interactions between the dorsal and ventral stream during figure-ground segregation. In two separate experiments, we used repetitive TMS and single-pulse TMS to disrupt processing in the dorsal (V5/hMT+) and the ventral (lateral occipital area) stream during a motion-defined figure discrimination task. We presented stimuli that made it possible to differentiate relatively low-level (figure boundary detection) from higher-level (surface segregation) processing steps during figure-ground segregation. Results show that disruption of V5/hMT+ impaired performance related to surface segregation; this effect was mainly found when V5/hMT+ was perturbed in an early time window (100 msec) after stimulus presentation. Surprisingly, disruption of the lateral occipital area resulted in increased performance scores and enhanced neural correlates of surface segregation. This facilitatory effect was also mainly found in an early time window (100 msec) after stimulus presentation. These results suggest a “push–pull” interaction in which dorsal and ventral extrastriate areas are being recruited or inhibited depending on stimulus category and task demands.
Journal of Cognitive Neuroscience (2013) 25 (10): 1579–1596.
Published: 01 October 2013
It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16–22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12–18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555–560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700–707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844–848, 2001], but it is unclear which of the two ingredients—consciousness or attention—is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task to measure learning, suggesting that the memory trace that is formed during inattention is latent until accessed. The results suggest that learning requires consciousness, and not attention, and further strengthen the idea that consciousness is separate from attention.
Journal of Cognitive Neuroscience (2012) 24 (4): 965–974.
Published: 01 April 2012
Consciousness can be manipulated in many ways. Here, we seek to understand whether two such ways, visual masking and pharmacological intervention, share a common pathway in manipulating visual consciousness. We recorded EEG from human participants who performed a backward-masking task in which they had to detect a masked figure from its background (masking strength was varied across trials). In a within-subject design, participants received dextromethorphan (an N-methyl-D-aspartate receptor antagonist), lorazepam (LZP; a GABA-A receptor agonist), scopolamine (a muscarinic receptor antagonist), or placebo. The behavioral results show that detection rate decreased with increasing masking strength and that, of all the drugs, only LZP induced a further decrease in detection rate. Figure-related ERP signals showed three neural events of interest: (1) an early posterior occipital and temporal generator (94–121 msec) that was influenced neither by any pharmacological manipulation nor by masking, (2) a later bilateral perioccipital generator (156–211 msec) that was reduced by masking as well as by LZP (but not by any other drug), and (3) a late bilateral occipital temporal generator (293–387 msec) that was mainly affected by masking. Crucially, only the intermediate neural event correlated with detection performance. In combination with previous findings, these results suggest that LZP and masking both reduce visual awareness by modulating late activity in the visual cortex while leaving early activation intact. These findings provide the first evidence for a common mechanism underlying these two distinct ways of manipulating consciousness.
Journal of Cognitive Neuroscience (2011) 23 (12): 3734–3745.
Published: 01 December 2011
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene-segmentation-related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive (but erroneous) prosaccades, though not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Simon van Gaal, H. Steven Scholte, Victor A. F. Lamme, Johannes J. Fahrenfort, K. Richard Ridderinkhof
Journal of Cognitive Neuroscience (2011) 23 (2): 382–390.
Published: 01 February 2011
The presupplementary motor area (pre-SMA) is considered key in contributing to voluntary action selection during response conflict. Here we test whether individual differences in the ability to select appropriate actions in the face of strong (conscious) and weak (virtually unconscious) distracting alternatives are related to individual variability in pre-SMA anatomy. To this end, we scanned 58 participants, who performed a masked priming task in which conflicting response tendencies were elicited either consciously (through primes that were weakly masked) or virtually unconsciously (strongly masked primes), with structural magnetic resonance imaging. Voxel-based morphometry revealed that individual differences in pre-SMA gray-matter density are related to subjects' ability to voluntarily select the correct action in the face of conflict, irrespective of the awareness level of conflict-inducing stimuli. These results link structural anatomy to individual differences in cognitive control ability, and provide support for the role of the pre-SMA in the selection of appropriate actions in situations of response conflict. Furthermore, these results suggest that flexible and voluntary behavior requires efficiently dealing with competing response tendencies, even those that are activated automatically and unconsciously.
Journal of Cognitive Neuroscience (2008) 20 (11): 2097–2109.
Published: 01 November 2008
In texture segregation, an example of scene segmentation, we can discern two different processes: texture boundary detection and subsequent surface segregation [Lamme, V. A. F., Rodriguez-Rodriguez, V., & Spekreijse, H. Separate processing dynamics for texture elements, boundaries and surfaces in primary visual cortex of the macaque monkey. Cerebral Cortex, 9, 406–413, 1999]. Neural correlates of texture boundary detection have been found in monkey V1 [Sillito, A. M., Grieve, K. L., Jones, H. E., Cudeiro, J., & Davis, J. Visual cortical mechanisms detecting focal orientation discontinuities. Nature, 378, 492–496, 1995; Grosof, D. H., Shapley, R. M., & Hawken, M. J. Macaque-V1 neurons can signal illusory contours. Nature, 365, 550–552, 1993], but whether surface segregation occurs in monkey V1 [Rossi, A. F., Desimone, R., & Ungerleider, L. G. Contextual modulation in primary visual cortex of macaques. Journal of Neuroscience, 21, 1698–1709, 2001; Lamme, V. A. F. The neurophysiology of figure ground segregation in primary visual-cortex. Journal of Neuroscience, 15, 1605–1615, 1995], and whether boundary detection or surface segregation signals can also be measured in human V1, is more controversial [Kastner, S., De Weerd, P., & Ungerleider, L. G. Texture segregation in the human visual cortex: A functional MRI study. Journal of Neurophysiology, 83, 2453–2457, 2000]. Here we present electroencephalography (EEG) and functional magnetic resonance imaging data that have been recorded with a paradigm that makes it possible to differentiate between boundary detection and scene segmentation in humans. In this way, we were able to show with EEG that neural correlates of texture boundary detection are first present in the early visual cortex around 92 msec and then spread toward the parietal and temporal lobes. Correlates of surface segregation first appear in temporal areas (around 112 msec) and from there appear to spread to parietal, and back to occipital areas. After 208 msec, correlates of surface segregation and boundary detection also appear in more frontal areas. Blood oxygenation level-dependent magnetic resonance imaging results show correlates of boundary detection and surface segregation in all early visual areas including V1. We conclude that texture boundaries are detected in a feedforward fashion and are represented at increasing latencies in higher visual areas. Surface segregation, on the other hand, is represented in “reverse hierarchical” fashion and seems to arise from feedback signals toward early visual areas such as V1.