Recent work suggests that changing the architecture of a convolutional neural network (CNN) by introducing a bottleneck in the second layer can yield changes in the learned function. Fully understanding this relationship requires a way of quantitatively comparing trained networks. The fields of electrophysiology and psychophysics have developed a wealth of methods for characterizing visual systems that permit such comparisons. Inspired by these methods, we propose an approach to obtaining spatial and color tuning curves for convolutional neurons that can be used to classify cells in terms of their spatial and color opponency. We perform these classifications for a range of CNNs with different depths and bottleneck widths. Our key finding is that networks with a bottleneck show a strong functional organization: almost all cells in the bottleneck layer become both spatially and color opponent, and cells in the layer following the bottleneck become nonopponent. The color tuning data can further be used to form a rich understanding of how a network encodes color. As a concrete demonstration, we show that shallower networks without a bottleneck learn a complex nonlinear color system, whereas deeper networks with tight bottlenecks learn a simple channel-opponent code in the bottleneck layer. We also develop a method of obtaining a hue sensitivity curve for a trained CNN, which yields high-level insights that complement the low-level findings from the color tuning data. We go on to train a series of networks under different conditions to ascertain the robustness of the discussed results. Ultimately, our methods and findings coalesce with prior art, strengthening our ability to interpret trained CNNs and furthering our understanding of the connection between architecture and learned representation. Trained models and code for all experiments are available at https://github.com/ecs-vlc/opponency.
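To make the idea of a color tuning curve concrete, the following is a minimal sketch (not the authors' exact protocol; the toy model, layer choice, and stimulus design are illustrative assumptions) of probing a single convolutional channel of a trained PyTorch network with uniform patches of varying hue and recording its response at each hue.

# Minimal sketch: hue tuning curve for one convolutional channel.
# The model, probed layer, and stimulus construction are assumptions,
# not the published method; see the linked repository for the real code.
import math
import torch
import torch.nn as nn

def hue_to_rgb(angle):
    """Map a hue angle (radians) to an RGB triple on a simple color circle."""
    return torch.tensor([
        0.5 + 0.5 * math.cos(angle),
        0.5 + 0.5 * math.cos(angle - 2 * math.pi / 3),
        0.5 + 0.5 * math.cos(angle + 2 * math.pi / 3),
    ])

def hue_tuning_curve(model, layer, channel, size=32, n_hues=64):
    """Response of one channel's central unit to full-field patches of each hue."""
    activation = {}

    def hook(_module, _inputs, output):
        activation['value'] = output

    handle = layer.register_forward_hook(hook)
    responses = []
    model.eval()
    with torch.no_grad():
        for i in range(n_hues):
            rgb = hue_to_rgb(2 * math.pi * i / n_hues)
            stimulus = rgb.view(1, 3, 1, 1).expand(1, 3, size, size)
            model(stimulus)
            fmap = activation['value'][0, channel]
            responses.append(fmap[fmap.shape[0] // 2, fmap.shape[1] // 2].item())
    handle.remove()
    return responses

# Example usage with a hypothetical network whose second layer is a 2-channel bottleneck.
toy = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1), nn.ReLU(),   # bottleneck layer
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
)
curve = hue_tuning_curve(toy, toy[2], channel=0)

A cell whose curve peaks at one hue and is suppressed at roughly the opposite hue would be classified as color opponent; a flat or single-signed curve would indicate a nonopponent cell.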
