Figure 3:
A schematic of the network configuration used for the experiments. Each layer of the network has the same dimensions, though the density and receptive field size tend to increase with layer depth. The input and edge response layers have fixed weights, as does the feedforward input to the border ownership layer. The border ownership, grouping, and proto-object layers all perform learning and are interconnected through feedforward and feedback projections. Each layer is parameterized by the number of neurons in its competitive columns, with different network topologies using different configurations. Layers show actual responses for the depicted input, with polarity colored according to preferred orientation for the border ownership layer.
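The layer stack described in the caption can be sketched in code. This is a hypothetical illustration only: the layer names follow the figure, but the `column_size` values and the specific projection wiring are placeholder assumptions, not values from the paper.

```python
# Hypothetical sketch of the layer hierarchy from Figure 3.
# column_size values are illustrative placeholders, not the paper's settings.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    learns: bool       # whether the layer's weights are updated during training
    column_size: int   # neurons per competitive column (topology-dependent)

# Input and edge response layers have fixed weights; the feedforward input
# to the border ownership layer is also fixed, but that layer itself learns.
layers = [
    Layer("input", learns=False, column_size=1),
    Layer("edge_response", learns=False, column_size=1),
    Layer("border_ownership", learns=True, column_size=4),
    Layer("grouping", learns=True, column_size=4),
    Layer("proto_object", learns=True, column_size=8),
]

# The learning layers are interconnected through feedforward and feedback
# projections; here each adjacent pair gets both directions.
learning = [layer.name for layer in layers if layer.learns]
projections = []
for lower, upper in zip(learning, learning[1:]):
    projections.append((lower, upper, "feedforward"))
    projections.append((upper, lower, "feedback"))

print(projections)
```

The sketch only captures which layers learn and how the learning layers are reciprocally connected; receptive field sizes and densities, which the caption notes increase with depth, are omitted.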
