Figure 7:
(A) Illustration of the four conditions in the pro-saccade/anti-saccade task. The agent has to memorize the location of the probe and make a pro- or anti-saccade after a delay, depending on the trial type indicated by the cue (white or black fixation point). The agent thus has to integrate the information throughout the trial and make an “exclusive or” decision upon presentation of the go signal. Of note, the gating policy in this trial, depicted in panel C, is applicable in each of the four conditions in this task. (B) Convergence rates for 2×500 simulated agents of two different types. The solid line depicts convergence with WorkMATe. The dotted line depicts performance with a modified version of the model, where the gating policy is not learned but correctly predefined and fixed beforehand. (C) Policy (see Figure 2E) of an example agent after convergence, during an anti-saccade trial with a “left” probe. This gating policy applies to all trial conditions.
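The “exclusive or” structure of the task can be made concrete with a minimal sketch (not taken from the paper's code; function and variable names are illustrative): the correct saccade direction depends jointly on the cue-indicated trial type and the memorized probe location, so neither stimulus alone determines the response.

```python
def correct_saccade(trial_type, probe_side):
    """Return the rewarded saccade direction for one trial.

    trial_type: 'pro' (saccade toward the probe) or 'anti' (away from it).
    probe_side: 'left' or 'right', the memorized probe location.
    """
    opposite = {'left': 'right', 'right': 'left'}
    return probe_side if trial_type == 'pro' else opposite[probe_side]

# The four conditions illustrated in panel A:
for trial_type in ('pro', 'anti'):
    for probe_side in ('left', 'right'):
        print(trial_type, probe_side, '->', correct_saccade(trial_type, probe_side))
```

Because the mapping is an XOR over (trial type, probe side), the agent must retain both pieces of information across the delay to respond correctly at the go signal.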
