The neurophysiology of eye movements has been studied extensively, and several computational models have been proposed for the decision-making processes that underlie the generation of eye movements towards a visual stimulus under uncertainty. One class of models, known as linear rise-to-threshold models, provides an economical, yet broadly applicable, explanation for the observed
variability in the latency between the onset of a peripheral visual target and the saccade towards it. So
far, however, these models do not account for the dynamics of learning across a sequence of stimuli, and they do not apply to situations in which subjects are exposed to events with conditional probabilities. In this methodological paper, we extend the class of linear rise-to-threshold models to address these limitations. Specifically, we reformulate previous models in terms of a generative, hierarchical model, by combining two separate sub-models that account for the interplay between learning of target locations across trials and the decision-making process within trials. We derive a maximum-likelihood scheme for
parameter estimation as well as model comparison on the basis of log-likelihood ratios. The utility of the
integrated model is demonstrated by applying it to empirical saccade data acquired from three healthy
subjects. Model comparison is used (i) to show that eye movements reflect not only marginal but also
conditional probabilities of target locations, and (ii) to reveal subject-specific learning profiles over trials.
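The core intuition of linear rise-to-threshold models can be illustrated with a minimal simulation, in the spirit of the LATER model (Carpenter & Williams, 1995): a decision signal rises linearly towards a fixed threshold at a rate drawn anew on each trial from a normal distribution, and the latency is the time at which the threshold is crossed. All parameter values below (threshold, baseline rate, the log-probability scaling of the mean rate) are illustrative assumptions, not quantities estimated in this paper.

```python
import numpy as np

def simulate_latencies(p_target, n=10_000, theta=1.0, seed=0):
    """Simulate saccadic latencies for a target location with prior
    probability p_target. The decision signal rises linearly from 0 to
    threshold theta; the rise rate is normally distributed across
    trials, with a mean that is assumed to scale with log probability."""
    rng = np.random.default_rng(seed)
    mu = 5.0 + np.log(p_target)        # assumed log-probability scaling
    rates = rng.normal(mu, 1.0, size=n)
    rates = rates[rates > 0]           # discard trials with no rise
    return theta / rates               # latency = threshold / rate

# A more probable target location yields a faster mean rise and hence
# shorter latencies, qualitatively matching the empirical effect.
lat_rare = simulate_latencies(p_target=0.1)
lat_freq = simulate_latencies(p_target=0.9)
print(np.median(lat_rare) > np.median(lat_freq))  # True
```

Under this parameterisation, the trial-by-trial variability in latency arises entirely from the variability of the rise rate, which is what makes the model class economical.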
These individual learning profiles are sufficiently distinct that test samples can be successfully mapped
onto the correct subject by a naïve Bayes classifier. Altogether, our approach extends the class of linear
rise-to-threshold models of saccadic decision making, overcomes some of their previous limitations, and
enables statistical inference about both the learning of target locations across trials and the decision-making process within trials.
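The subject-identification step mentioned above can be sketched with a Gaussian naïve Bayes classifier built from scratch. The two-dimensional "learning profile" features and the three synthetic subjects below are hypothetical stand-ins for the empirical quantities in the paper; only the classification rule (maximum posterior under independent Gaussian features) is the named technique.

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])  # per-subject means

# Training data: 50 samples per synthetic subject, unit-variance features.
X_train = np.vstack([rng.normal(m, 1.0, size=(50, 2)) for m in means])
y_train = np.repeat([0, 1, 2], 50)

def fit(X, y):
    """Estimate per-class feature means, variances, and log priors."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) for c in classes])
    logp = np.log([np.mean(y == c) for c in classes])
    return mu, var, logp

def predict(x, mu, var, logp):
    """Return the class with the highest posterior, assuming
    conditionally independent Gaussian features (naive Bayes)."""
    loglik = -0.5 * (np.log(2 * np.pi * var)
                     + (x - mu) ** 2 / var).sum(axis=1)
    return int(np.argmax(loglik + logp))

params = fit(X_train, y_train)
x_test = rng.normal(means[1], 1.0)   # a fresh sample from subject 1
print(predict(x_test, *params))      # expected to print 1 for most draws
```

With class means separated by several feature standard deviations, as assumed here, held-out samples map onto the correct subject with high accuracy, mirroring the result reported for the empirical learning profiles.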