Integrated Bayesian models of learning and decision making for saccadic eye movements

Cited by: 31
Authors
Brodersen, Kay H. [1 ,2 ]
Penny, Will D. [1 ]
Harrison, Lee M. [1 ]
Daunizeau, Jean [1 ]
Ruff, Christian C. [4 ]
Duzel, Emrah [4 ]
Friston, Karl J. [1 ]
Stephan, Klaas E. [1 ,3 ]
Affiliations
[1] UCL, Inst Neurol, Wellcome Trust Ctr Neuroimaging, London WC1N 3BG, England
[2] Univ Oxford, John Radcliffe Hosp, Ctr Funct Magnet Resonance Imaging Brain FMRIB, Oxford OX3 9DU, England
[3] Univ Zurich, Inst Empir Res Econ, Branco Weiss Lab, CH-8006 Zurich, Switzerland
[4] UCL, Inst Cognit Neurosci, London WC1N 3AR, England
Funding
Wellcome Trust (UK);
Keywords
Saccades; Decision making; Reaction time; Bayesian learning; Model comparison;
DOI
10.1016/j.neunet.2008.08.007
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The neurophysiology of eye movements has been studied extensively, and several computational models have been proposed for the decision-making processes that underlie the generation of eye movements towards a visual stimulus in a situation of uncertainty. One class of models, known as linear rise-to-threshold models, provides an economical, yet broadly applicable, explanation for the observed variability in the latency between the onset of a peripheral visual target and the saccade towards it. So far, however, these models do not account for the dynamics of learning across a sequence of stimuli, and they do not apply to situations in which subjects are exposed to events with conditional probabilities. In this methodological paper, we extend the class of linear rise-to-threshold models to address these limitations. Specifically, we reformulate previous models in terms of a generative, hierarchical model by combining two separate sub-models that account for the interplay between learning of target locations across trials and the decision-making process within trials. We derive a maximum-likelihood scheme for parameter estimation as well as model comparison on the basis of log likelihood ratios. The utility of the integrated model is demonstrated by applying it to empirical saccade data acquired from three healthy subjects. Model comparison is used (i) to show that eye movements reflect not only marginal but also conditional probabilities of target locations, and (ii) to reveal subject-specific learning profiles over trials. These individual learning profiles are sufficiently distinct that test samples can be successfully mapped onto the correct subject by a naive Bayes classifier. Altogether, our approach extends the class of linear rise-to-threshold models of saccadic decision making, overcomes some of their previous limitations, and enables statistical inference both about learning of target locations across trials and the decision-making process within trials.
(C) 2008 Elsevier Ltd. All rights reserved.
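As a rough illustration of the model class the abstract refers to, the sketch below simulates a LATER-style linear rise-to-threshold process: on each trial a decision signal rises from a baseline towards a threshold at a normally distributed rate, and the saccade latency is the time to threshold. The specific parameter values, the baseline shift proportional to log prior probability, and the function name are illustrative assumptions for this sketch, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def later_latencies(prior_p, n=10_000, theta=1.0, mu=5.0, sigma=1.0):
    """Sketch of a LATER-style rise-to-threshold simulation.

    A decision signal climbs linearly from a baseline s0 to a
    threshold theta at a rate r drawn from N(mu, sigma^2) on each
    trial; latency is (distance to threshold) / rate.  Here the
    baseline is assumed to shift with the log prior probability of
    the target location (illustrative assumption).
    """
    s0 = 0.2 * (np.log(prior_p) - np.log(0.5))  # shift relative to p = 0.5
    r = rng.normal(mu, sigma, n)                # trial-by-trial rate of rise
    r = r[r > 0]                                # discard non-rising trials
    return (theta - s0) / r                     # time to reach threshold

lat_rare   = later_latencies(prior_p=0.1)  # improbable target location
lat_likely = later_latencies(prior_p=0.9)  # probable target location

# A more probable target starts closer to threshold, so latencies shorten.
print(np.median(lat_rare), np.median(lat_likely))
```

This reproduces the qualitative effect the abstract builds on: median latency decreases as the (learned) probability of the target location increases, which is what allows model comparison to ask whether subjects track marginal or conditional probabilities.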
Pages: 1247-1260 (14 pages)