Learning from examples, agent teams and the concept of reflection

Cited by: 3
Authors
Beyer, U [1 ]
Smieja, FJ [1 ]
Affiliation
[1] German National Research Centre for Computer Science (GMD), D-53754 Sankt Augustin, Germany
Keywords
agent accuracy; approximation rate; reflection; confidence; team
DOI
10.1142/S0218001496000190
CLC classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Learning from examples has a number of distinct algebraic forms, depending on what is to be learned from the available information. One of these forms is G: x ↦ y, where the input-output tuple (x, y) is the available information and G represents the process determining the mapping from x to y. Various models, y = f(x), of G can be constructed using the information from the (x, y) tuples. In general, and for real-world problems, it is not reasonable to expect an exact representation of G to be found (i.e. a formula that is correct for all possible (x, y)). The modeling procedure involves finding a satisfactory set of basis functions and their combination, choosing a coding for (x, y), and then adjusting all free parameters in an approximation process to construct a final model. The approximation process can bring the accuracy of the model to a certain level, after which further improvement becomes increasingly expensive. Additional improvement may be gained by constructing a number of agents {α}, each of which develops its own model f_α. These may then be combined in a second modeling phase to synthesize a team model. If each agent has the ability for internal reflection, the combination in a team framework becomes more profitable. We describe reflection and the generation of a confidence function: the agent's estimate of the correctness of each of its predictions. The presence of reflective information is shown to significantly increase the performance of a team.
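As a concrete illustration of the team-plus-reflection idea sketched in the abstract, the following is a minimal sketch, not the authors' implementation: each agent pairs a bootstrap-trained model f_α with a reflective confidence function c_α, and the team prediction weights each agent's output by its own confidence. All names and modelling choices here (Agent, team_predict, the polynomial models, the residual-based confidence estimate, the sin(3x) toy target) are illustrative assumptions.

```python
import numpy as np

def G(x, rng):
    """Toy target process G: a smooth 1-D mapping plus observation noise."""
    return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

class Agent:
    """One agent alpha: a model f_alpha plus a reflective confidence
    function c_alpha estimating how correct f_alpha is near an input x."""

    def __init__(self, x, y, degree, rng):
        # Each agent fits its own polynomial model on a bootstrap resample,
        # so the agents develop different approximations of G.
        idx = rng.integers(0, len(x), len(x))
        self.train_x, train_y = x[idx], y[idx]
        self.coef = np.polyfit(self.train_x, train_y, degree)
        # Reflection (one crude realization): remember the agent's own
        # in-sample residuals, to be reused as a local error estimate.
        self.resid = np.abs(np.polyval(self.coef, self.train_x) - train_y)

    def predict(self, x):
        return np.polyval(self.coef, x)

    def confidence(self, x):
        # Confidence c_alpha(x): distance-weighted average of the agent's
        # residuals near x, squashed so small error -> confidence near 1.
        d = np.abs(np.asarray(x)[:, None] - self.train_x[None, :])
        w = np.exp(-d / 0.2)
        local_err = (w * self.resid[None, :]).sum(axis=1) / w.sum(axis=1)
        return 1.0 / (1.0 + local_err)

def team_predict(agents, x):
    """Second modelling phase: synthesize a team model by weighting each
    agent's prediction with that agent's own confidence at x."""
    preds = np.stack([a.predict(x) for a in agents])
    confs = np.stack([a.confidence(x) for a in agents])
    return (confs * preds).sum(axis=0) / confs.sum(axis=0)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = G(x, rng)
agents = [Agent(x, y, degree, rng) for degree in (3, 5, 7, 9)]

x_test = np.linspace(-1.0, 1.0, 101)
rmse = np.sqrt(np.mean((team_predict(agents, x_test) - np.sin(3 * x_test)) ** 2))
print(f"team RMSE against the noise-free target: {rmse:.4f}")
```

In the paper the confidence function is generated through internal reflection; the residual-based estimate above merely stands in for it to keep the sketch self-contained.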
Pages: 251-272 (22 pages)
References
35 items in total
[1] [Anonymous]. Neural Computation.
[2] Baum E.B., Haussler D. What size net gives valid generalization? Neural Computation, 1989, 1(1): 151-160.
[3] Belur V. Nearest neighbor (NN). 1986.
[4] Beyer U. Technical Report 732, German National Research Centre for Computer Science (GMD), 1993.
[5] Blumer A., Ehrenfeucht A., Haussler D., Warmuth M.K. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 1989, 36(4): 929-965.
[6] Dempster A.P. Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics, 1967, 38(2): 325-339.
[7] Efron B. Bootstrap methods: another look at the jackknife (1977 Rietz Lecture). Annals of Statistics, 1979, 7(1): 1-26.
[8] Frank J. Proceedings of the 11th ICPR, The Hague, Netherlands, 1992: 611.
[9] Geman S., Bienenstock E., Doursat R. Neural networks and the bias/variance dilemma. Neural Computation, 1992, 4(1): 1-58.
[10] Hampshire J.B. Proceedings of the 1990 Connectionist Models Summer School, 1990.