Pixel level processing - Why, what, and how?

Cited by: 49
Authors
El Gamal, A [1 ]
Yang, D [1 ]
Fowler, B [1 ]
Affiliations
[1] Stanford Univ, Informat Syst Lab, Stanford, CA 94305 USA
Source
SENSORS, CAMERAS, AND APPLICATIONS FOR DIGITAL PHOTOGRAPHY | 1999 / Vol. 3650
Keywords
pixel level processing; pixel level ADC;
DOI
10.1117/12.342849
CLC Classification Number
O43 [Optics];
Subject Classification Number
070207 ; 0803 ;
Abstract
Pixel level processing promises many significant advantages, including high SNR, low power, and the ability to adapt image capture and processing to different environments by processing signals during integration. However, the severe limitation on pixel size has precluded its mainstream use. In this paper we argue that CMOS technology scaling will make pixel level processing increasingly popular. Since pixel size is limited primarily by optical and light collection considerations, as CMOS technology scales, an increasing number of transistors can be integrated at each pixel. We first demonstrate that our argument is supported by the evolution of CMOS image sensors from PPS to APS. We then briefly survey existing work on analog pixel level processing and pixel level ADC. We classify analog processing into intrapixel and interpixel. Intrapixel processing is mainly used to improve sensor performance, while interpixel processing is used to perform early vision processing. We briefly describe the operation and architecture of our recently developed pixel level MCBS ADC. Finally, we discuss future directions in pixel level processing. We argue that interpixel analog processing is not likely to become mainstream, even for computational sensors, due to the poor scaling of analog circuits compared to digital circuits. We argue that pixel level A/D conversion will become increasingly popular since it minimizes analog processing and requires only simple and imprecise circuits to implement. We then discuss the inclusion of digital memory and interpixel digital processing in future technologies to implement programmable digital pixel sensors.
Pages: 2-13
Page count: 12