Auto-Adaptive Multi-Sensor Architecture

Abstract: To overcome luminosity problems, modern embedded vision systems often integrate technologically heterogeneous sensors. Such systems must also provide different functionalities, such as photo or video mode, image improvement or data fusion, according to the user environment. Therefore, today's vision systems should be context-aware and adapt their performance parameters automatically. In this context, we propose a novel auto-adaptive architecture enabling on-the-fly, automatic frame rate and resolution adaptation through a frequency tuning method. This method also aims to reduce power consumption, as an alternative to existing power gating methods. Performance evaluation on an FPGA implementation demonstrates an inter-frame adaptation capability with a relatively low area overhead.

I. INTRODUCTION

For decades, the capability of computer vision systems has increased thanks to the multiplication of integrated sensors. Multi-sensor systems enable many high-level vision applications such as stereo vision, data fusion [1] or 3D stereo view [2]. Smart camera networks also take advantage of the multi-sensor concept for large-scale surveillance applications [3]. More and more vision systems involve several heterogeneous sensors, such as color, infrared or intensified low-light sensors [4], to overcome variable luminosity conditions or to improve application robustness.

Frequently, the considered vision system accomplishes various tasks such as video streaming, photo capture or high-level processing (e.g., face detection, object tracking). Each of these tasks imposes different computing performance requirements on the hardware resources, according to the applicative context and the sensor used. That is why today's vision systems have to be context-aware and able to adapt their performance to the user environment [5]. Fig. 1 illustrates the differences between video and photo user mode parameters: latency, frame rate, resolution, image quality and power consumption. While a video mode needs a high frame rate and low latency, a photo mode rather expects a higher resolution and higher image quality. In this context, we expect the system architecture to adapt itself on-the-fly to the required frame rate or resolution, while minimizing the use-case transition time when the user mode changes. In addition, the frame rate and the resolution of the involved sensors are not supposed to be known in advance. Although numerous adaptable architectures exist for high-performance image processing [6]–[8], and even for energy-aware heterogeneous vision systems [2], they do not enable such dynamic adaptation of the frame rate or the resolution.

In this paper, we propose a novel pixel frequency tuning approach for heterogeneous multi-sensor vision systems.
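The link between pixel clock frequency, resolution and frame rate that motivates the frequency tuning approach can be illustrated with a back-of-the-envelope calculation. The function and the blanking margin below are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
def required_pixel_clock(width, height, fps, blanking_factor=1.1):
    """Pixel clock (Hz) needed to sustain `fps` frames per second at
    width x height, with a margin for line/frame blanking intervals.
    The 10% default margin is an assumption for illustration."""
    return width * height * fps * blanking_factor

# Switching user mode changes the clock the sensor pipeline must run at:
photo = required_pixel_clock(3840, 2160, 5)   # photo mode: ~45.6 MHz
video = required_pixel_clock(1280, 720, 60)   # video mode: ~60.8 MHz
```

Tuning the pixel clock to exactly the required frequency, rather than always running at the worst-case rate and gating idle cycles, is what allows such an architecture to trade frame rate against resolution while saving dynamic power.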
Document type: Conference paper

Cited literature: [10 references]

https://hal-upec-upem.archives-ouvertes.fr/hal-01265219
Contributor: Eva Dokladalova
Submitted on: Monday, February 1, 2016, 3:58:39 PM
Last modification on: Thursday, February 7, 2019, 5:23:56 PM
Long-term archiving on: Friday, November 11, 2016, 10:49:14 PM

File: ISCAS_2016_for_Review.pdf (produced by the authors)

Identifiers

  • HAL Id: hal-01265219, version 1

Citation

Ali Isavudeen, Nicolas Ngan, Eva Dokladalova, Mohamed Akil. Auto-Adaptive Multi-Sensor Architecture. IEEE International Symposium on Circuits and Systems (ISCAS 2016), May 2016, Montréal, Canada. ⟨hal-01265219⟩

Metrics: 395 record views, 230 file downloads