
AI Semiconductor Design Contest

2023.10.26

TL;DR : One of the first projects that got me into model compression and DNN hardware implementation. My main contribution was pruning and quantization of the YOLOv3-tiny model.
Design Title: Deep Learning ISP Sensor for Low-Power & High-Efficiency Inference.


Brief Overview

In many real-life settings, sensors rarely see curated images; instead they receive noisy, degraded inputs in which objects are hard to detect. For this project, we assumed an object detection task and proposed a design idea for a deep learning Image Signal Processor (ISP) + YOLO model that is efficient during inference. My main contribution was the pruning and quantization of the YOLOv3-tiny model, reducing the memory and computational resources needed at runtime.


Motivation & Background

IoT deep learning sensors must achieve the following performance goals:

  1. Stable inference with intermittent and unstable power supplies.
  2. Low inference cost in terms of memory and computation.

ISPs are integral to cameras, smartphones, CCTVs, and various other computer vision devices. They consist of modules such as Color Filter, Noise Reduction, White Balance, Color Correction, Gamma Correction, and Color Enhancement. These modules involve many processing steps that demand significant computation and power, which makes ISPs vulnerable to intermittent power supplies and drains the power budget quickly.

We focus on object detection (one of the most common real-world tasks in autonomous driving), which requires not only an ISP to preprocess the image but also a detection backbone, adding to the computational burden.

Therefore, we proposed a design idea for a deep learning Image Signal Processor (ISP) + YOLO model that is memory- and computation-efficient during inference.


Design Idea

Pruning of YOLOv3-tiny
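
The write-up does not specify the exact pruning criterion or sparsity target, so the following is a minimal sketch of one common approach: unstructured L1-magnitude pruning applied to the convolutional layers of a YOLOv3-tiny-style backbone with PyTorch's `torch.nn.utils.prune`. The stand-in model and the 30% sparsity value are illustrative assumptions, not the project's actual configuration.

```python
# Sketch: L1-magnitude pruning of Conv2d weights (assumed method, not the
# project's exact recipe). A real run would load trained YOLOv3-tiny weights.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_layers(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Zero out the smallest-magnitude weights in every Conv2d layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the zeroed weights in
    return model

# Stand-in convolutional block in place of the full YOLOv3-tiny backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.LeakyReLU(0.1),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(0.1),
)
model = prune_conv_layers(model, amount=0.3)  # 30% sparsity is an assumption

# Report the resulting per-layer sparsity.
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        zeros = float((module.weight == 0).sum()) / module.weight.numel()
        print(f"{name}: {zeros:.1%} weights zeroed")
```

In practice the pruned model would then be fine-tuned on the detection dataset to recover any lost accuracy before deployment.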

Quantization of YOLOv3-tiny
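
Likewise, the quantization scheme used in the project is not detailed here, so below is a minimal sketch of post-training static INT8 quantization in PyTorch's eager mode. The tiny stand-in network, the random calibration tensors, and the `fbgemm` backend choice are all illustrative assumptions.

```python
# Sketch: post-training static INT8 quantization (assumed scheme).
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for a YOLOv3-tiny-style conv backbone with quant/dequant stubs."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 at input
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.relu2 = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 at output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu1(self.conv1(x))
        x = self.relu2(self.conv2(x))
        return self.dequant(x)

model = TinyBackbone().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

# Calibration pass: feed representative inputs so observers record activation
# ranges. Random tensors stand in for real calibration images here.
with torch.no_grad():
    for _ in range(8):
        model(torch.randn(1, 3, 416, 416))

torch.quantization.convert(model, inplace=True)
print(model)  # Conv2d layers are replaced by quantized int8 counterparts
```

Static INT8 quantization roughly quarters the weight memory relative to fp32 and lets inference run on integer arithmetic, which is the property that matters for a low-power sensor deployment like this one.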

Ablation Studies