The Next-Generation CMOS Image Sensor: All 4-Coupled (A4C) Sensor



For animals, including human beings, eyes play a critical role in measuring distance in order to survive. Predators use their eyes to measure the distance to their prey when hunting, while prey use their eyes to avoid danger by noticing approaching predators. This competition for survival through vision began 540 million years ago, when living creatures with eyes first appeared on Earth. Since then, the number of animal species with eyes of different structures grew exponentially. Among the animals that appeared in the early stages of this evolution were a diverse range of species, from Cambropachycope, which had one eye, to Opabinia and Kylinxia, which had five. Now, however, most animals have two eyes [1].

Seeing an object through two eyes causes a slight difference in the image location received by the left and right eyes. The brain uses this difference to estimate the distance to the object. This is called binocular disparity[1], and most animals measure distance using binocular disparity through two eyes.

A technology called phase detection auto focus (PDAF), which adjusts the focus on a subject using binocular disparity, has been applied in smartphone cameras [2]. SK hynix has developed a further improved All 4-Coupled (A4C) image sensor, which enhances both image quality and auto focus performance by using disparity at every pixel while reading color information. This article takes a look at the three advantages of SK hynix's newly developed A4C image sensor: quick and accurate focus detection, high-resolution images, and a variety of applications.

A4C Sensor’s First Advantage – Quick and Accurate Focus Detection

The structure of the A4C sensor is shown in figure 1. Like the conventional Quad [3] sensor, it has photodiodes that convert light into an electric current and color filters that selectively absorb certain wavelengths of light. Unlike the Quad sensor, however, one micro lens[2] covers each group of four same-color pixels in the top-left (TL), top-right (TR), bottom-left (BL), and bottom-right (BR) positions.

Fig. 1     Structure of the A4C sensor

The auto focus feature of the A4C sensor is based on a simple principle: the subject is in focus if different rays of light from the subject converge to one focal point, and out of focus if they do not. In other words, if there is no difference in the intensity values of the four pixels under one micro lens, the subject is in focus; if there is a difference, it is out of focus. For example, when capturing a subject as in figure 2, if the image is in focus like the first example in figure 2, the upper light path shown in red and the lower light path shown in blue reach the same group of pixels under one micro lens. However, as in the second and third examples in figure 2, if the subject is in front of or behind the focal plane, the rays traveling the top and bottom paths through the lens do not converge on one micro lens. They reach different pixels, causing disparity[3]. By analyzing this disparity, the sensor can determine how far, and in which direction, the module lens must move to bring the subject into focus.
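The comparison of pixel intensities under each micro lens can be sketched in a few lines of NumPy. This is a minimal illustration, not SK hynix's algorithm: it assumes the raw A4C frame is a plain 2D array in which every 2x2 block (TL, TR / BL, BR) shares one micro lens, and the function name is invented for this example.

```python
import numpy as np

def a4c_focus_error(raw):
    """Estimate a global focus-error signal from a raw A4C frame.

    If the subject is in focus, the four pixels under each micro
    lens receive the same intensity, so both the left-right and
    top-bottom differences are ~0. A nonzero value indicates
    defocus, and its sign hints at which way to move the lens.
    """
    tl = raw[0::2, 0::2].astype(np.float64)  # top-left pixels
    tr = raw[0::2, 1::2].astype(np.float64)  # top-right pixels
    bl = raw[1::2, 0::2].astype(np.float64)  # bottom-left pixels
    br = raw[1::2, 1::2].astype(np.float64)  # bottom-right pixels

    # Horizontal signal: left half vs right half of each micro lens
    horiz = np.mean((tl + bl) - (tr + br))
    # Vertical signal: top half vs bottom half of each micro lens
    vert = np.mean((tl + tr) - (bl + br))
    return horiz, vert
```

Because every micro lens contributes, the signal is averaged over the whole frame, which is why per-pixel disparity readout helps in low light.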

Fig. 2     Disparity detection according to distance to subject

Compared to existing PDAF technology, the A4C sensor can calculate disparity at every pixel. This yields high accuracy: in low-light environments below 10 lux, it can achieve more than 10 times the accuracy of conventional PDAF. And unlike conventional PDAF, which leverages binocular disparity, the A4C sensor leverages the disparity among the four pixels (top/bottom and left/right) under each micro lens. As a result, its focus detection performance is outstanding for subjects with either horizontal or vertical features. The video below demonstrates the performance gap between conventional AF and A4C AF.

Video.  Comparing AF performance between conventional sensor and A4C sensor

(Left: normal AF / Right: A4C AF)

A4C Sensor’s Second Advantage – High-resolution Image

The A4C sensor's output image can be produced either by grouping the four pixels under each micro lens to improve sensitivity, or by outputting each individual pixel to enhance resolution (e.g., a 50 MP A4C sensor can output an image at 50 MP pixel resolution[4] or at 12.5 MP micro lens resolution[5]). When the pixels under each micro lens are grouped, the resolution drops to one-fourth of the individual-pixel resolution, but light sensitivity is enhanced four-fold. Therefore, micro lens resolution is more favorable when there is not enough light, such as at nighttime or in low-light environments.
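The grouping step itself is a simple 2x2 binning operation. The sketch below assumes the same plain 2D raw layout as before (each 2x2 block under one micro lens) and uses an illustrative function name; real sensors perform binning in hardware, not in software like this.

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2x2 micro-lens group into one output pixel.

    Summing four pixels quadruples the collected signal, which is
    why the binned (micro lens resolution) mode is preferred in
    low light, at the cost of one-fourth the pixel count.
    """
    h, w = raw.shape
    # Split rows and columns into pairs, then sum over each pair
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```

A 50 MP frame binned this way yields a 12.5 MP frame, matching the micro lens resolution described above.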

On the other hand, individual pixels are more favorable for improving resolution where there is enough light, such as during the day or outdoors. Because pixel resolution is four times higher than micro lens resolution, clarity and detail are enhanced. However, using the A4C sensor at pixel resolution raises an image quality issue caused by disparity. When there are several subjects at different distances in one scene, as in figure 3, the lens captures a high-resolution image of the subject that is in focus, shown with the solid line. Disparity, however, occurs for any subjects that are out of focus, such as the one shown with the dotted line in figure 3. This creates a difference in intensity between neighboring pixels under one micro lens, producing quality degradation in the form of grid patterns.

Fig. 3     Image of several subjects at different distances in one scene

Figure 4 is an image of subjects at different distances taken with an A4C sensor. The green subject closest to the sensor is in focus, while the other objects are out of focus and blurred. If you zoom in on the white area in figure 4, you can see in the left image the grid-like quality degradation that occurs due to disparity. To correct this degradation, SK hynix's A4C sensor is equipped with proprietary A4C Phase Correction (APC) technology and Quad-to-Bayer (Q2B) technology, which process the image to the improved quality shown in the right image of figure 4. In particular, SK hynix's APC algorithm analyzes the rays of light reflected off the subject to identify the module lens path they took to reach the image sensor. It then maintains the level of detail in areas where the subject is in focus, while correcting image artifacts caused by disparity in out-of-focus areas.
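APC and Q2B are proprietary, so their details are not public. As a toy stand-in that conveys only the core idea (keep detail where the four pixels agree, smooth where they disagree), one could write something like the following; the function name, threshold, and strategy are all assumptions for illustration and bear no relation to the actual algorithms.

```python
import numpy as np

def toy_grid_artifact_correction(raw, threshold=8.0):
    """Toy disparity-aware correction (NOT SK hynix's APC/Q2B).

    Where the four pixels under a micro lens disagree strongly
    (out-of-focus regions, which show grid artifacts), replace
    them with their mean; where they agree (in-focus regions),
    keep the full-resolution detail untouched.
    """
    h, w = raw.shape
    groups = raw.astype(np.float64).reshape(h // 2, 2, w // 2, 2)
    mean = groups.mean(axis=(1, 3), keepdims=True)
    # Intensity spread within each micro-lens group: a crude
    # proxy for local disparity magnitude
    spread = (groups.max(axis=(1, 3), keepdims=True)
              - groups.min(axis=(1, 3), keepdims=True))
    corrected = np.where(spread > threshold, mean, groups)
    return corrected.reshape(h, w)
```

A real pipeline would instead estimate the lens path per ray and remosaic to a Bayer pattern, as the article describes, rather than simply averaging.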

Fig. 4     A4C sensor’s output image

A4C Sensor’s Third Advantage – Various Applications

Beyond accurate focus detection and high-resolution image capture, the A4C sensor can also be used in light field imaging applications. Light field imaging is a technology that reproduces the distribution of rays emanating from an object. It calculates both the intensity of light in a scene and the precise direction each ray travels, and uses this information in computer vision applications such as bokeh[6], refocus[7], and multi-view[8].

When capturing an image with the A4C sensor as in figure 5, a ray of light reflected from a subject at the focal plane travels along four different paths and reaches the four pixels under one micro lens. Therefore, from the intensity of each A4C pixel and its position under the micro lens, the sensor can determine both a ray's intensity and the direction it traveled from.

Fig. 5     Capturing subject using A4C sensor

In particular, if the sensor creates a partial image by grouping pixels at the same position under each micro lens, those pixels receive light that traveled the same module lens path, meaning the sensor can reconstruct a partial image as seen from that location on the module lens (e.g., grouping the TL pixels under every micro lens creates a partial image seen from the top left of the module lens). This partial image is called a sub-aperture (SA) image [4], and the A4C sensor can generate four SA images corresponding to the TL, TR, BL, and BR directions of the module lens. These four SA images from different viewpoints can be used in the computer vision applications described below.
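Extracting the four SA images is just strided slicing of the raw frame. The sketch below again assumes the plain 2D layout with each 2x2 block under one micro lens; the function name is illustrative.

```python
import numpy as np

def sub_aperture_images(raw):
    """Split a raw A4C frame into its four sub-aperture images.

    Grouping all TL pixels yields the view through the top-left
    part of the module lens, and likewise for TR, BL, and BR,
    giving four slightly different viewpoints of the scene.
    """
    return {
        "TL": raw[0::2, 0::2],  # top-left of the module lens
        "TR": raw[0::2, 1::2],  # top-right
        "BL": raw[1::2, 0::2],  # bottom-left
        "BR": raw[1::2, 1::2],  # bottom-right
    }
```

Each SA image has one-fourth the pixel count of the raw frame, i.e., the micro lens resolution.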

1) Bokeh application: Using the four SA images can improve bokeh image quality. A bokeh image created with an ordinary dual camera contains errors due to the difference in viewing angle between the left and right cameras and their mechanical misalignment. With SA images, which are partial images made with one A4C sensor, these errors do not occur. The sensor also uses the four SA images to calculate depth information, delivering better accuracy than dual cameras that use only two images.

2) Refocus application: SA images support refocus, which means adjusting the focus to a desired point after capture using the A4C sensor's intensity and direction information. Previous refocus technology also suffered from mechanical error because it required an additional micro lens array on top of the image sensor [4]. The A4C sensor does not require this extra array, eliminating those mechanical errors and improving accuracy.
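The classic plenoptic refocus operation is shift-and-add [4]: shifting each viewpoint toward or away from the optical center before averaging moves the plane of best focus. A minimal sketch with the A4C's four views follows; it assumes the SA images are stored in a dict keyed "TL"/"TR"/"BL"/"BR", and the shift sign convention and use of circular `np.roll` (instead of proper boundary handling) are simplifications for illustration.

```python
import numpy as np

def refocus(sa, shift):
    """Toy shift-and-add refocus from four sub-aperture images.

    Each SA image is shifted by `shift` pixels along its viewing
    offset from the lens center, then the four are averaged.
    shift = 0 reproduces the as-captured focal plane; other values
    synthetically move the plane of best focus.
    """
    tl = np.roll(sa["TL"], (shift, shift), axis=(0, 1))
    tr = np.roll(sa["TR"], (shift, -shift), axis=(0, 1))
    bl = np.roll(sa["BL"], (-shift, shift), axis=(0, 1))
    br = np.roll(sa["BR"], (-shift, -shift), axis=(0, 1))
    # Points whose four views align after the shift come out sharp;
    # points at other depths are averaged across misaligned views
    # and blur out.
    return (tl + tr + bl + br) / 4.0
```

With only a 2x2 grid of views the synthetic aperture is coarse; a full plenoptic camera averages many more views [4], but the principle is the same.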

3) Multi-view application: An SA image is, in essence, image data viewing the subject from a particular angle. The four SA images can therefore be leveraged in various multi-view applications, such as 3D image reconstruction, 3D-based security applications, and low-light quality improvement.

In conclusion, the A4C sensor developed by SK hynix overcomes the limits of conventional image sensors, delivering quick and accurate focus detection and high-resolution images while supporting various light field imaging-based applications. Beyond the A4C sensor, SK hynix is also developing new ISP technologies such as HDR, Non-Bayer, and Pixel Binning. Based on industry-leading device and process technology, it has developed micro pixels of 0.7um and 0.64um, which improve CIS pixel density and make CIS a core component of future information sensing. SK hynix CIS will contribute greatly to creating economic and social value through applications ranging from smartphone cameras to bio, security, and autonomous vehicles.

For more on the latest advances in memory and image sensor technology, visit the SK hynix newsroom.

References

[1] Andrew Parker, In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution, 2003.

[2] SK hynix Newsroom, CMOS Image Sensor Innovation led by SK hynix, https://news.skhynix.com/the-visual-evolution-innovation-of-image-sensors/.

[3] SK hynix Newsroom, SK hynix Black Label Image Sensor, 1.0um Black Pearl, https://news.skhynix.co.kr/post/black-label-image-sensor.

[4] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. “Light Field Photography with a Hand-held Plenoptic Camera,” Stanford University Computer Science Technical Report CSTR 2005-02, April 2005.

[1] Binocular disparity: the difference in image location of an object seen by the left and right eyes

[2] Micro lens: a lens that concentrates light to the center to improve CIS efficiency

[3] Disparity: the displacement that occurs when light from one point on the subject converges on different locations on the sensor plane depending on the light path it follows

[4] Pixel resolution: the number of pixels of the A4C sensor

[5] Micro lens resolution: the number of micro lenses of the A4C sensor; this is one-fourth of the pixel resolution

[6] Bokeh: a technique that keeps the subject in focus while blurring out the background

[7] Refocus: a technology that refocuses on a desired point after an image is captured

[8] Multi-view: a technique that uses multiple images taken from different angles for geometric calibration or spatial compounding





