Image Sensors

Introduction

Similar to the human eye, image sensors consist of small light-sensitive receptors: photodiodes. Each photodiode is sensitive to only one color:
  • Red
  • Green
  • Blue
That's the well-known RGB scheme also used in (classic) TV screens.



Now what's the difference between a photodiode and a pixel? There's a lot of confusion out there, and some don't differentiate between the two at all. Personally I prefer the following definition: a pixel is calculated from the data of three or more photodiodes (R + G + B + ...). Have a look at the following explanation for details.

"Bayer"-type Image sensors

Image sensors with Bayer interpolation are currently the most popular variant of the species. "Bayer" sensors are laid out in (overlapping) clusters of 4 photodiodes. A cluster consists of 2 green-, 1 red- and 1 blue-sensitive photodiode (to be precise, these are "plain" photodiodes with color filters). An individual pixel can be "Bayer-interpolated" from the data of these 4 photodiodes, which together provide the necessary RGB information. Now why is that an interpolation? Looking at the illustration below, you may notice that the 2 right-hand photodiodes of Pixel #1 are shared with Pixel #2, so there's a certain amount of redundancy within the system. So while you may interpolate e.g. 6 million pixels ((xmax-1) * (ymax-1), to be nit-picky) from 6 million photodiodes, the effective resolution is actually a little lower than that.
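To make the idea concrete, here is a minimal sketch of such a naive Bayer interpolation in Python. It assumes the raw data is a 2D NumPy array with an RGGB filter pattern; the function name and the hard-coded pattern are illustrative assumptions only, and real demosaicing algorithms are considerably more sophisticated:

    import numpy as np

    def demosaic_rggb(raw):
        # Naive Bayer interpolation for an RGGB mosaic: every output pixel
        # is built from one 2x2 window, and neighbouring windows share
        # photodiodes, so an (H, W) mosaic yields an (H-1, W-1) RGB image.
        h, w = raw.shape
        rgb = np.zeros((h - 1, w - 1, 3), dtype=np.float32)
        for y in range(h - 1):
            for x in range(w - 1):
                win = raw[y:y + 2, x:x + 2].astype(np.float32)
                r = win[y % 2, x % 2]                  # the window's red diode
                b = win[(y + 1) % 2, (x + 1) % 2]      # the blue diode
                g = (win[y % 2, (x + 1) % 2] +
                     win[(y + 1) % 2, x % 2]) / 2.0    # average of the 2 greens
                rgb[y, x] = (r, g, b)
        return rgb

Note how an (H, W) mosaic yields only an (H-1) x (W-1) output here, matching the (xmax-1) * (ymax-1) caveat above.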


Bayer Interpolation (simplified)

Lately a few variations on this theme have emerged:

Sony replaced one of the 2 green photodiodes of the 4-diode cluster with a new photodiode featuring an emerald filter (hence RGBE). Similar to what we've seen on the inkjet printer market, the idea is to improve color accuracy, and according to the first reviews this new approach seems to work out pretty well.
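Conceptually, the camera's processor then has to map each 4-channel RGBE cluster back to a 3-channel RGB pixel. The sketch below shows one plausible way to do this with a linear color matrix; the actual coefficients Sony uses are not public, so the values here are purely illustrative:

    import numpy as np

    # Hypothetical 3x4 color matrix; the real coefficients are proprietary.
    # The emerald channel contributes to green and blue in this sketch,
    # but the weights are made up for illustration only.
    RGBE_TO_RGB = np.array([
        [1.00, 0.00, 0.00, 0.00],   # R
        [0.00, 0.60, 0.00, 0.40],   # G
        [0.00, 0.00, 0.85, 0.15],   # B
    ])

    def rgbe_cluster_to_rgb(r, g, b, e):
        # One RGBE photodiode cluster in, one RGB pixel out.
        return RGBE_TO_RGB @ np.array([r, g, b, e], dtype=np.float32)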



classic RGB Bayer sensor vs. RGBE Bayer sensor (Sony)

Another interesting variation has been developed by Fuji:
Current digital cameras still suffer from a limited dynamic range (= the range between deep black and bright white), resulting in a very limited ability to resolve details in shadows or highlights. Fuji found a workaround for this problem by adding a secondary (smaller) area with a lower light sensitivity to each photodiode. This trick is supposed to provide up to 2 extra stops of dynamic range compared to conventional photodiodes.
Looking at the illustration below, you may also observe another difference compared to the classic layout: the photodiodes have an octagonal shape in a honeycomb layout, resulting in a potentially higher density because the photodiodes of two neighbouring rows can overlap a little. Due to this special layout the Fuji S3 Pro interpolates its maximum output of 12 megapixels from 6 million photodiodes. While this may be a little optimistic, the "Super CCD" layout does seem to be superior regarding effective resolution compared to Bayer sensors with the same number of photodiodes.
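The dynamic range trick can be sketched in a few lines: read the sensitive primary photodiode as long as it isn't clipped, and fall back to the (rescaled) less sensitive secondary one when it is. The 4x sensitivity ratio (= 2 stops) and the hard switchover are simplifying assumptions of this sketch; a real pipeline blends the two signals smoothly:

    def combine_sr(primary, secondary, full_scale=4095.0, ratio=4.0):
        # primary:   large, sensitive "S" photodiode (clips first)
        # secondary: small "R" photodiode, assumed ratio x less sensitive
        if primary < full_scale:
            return primary            # highlights intact: trust the S-diode
        return secondary * ratio      # S-diode clipped: rescale the R-diode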



classic photodiode layout vs. Fuji "Super CCD" (SR) layout

The next image illustrates the vertical structure of Fuji's Super CCD (HR) sensor - apart from the Fuji-style layout and shape of the photodiodes, the layers are typical for image sensors (incl. Foveon) used in D-SLRs. The surface layer is made of microlenses. Why's that? The photodiodes are slightly recessed, and if light rays don't hit them at a perpendicular angle - which is typical towards the image edges - they produce slight shadows resulting in a lower exposure. On the macro level (the whole chip) the exposure may therefore be uneven from the center towards the edges. Microlenses are meant to correct this problem (and recently manufacturers have started to adjust their lens designs due to this issue).
The 2nd layer consists of the photodiodes as described above, whereas the third layer handles the actual data transfer.
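Microlenses fix this shading optically; the same kind of falloff can also be compensated in software with a per-pixel gain map (flat-field correction). The sketch below assumes a simple radial falloff model, which is purely illustrative - real shading depends on the actual lens and sensor:

    import numpy as np

    def shading_gain_map(h, w, falloff=0.3):
        # Relative exposure drops towards the edges in this toy model;
        # the gain map is simply its inverse, boosting the darker corners.
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.hypot((yy - cy) / cy, (xx - cx) / cx)   # 0 center, ~1.4 corners
        relative_exposure = 1.0 - falloff * r**2       # made-up falloff curve
        return 1.0 / relative_exposure

    # corrected = raw_image * shading_gain_map(*raw_image.shape)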


The "Foveon" sensor

As mentioned above, Bayer-type sensors are by a vast margin the most popular sensors around, but there's a little company rivaling the big ones: "Foveon". Foveon took advantage of an effect well known to B&W photographers (and also scuba divers): using filters you can block one color while other colors remain unaffected. So they simply stacked three semi-transparent layers of photodiodes rather than spreading them out as on Bayer-type sensors.
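In code, the contrast to Bayer demosaicing is striking: every (x, y) location already carries all three color measurements, so producing the RGB image is a pure rearrangement. This sketch assumes the three layer signals arrive as a (3, H, W) NumPy array and ignores the color-matrix processing a real camera still applies:

    import numpy as np

    def foveon_readout(stack):
        # stack: (3, H, W) array with the red, green and blue layer signals,
        # all measured at the same photodiode locations.
        return np.transpose(stack, (1, 2, 0))   # (H, W, 3), no interpolation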



Foveon layout: an individual Foveon pixel (made of three layered photodiodes)

This approach has some advantages and a few drawbacks. On the up side, an individual pixel location produces one "true" color & luminosity reading - there's no interpolation needed, and yes, images tend to be quite a bit sharper compared to conventional Bayer sensors of the same (output) pixel resolution. As a nice side effect the photodiodes COULD also be much (3x) bigger, resulting in less sensor noise - potentially, that is, because as of now this isn't the case.
On the downside you need a LOT (3x) more photodiodes in order to achieve the same output/pixel resolution compared to Bayer sensors. The Sigma SD10, for example, has 10.2 million photodiodes - that's more than a Canon EOS 10D or Nikon D100 with just 6 million photodiodes, and almost as many as a Canon EOS 1Ds with its 11 million photodiodes. However, as these photodiodes are stacked, the effective output resolution is really just 3.4 million pixels (10.2 million / 3 layers), and that's not too impressive from the perspective of today's Bayer sensors.
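The bookkeeping behind these numbers is simple enough to state as a one-liner (the helper name is just for illustration):

    def output_megapixels(photodiodes_millions, layers):
        # Stacked (Foveon) designs divide the photodiode count by the number
        # of layers; Bayer designs interpolate roughly one pixel per diode.
        return photodiodes_millions / layers

    print(output_megapixels(10.2, 3))   # Sigma SD10 (Foveon): 3.4 MP output
    print(output_megapixels(6.0, 1))    # 6 MP Bayer D-SLR: ~6 MP output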
So while the Foveon solution has much better potential in the long run, it has a marketing problem as of now - a 6 megapixel Canon EOS 10D simply sounds sexier than a similarly priced 3.4 megapixel Sigma SD10.

© by photozone.de