Most consumer digital cameras have a "Bayer sensor", in which 1/4 of the pixels are sensitive to blue, 1/4 sensitive to red, and 1/2 sensitive to green. A few digital cameras have a relatively new type of sensor that can capture three color values at each pixel. These cameras are commonly known by the marketing name "Foveon". While the number of pixels in a Foveon sensor is typically smaller than in a Bayer sensor, the total number of color values measured is often similar, because each Foveon pixel measures three color values.
In the photographic community, there is a great deal of debate about which sensor is superior. I have attempted to answer one question to my own satisfaction: For a given number of photosites, and under idealized conditions, does a Bayer or a Foveon-type sensor give the impression of a more detailed image? To do so, I used synthetically generated images from the POV-Ray raytracer.
In POV, one or more rays are cast from within each pixel of a rectangular grid, and the pixel's color is calculated from the intersections of those rays with objects in the scene. It is possible (actually, it is the default) to render the scene with a "perfect" lens and sensor: there is no depth-of-field effect, blurring, chromatic aberration, or sensor noise. Furthermore, scene descriptions are at a high level (e.g., there is a sphere of radius R and color C centered at the point P), so they may be rendered at a variety of pixel resolutions with no advantage to any particular resolution.
If exactly one ray is cast from the center of each photosite, the effect is that of an infinitely small photosite with no low-pass filter. If more than one ray is cast from a number of different locations within the pixel, the effect is that of a photosite that fills the pixel, still with no low-pass filter---this is called antialiasing. If a blur filter is applied to the pixels as a postprocessing step, the effect is similar to an external low-pass filter.
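To make this concrete, here is a minimal Python sketch of the two sampling strategies. It is not POV-Ray's actual sampler; the scene function is a hypothetical stand-in for a ray/object intersection test.

    import numpy as np

    def scene(x, y):
        # Hypothetical stand-in for a ray/object intersection test:
        # 1.0 where the ray hits a disc, 0.0 where it misses.
        return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 else 0.0

    W = H = 32   # render resolution
    N = 4        # rays per pixel edge when antialiasing

    # One ray through each pixel center: an infinitely small photosite.
    center = np.array([[scene((i + 0.5) / W, (j + 0.5) / H)
                        for i in range(W)] for j in range(H)])

    # N*N rays spread across each pixel, averaged: a photosite that
    # fills the pixel, still with no low-pass filter.
    aa = np.array([[np.mean([scene((i + (a + 0.5) / N) / W,
                                   (j + (b + 0.5) / N) / H)
                             for a in range(N)
                             for b in range(N)])
                    for i in range(W)] for j in range(H)])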
To simulate a Foveon-type sensor, an antialiased image of the given size is generated.
To simulate a Bayer-type sensor, an antialiased image of the given size is generated. A small amount of blur is applied, and then a special-purpose program is run which produces a greyscale output image from a color input image. The output has the same number of pixels but one third of the amount of data. The repeating pattern

    R G
    G B

is used to decide which of the three primary colors from a given input pixel is written to the output. The resulting greyscale image is given as input to 'dcraw', which performs interpolation and outputs a new color image.
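The source of the actual bayerizing program will appear in the next entry; as a sketch of the idea, the mosaic step might look like this in Python, assuming the input is an HxWx3 array:

    import numpy as np

    def bayerize(rgb):
        # rgb: HxWx3 array; returns an HxW greyscale mosaic in which
        # each pixel keeps only the channel selected by the pattern
        #   R G
        #   G B
        h, w, _ = rgb.shape
        mosaic = np.empty((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
        return mosaic

The real program also has to write the result as a .pgm file for dcraw to read; that detail is omitted here.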
I chose to generate images at 1024x768 for the full-color sensor (2.36 million photosites) and 1740x1305 for the Bayer sensor (2.27 million photosites, 4% fewer than the full-color sensor). The 4:3 aspect ratio was chosen because most POV-Ray scene files are written to produce images with those proportions.
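As a quick check of the photosite arithmetic:

    full_color = 1024 * 768 * 3    # three color values per pixel: 2,359,296
    bayer      = 1740 * 1305       # one color value per pixel:    2,270,700
    print(1 - bayer / full_color)  # ~0.04, i.e. about 4% fewer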
Before I ran these tests, I had an expectation about the results: The Bayer sensor image would look better in high-detail areas than the full-color sensor's, but it would show some degree of color aberration on high-contrast edges. My expectation was based on the following logic: Most of the information in the image is contained in the "G" (green) pixels. In fact, using the "G" channel alone often gives a decent greyscale version of an image. The distance between the centers of two neighboring "G" photosites is 1/1024 of an image width in the full-color sensor; in the Bayer sensor the nearest "G" neighbors lie along the diagonal of the pattern, so the distance is sqrt(2)/1740 ~ 1/1230 of an image width. More "G" samples per image width should produce a sharper image.
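The spacing figures can be checked the same way:

    from math import sqrt
    print(1 / 1024)        # G spacing, full-color sensor: ~0.000977
    print(sqrt(2) / 1740)  # G spacing, Bayer sensor:      ~0.000813 ~ 1/1230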
In my next entry, I will show the results of my tests and provide source code for the bayerizing program and the changes to dcraw needed to treat a .pgm input file as 8-bit Bayer sensor data.