
What are Image Sensors and Colors?

Q: What are Image Sensors and Colors?

A:
When photography was first invented, it could only record black & white images. The search for color was a long and arduous process, and a great deal of hand coloring went on in the interim (prompting one author to comment, "so you have to know how to paint after all!"). One major breakthrough was James Clerk Maxwell's 1861 demonstration that color photographs could be formed using red, green, and blue filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different one of the color filters over the lens. The three images were developed and then projected onto a screen with three different projectors, each equipped with the same color filter used to take its image. When brought into register, the three images formed a full-color image. Over a century later, image sensors work in much the same way.

Additive Colors
Colors in a photographic image are usually based on the three primary colors red, green, and blue (RGB). This is called the additive color system because when the three colors are combined in equal quantities, they form white. This system is used whenever light is projected or emitted to form colors, as it is on a display monitor (or in your eye).
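If you think of a color as a triple of light intensities between 0 and 1, additive mixing is just addition. Here is a minimal Python sketch (the values and variable names are ours, purely for illustration):

import numpy as np

# Additive mixing: lights add, so equal parts red, green, and blue make white.
red   = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
blue  = np.array([0.0, 0.0, 1.0])

print(red + green + blue)   # [1. 1. 1.]  -> white
print(red + green)          # [1. 1. 0.]  -> yellow (red + green light)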

The first commercially successful use of this system to capture color images was invented by the Lumière brothers in 1903 and became known as the Autochrome process. They dyed grains of starch red, green, and blue and used them to create color images on glass plates.

Subtractive Colors
Although most cameras use the additive RGB color system, a few high-end cameras and all printers use the CMYK system. This system, called subtractive color, uses the three primary colors cyan, magenta, and yellow (hence the CMY in the name; the K stands for the extra black ink). When these three colors are combined in equal quantities, the result is a reflected black, because all of the colors are subtracted. The CMYK system is widely used in the printing industry, but if you plan on displaying CMYK images on screen, they have to be converted to RGB, and you lose some color accuracy in the conversion. On a printout, each pixel is formed from smaller dots of cyan, magenta, yellow, and black ink; where these dots overlap, various colors are formed.
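To see why some accuracy is lost, here is the simple, uncalibrated formula often used to convert RGB to CMYK; real print workflows rely on ICC color profiles instead, so treat this Python sketch as an illustration, not production code:

def rgb_to_cmyk(r, g, b):
    # Naive conversion for RGB values in the 0-1 range; the K (black)
    # component is pulled out first so less colored ink is needed.
    k = 1.0 - max(r, g, b)
    if k == 1.0:                    # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red -> (0.0, 1.0, 1.0, 0.0)

Note that pure red comes out as magenta plus yellow: together those two inks subtract green and blue from the white of the paper, leaving red.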

It's All Black and White After All
Image sensors record only a gray scale: a series of increasingly darker tones ranging from pure white to pure black (256 of them in an 8-bit capture). Basically, they only capture brightness.

How, then, do sensors capture colors when all they can do is record grays? The trick is to use red, green, and blue filters to separate out the red, green, and blue components of the light reflected by an object. (Likewise, the filters in a CMYK sensor will be cyan, magenta, or yellow.) There are a number of ways to do this, including the following:

1. Three separate image sensors can be used, each with its own filter. This way, each image sensor captures the image in a single color.
2. Three separate exposures can be made, changing the filter for each one. In this way, the three colors are "painted" onto the sensor, one at a time.
3. Filters can be placed over individual photosites so each can capture only one of the three colors. In the common Bayer arrangement of this kind, a quarter of the photosites capture red light, a quarter capture blue, and half capture green.
When three separate exposures are made through different filters, every pixel on the sensor records each color in the image, and the three files are merged to form the full-color image. The same is true when three separate sensors are used, since each sensor captures the full image in its own color. However, when small filters are placed directly over individual photosites on a single sensor, the effective color resolution is reduced, because each photosite records only one of the three colors and the other two must be estimated.
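Merging three filtered exposures is easy to picture in code. The following Python sketch simulates it with NumPy arrays; the scene here is random data standing in for a real photograph:

import numpy as np

scene = np.random.rand(4, 4, 3)   # stand-in for a full-color scene (H x W x RGB)

# Each filtered exposure is a grayscale image holding one color component.
red_shot   = scene[..., 0]        # exposure through the red filter
green_shot = scene[..., 1]        # exposure through the green filter
blue_shot  = scene[..., 2]        # exposure through the blue filter

# Stacking the three grayscale files re-forms the full-color image.
merged = np.dstack([red_shot, green_shot, blue_shot])
assert np.allclose(merged, scene)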
For example, on some sensors with 1.2 million photosites, 300,000 have red filters, 300,000 have blue, and 600,000 have green. Does this mean the resolution is still 1.2 million, or is it now 300,000? Or 600,000? Let's see. Each site stores its captured color (as seen through the filter) as an 8-, 10-, or 12-bit value. To create a 24-, 30-, or 36-bit full-color image, interpolation is used. This form of interpolation uses the colors of neighboring pixels to calculate the two colors a photosite didn't record. By combining these two interpolated colors with the color measured by the site directly, the original color of every pixel is estimated. ("I'm bright red, and the green and blue pixels around me are also bright, so I must really be a white pixel.") This step is computationally intensive, since comparisons with as many as eight neighboring pixels are required to perform it properly; it also increases the amount of data per image, so files get larger.
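This interpolation is known today as demosaicing. As a rough illustration, here is a bilinear version for the RGGB mosaic pattern implied by the example above; it is a minimal sketch (real cameras use more sophisticated, edge-aware algorithms), and it assumes SciPy is available:

import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    # Reconstruct full RGB from an RGGB Bayer mosaic (2-D float array)
    # by averaging each missing color from its nearest neighbors.
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(float)  # red photosites
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(float)  # blue photosites
    g_mask = 1.0 - r_mask - b_mask                        # green (half of all sites)

    # Bilinear kernels; green sites are twice as dense as red or blue sites.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(mosaic * r_mask, k_rb, mode='mirror')
    g = convolve(mosaic * g_mask, k_g,  mode='mirror')
    b = convolve(mosaic * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])

Each output pixel combines the one color its photosite measured with two colors averaged from up to eight surrounding photosites, which is exactly why the step is computationally intensive.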
More to come on this topic...
