What is the dynamic range of a camera, and how can it benefit a photographer? How to capture all the tones of the scene you are shooting.

In a previous article we talked about the photosensitive sensors of cameras; in that connection the so-called photographic latitude was also mentioned (whether film or a digital matrix makes no difference).

Now let us consider the concept of dynamic range from a physical point of view, i.e. based on the design of a digital camera's matrix.

Dynamic range of the CCD matrix.

For the sensor to be sensitive to a wide range of illumination of the subject being photographed, i.e. able to reproduce both its dark (shadow) areas and its bright (highlight) areas adequately and proportionally, each pixel must have a potential well of sufficient capacity. Such a potential well must be able to hold the small charge produced by light from a dimly lit part of the subject, and at the same time accommodate a large charge when part of the subject is brightly lit.

This ability of a potential well to accumulate and hold a charge of a certain magnitude is called the depth of the potential well. It is the depth of the potential well that largely determines the dynamic range of the matrix.


Schematic illustration of lateral drainage.

The use of drainage makes the design of CCD elements more complex, but this is justified by the damage that blooming would otherwise cause to the image.

Another problem that degrades the image produced by a CCD matrix is so-called stuck pixels, which we often call "dead" or "broken" pixels. Unlike noise, which is chaotic in nature, these pixels appear at any shutter speed and are always localized in the same place. They are caused by poorly manufactured CCD elements in which, even with minimal exposure, an avalanche-like flow of electrons into the potential well occurs. They show up in every picture as dots that differ sharply in color from their neighbors.

Dynamic range is the ratio of the maximum permissible value of the measured quantity (the brightness of each channel) to its minimum value (the noise level). In photography, dynamic range is usually measured in exposure units (steps, stops, EV), i.e. as a base-2 logarithm; less often, as a decimal logarithm (denoted by the letter D). 1 EV ≈ 0.3 D. Occasionally a linear notation is used, for example 1:1000, which equals 3 D or almost 10 EV.
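To make the arithmetic concrete, here is a minimal Python sketch (not from the original article) converting between the three notations:

```python
import math

def ev_to_density(ev):
    """Convert a range in exposure stops (EV) to decimal-log units (D).
    One stop doubles the light, so in base-10 terms 1 EV = log10(2) ~ 0.3 D."""
    return ev * math.log10(2)

def ev_to_linear_ratio(ev):
    """Convert a range in stops to a linear contrast ratio (1:N)."""
    return 2 ** ev

print(ev_to_density(10))       # ~3.0 D
print(ev_to_linear_ratio(10))  # 1024, i.e. roughly 1:1000
```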

The characteristic "dynamic range" is also applied to the file formats used to record photographs. In this case it is set by the authors of a specific file format, based on the purposes for which the format will be used.

The term "dynamic range" is sometimes wrongly used for any ratio of brightnesses related to a photograph:

  • the ratio of the brightness of the lightest and darkest objects in the photograph
  • the maximum ratio of the brightness of white and black colors on the monitor/photo paper (the correct English term is contrast ratio)
  • range of film optical densities
  • other, even more exotic options

The dynamic range of modern digital cameras at the beginning of 2008 ranged from 7-8 EV for compact cameras to 10-12 EV for digital SLR cameras (see tests of current cameras at http://dpreview.com). Keep in mind that the matrix does not render the whole range equally well: details in the shadows are degraded by noise, while details in the highlights are rendered very cleanly. The maximum dynamic range of DSLRs is available only when shooting in RAW; when converting to JPEG, the camera cuts off detail, reducing the range to 7.5-8.5 EV (depending on the camera's contrast settings).

The dynamic range of camera files and sensors is often confused with the number of bits used to record information, but there is no direct relationship between the two quantities. For example, the dynamic range of Radiance HDR (32 bits per pixel) is greater than that of 16-bit RGB. For film, what is usually specified is the photographic latitude, which shows the range of brightness the film can convey without distortion at equal contrast (the brightness range of the linear part of the film's characteristic curve). The full dynamic range of film is usually somewhat wider than its photographic latitude and can be seen on the film's characteristic curve.

The photographic latitude of slide film is 5-6 EV, of professional negative film about 9 EV, of amateur negative film 10 EV, and of motion-picture negative film up to 14 EV.

Dynamic range expansion

The dynamic range of modern cameras and films is not always enough to convey every scene in the surrounding world. This is especially noticeable when shooting on slide film or with a compact digital camera, which often cannot convey even a bright daytime landscape in central Russia if there are subjects in the shadows (the brightness range of a night scene with artificial lighting and deep shadows can reach 20 EV). This problem can be approached in two ways:

  • increasing the dynamic range of cameras (video cameras for surveillance systems have a noticeably larger dynamic range than still cameras, but this is achieved by deteriorating other characteristics of the camera; every year new models of professional cameras are released with better characteristics, while their dynamic range is slowly increasing)
  • combining images taken at different exposures (HDR technology in photography), resulting in a single image containing all the details from all the original images, both in the extreme shadows and in the maximum highlights.

An HDRI photograph and the three exposures from which it was assembled

Both approaches require solving two problems:

  • Choosing a file format capable of recording an image with an extended brightness range (ordinary 8-bit sRGB files are not suitable for this). Today the most popular formats are Radiance HDR and OpenEXR, as well as Microsoft HD Photo, Adobe Photoshop PSD, and RAW files from digital SLR cameras with a large dynamic range.
  • Displaying a photograph with a large brightness range on monitors and photographic paper whose maximum brightness range (contrast ratio) is significantly smaller. This problem can be solved by one of two methods:
    • uniform contrast reduction, which squeezes the large range of luminances into the small range of the paper, monitor, or 8-bit sRGB file by lowering the contrast of the entire image equally for all pixels;
    • tone mapping, which changes pixel brightness non-linearly, by different amounts in different areas of the image, preserving (or even increasing) the original contrast; however, shadows may look unnaturally light, and halos may appear at the boundaries of areas with different brightness adjustments.

Tone mapping can also be used to process images with a small range of brightness to increase local contrast.
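As an illustration of the global (uniform) variant, here is a minimal Python sketch of the well-known Reinhard global operator — one common choice; the article itself does not prescribe a specific formula:

```python
import numpy as np

def reinhard_tonemap(luminance, white=None):
    """Global Reinhard operator: compresses an HDR luminance map
    into [0, 1). Every pixel gets the same curve, so large-scale
    brightness relationships are preserved while the range shrinks."""
    l = np.asarray(luminance, dtype=np.float64)
    if white is None:
        return l / (1.0 + l)
    # Extended form: values near `white` are mapped close to 1.0.
    return l * (1.0 + l / white**2) / (1.0 + l)

# Toy HDR luminance values spanning many stops:
hdr = np.array([0.001, 0.1, 1.0, 10.0, 100.0])
print(reinhard_tonemap(hdr))  # all values now fit a displayable range
```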

Because tone mapping can produce "fantastic" images in the style of computer games, and because such photos are massively presented under the label "HDR" (even when obtained from a single image with a small brightness range), most professional photographers and experienced amateurs have developed a strong aversion to dynamic range expansion technology, based on the misconception that its purpose is to produce such pictures (the example above shows HDR methods being used to obtain a normal, realistic image).




Wide dynamic range video cameras

Wide dynamic range (WDR) cameras are designed to provide high-quality images in backlit conditions, when both very bright and very dark areas and details are present in the frame. They prevent bright areas from saturating and dark areas from appearing too dark. Such cameras are usually recommended for monitoring subjects positioned opposite windows, in a backlit door or gate, and in other high-contrast scenes.

The dynamic range of a video camera is usually defined as the ratio of the brightest part of an image to the darkest part of the same image, that is, within a single frame. This ratio is otherwise called maximum image contrast.

Dynamic range problem

Unfortunately, the actual dynamic range of video cameras is strictly limited. It is significantly narrower than the dynamic range of most real subjects, landscapes, and even film and photographic scenes. In addition, lighting conditions for surveillance cameras are often far from optimal. Subjects of interest may be positioned against brightly lit walls and objects, or against backlight; in this case the subjects or their details will come out too dark, since the video camera automatically adapts to the high average brightness of the frame. In some situations the observed scene may contain bright spots with very large brightness gradations, which are difficult to convey with standard cameras. For example, an ordinary street in sunlight with shadows from buildings has a contrast of 300:1 to 500:1; for dark archways or gates with a sunlit background, the contrast reaches 10,000:1; the inside of a dark room against windows has a contrast of up to 100,000:1.

The width of the resulting dynamic range is limited by several factors: the ranges of the sensor itself (the photodetector), the processing processor (DSP), and the display (the video monitor). Typical CCD matrices have a maximum intensity contrast of no more than 1000:1 (60 dB). The darkest signal is limited by the thermal noise, or "dark current", of the sensor. The brightest signal is limited by the amount of charge that can be stored in an individual pixel. Typically, CCDs are built so that this charge is approximately 1000 times the dark charge produced by the temperature of the CCD.
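For reference, the decibel figures in this article follow the usual video convention dB = 20·log10(contrast ratio); a tiny Python sketch:

```python
import math

def contrast_to_db(ratio):
    # Video convention: dB = 20 * log10(ratio),
    # so 1000:1 -> 60 dB and each ADC bit adds ~6 dB.
    return 20 * math.log10(ratio)

print(contrast_to_db(1000))    # 60.0
print(contrast_to_db(100000))  # 100.0 (dark room against windows)
```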

Dynamic range can be significantly increased for special camera applications, such as scientific or astronomical research, by cooling the CCD and using special reading and processing systems. However, such methods, being very expensive, cannot be widely used.

As stated above, many tasks require a dynamic range of 65-75 dB (1:1800 to 1:5600), so when displaying a scene even with a 60 dB range, details in dark areas will be lost in noise, details in bright areas will be lost to saturation, or the range will be clipped at both ends at once. Readout systems, analog amplifiers, and analog-to-digital converters (ADCs) for real-time video limit the CCD signal to a dynamic range of 8 bits (48 dB). This range can be extended to 10-14 bits by using appropriate ADCs and analog signal processing, but this solution often turns out to be impractical.

An alternative type of circuit uses a nonlinear transform, a logarithmic function or an approximation of it, to compress the 60 dB CCD output signal into an 8-bit range. Such methods typically suppress image detail.
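A rough Python sketch of such logarithmic companding — an illustration only, since real implementations are analog circuits:

```python
import numpy as np

def log_compand(signal, full_scale=65535):
    """Compress a linear sensor signal (16-bit scale here) into 8 bits
    with a logarithmic curve - a rough stand-in for the analog
    log-response circuits mentioned above. Fine tonal steps in bright
    areas get merged together, which is why such schemes lose detail."""
    s = np.asarray(signal, dtype=np.float64)
    out = 255 * np.log1p(s) / np.log1p(full_scale)
    return out.astype(np.uint8)

print(log_compand([0, 100, 1000, 10000, 65535]))
```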

The last limiting factor (mentioned above) is the display of the image. The dynamic range of a normal CRT monitor operating in a bright room is about 100:1 (40 dB); an LCD monitor is even more limited. A signal generated by the video path, even one limited to a contrast of 1:200, will lose dynamic range when displayed. To optimize the display, the user must often adjust the monitor's contrast and brightness, and to get an image with maximum contrast, some of the dynamic range has to be sacrificed.

Standard solutions

There are two main technology solutions that are used to provide cameras with high dynamic range:

  • multiple-frame capture - the video camera captures several complete images, or individual areas of the frame, with each capture covering a different part of the dynamic range; the camera then combines these images to produce a single wide dynamic range (WDR) image;
  • nonlinear, usually logarithmic, sensors - in this case the sensitivity differs at different lighting levels, which allows a wide range of image brightness within one frame.

Various combinations of these two technologies are used, but the most common is the first.

To obtain one optimal image from several, two methods are used:

  • parallel capture by two or more sensors of the image formed by a common optical system. Each sensor covers a different part of the scene's dynamic range thanks to different exposure (accumulation) times, different optical attenuation in its individual optical path, or the use of sensors of different sensitivities;
  • sequential capture by a single sensor with different exposure (accumulation) times. At a minimum, two captures are made: one with a long accumulation time and one with a shorter accumulation time.

Sequential capture, being the simplest solution, is the one commonly used in the industry. Long accumulation makes the darkest parts of the subject visible, but the brightest fragments may be rendered poorly and even saturate the photodetector. A picture obtained with short accumulation adequately renders the bright fragments of the image, while the dark areas remain at the noise level. The camera's image signal processor combines both images, taking the bright parts from the "short" picture and the dark parts from the "long" one. The combination algorithm that produces a smooth, seamless image is quite complex, and we will not touch on it here.
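As a toy illustration of the idea (the actual DSP algorithms are proprietary and blend far more smoothly), here is a naive Python sketch of fusing a long and a short exposure:

```python
import numpy as np

def fuse_exposures(long_img, short_img, exposure_ratio, threshold=0.9):
    """Naive dual-exposure WDR fusion sketch. Inputs are linear frames
    normalized to [0, 1]; exposure_ratio is t_long / t_short
    (e.g. (1/120) / (1/4000) ~ 33). The result is linear radiance
    relative to the long exposure, so it may exceed 1.0."""
    long_img = np.asarray(long_img, dtype=np.float64)
    short_img = np.asarray(short_img, dtype=np.float64)
    # Where the long frame is near saturation, trust the short frame,
    # rescaled to the long frame's exposure so brightnesses match.
    return np.where(long_img < threshold,
                    long_img,
                    short_img * exposure_ratio)

long_f = np.array([0.02, 0.5, 1.0, 1.0])     # shadows OK, highlights clipped
short_f = np.array([0.0, 0.015, 0.03, 0.9])  # highlights OK, shadows in noise
print(fuse_exposures(long_f, short_f, exposure_ratio=33.3))
```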

The first to present the concept of combining two digital images obtained with different accumulation times into a single wide dynamic range image was a group of developers led by Professor Y.Y. Zeevi of the Technion, Israel. The concept was patented in 1988 ("Wide Dynamic Range Camera" by Y.Y. Zeevi, R. Ginosar and O. Hilsenrath) and in 1993 was used to create a commercial medical video camera.


Modern technical solutions

In modern cameras, dynamic range expansion based on two captures mostly relies on Sony double-scan matrices (Double Scan CCD) ICX212 (NTSC) and ICX213 (PAL) and special image processors such as the SS-2WD or SS-3WD. Notably, these matrices cannot be found in Sony's public catalog, and not all manufacturers disclose their use. Fig. 1 schematically shows the principle of double accumulation; times are given for the NTSC format.

The diagrams show that where a typical camera accumulates a field for 1/60 s (PAL: 1/50 s), a WDR camera builds a field from two captures: one accumulated for 1/120 s (PAL: 1/100 s) for dimly lit details, and one accumulated for between 1/120 and 1/4000 s for brightly lit details. Photo 1 shows frames with different exposures and the result of their summation (processing) in WDR mode.

This technology makes it possible to bring the dynamic range up to 60-65 dB. Unfortunately, numerical WDR values are, as a rule, provided only by manufacturers in the upper price category, while the rest limit themselves to stating that the function is present. The adjustment, where available, is usually graduated in relative units. Photo 2 shows a comparative test of a standard camera and a WDR camera facing light from a glass display case and doors.

There are camera models whose documentation claims WDR operation but makes no mention of the required special components. In this case the question naturally arises: is the declared WDR mode what we expect? The question is fair, since even cell phones already use an automatic brightness control mode for the built-in camera that is called WDR. On the other hand, there are models with a declared dynamic range expansion mode, called Easy Wide-D or EDR, that work with standard CCDs; where an expansion value is specified for them, it does not exceed 20-26 dB.

Another way to expand dynamic range is Panasonic's Super Dynamic III technology. It is likewise based on double exposure of the frame, at 1/60 s (PAL: 1/50 s) and 1/8000 s, followed by histogram analysis, splitting of the image into four variants with different gamma correction, and their intelligent summation in the DSP. Fig. 2 presents the general structure of this technology. Such a system expands the dynamic range by up to 128 times (42 dB).

The most promising technology today for expanding the dynamic range of a television camera is Digital Pixel System (DPS) technology, developed at Stanford University in the 1990s and patented by Pixim Inc. The main innovation of DPS is the use of an ADC to convert the photocharge into a digital value directly in each pixel of the sensor. Digitizing the signal in the pixel of the CMOS sensor prevents the quality degradation that occurs when analog signals are transported across the chip, which increases the overall signal-to-noise ratio. DPS technology allows signal processing in real time.

Pixim technology uses a technique known as multisampling to produce the highest image quality and a wide dynamic range of the light-to-signal converter. Pixim DPS uses five-level multisampling, which allows the signal to be read from the sensor at one of five exposure values. During exposure, the illumination of each pixel of the frame is measured (for a standard video signal, 50 times per second). The image processing system determines the optimal exposure time and stores the resulting value just before the pixel would saturate and further charge accumulation would stop. Fig. 3 explains the principle of adaptive accumulation: the value of a light pixel is retained at exposure time T3 (before the pixel reaches 100% saturation), while a dark pixel accumulates charge more slowly, requiring additional time, and its value is retained at time T6. The stored values (intensity, time, noise level) measured in each pixel are processed simultaneously and converted into a high-quality image. Since each pixel has its own built-in ADC and its light parameters are measured and processed independently, each pixel effectively acts as a separate camera.
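Here is a simplified Python sketch of the adaptive-accumulation idea for a single pixel. The sample-time ladder and values are hypothetical; real DPS hardware does this per pixel in circuitry:

```python
def adaptive_pixel_value(photon_rate, sample_times, full_well=1.0):
    """Sketch of DPS-style multisampling for one pixel: the accumulated
    charge is checked at several exposure times, the last sample taken
    before saturation is kept, and scene radiance is then recovered as
    charge / time. If even the shortest time saturates, the pixel
    simply clips, as in a real sensor."""
    best_charge = best_time = None
    for t in sorted(sample_times):
        charge = photon_rate * t
        if charge >= full_well:
            break
        best_charge, best_time = charge, t
    if best_charge is None:
        return full_well / min(sample_times)  # clipped highlight
    return best_charge / best_time

times = [1/2000, 1/1000, 1/500, 1/250, 1/50]  # hypothetical 5-level ladder
print(adaptive_pixel_value(400.0, times))  # bright pixel: short time used
print(adaptive_pixel_value(10.0, times))   # dark pixel: long time used
```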


Pixim imaging systems based on DPS technology consist of a digital image sensor and an image processor. Modern digital sensors use 14-bit and even 17-bit quantization. Relatively low sensitivity, the main disadvantage of CMOS technology, is also characteristic of DPS: the typical sensitivity of such cameras is about 1 lux. The typical signal-to-noise ratio for the 1/3" format is 48-50 dB. The declared maximum dynamic range is up to 120 dB, with a typical value of 90-95 dB. The ability to regulate the accumulation time for each pixel of the sensor matrix makes it possible to use a unique signal processing method - local histogram equalization - which dramatically increases the information content of the image. The technology makes it possible to fully compensate for backlighting and to bring out details and the spatial position of objects located not only in the foreground but also in the background of the image. Photos 3, 4 and 5 show frames obtained with a typical CCD camera and with a Pixim camera.

Practice

So, we can conclude that today, when video surveillance must be conducted in difficult, high-contrast lighting, it is possible to select a television camera that adequately conveys the entire brightness range of the scene. Video cameras with Pixim technology are the most preferable for this purpose, and systems based on double scanning give quite good results. As a compromise, one can consider inexpensive television cameras based on standard matrices with electronic EDR and multi-zone BLC. Naturally, it is desirable to choose equipment with specified numerical characteristics, not just a mention that a given mode is present. Unfortunately, in practice the results of specific models do not always live up to expectations and advertising claims. But that is a topic for another discussion.


Dynamic range, or photographic latitude, of a photographic material is the ratio between the maximum and minimum exposure values that can be correctly captured in a photograph. Applied to digital photography, dynamic range is effectively the ratio of the maximum and minimum possible values of the useful electrical signal generated by the photosensor during exposure.

Dynamic range is measured in exposure stops (EV). Each stop corresponds to a doubling of the amount of light. For example, if a camera has a dynamic range of 8 EV, the maximum possible useful signal of its matrix relates to the minimum as 2^8:1, meaning the camera can capture, within one frame, subjects that differ in brightness by no more than 256 times. More precisely, it can capture subjects of any brightness, but subjects whose brightness exceeds the maximum permissible value will come out dazzling white in the image, and subjects whose brightness falls below the minimum will come out pitch black. Details and texture will be visible only on subjects whose brightness falls within the camera's dynamic range.

To describe the relationship between the brightness of the lightest and darkest objects being photographed, the not entirely correct term “scene dynamic range” is often used. It would be more correct to talk about the brightness range or the contrast level, since dynamic range is usually a characteristic of the measuring device (in this case, the matrix of a digital camera).

Unfortunately, the brightness range of many beautiful scenes we encounter in real life can significantly exceed the dynamic range of a digital camera. In such cases, the photographer is forced to decide which objects should be worked out in full detail, and which can be left outside the dynamic range without compromising the creative intent. In order to make the most of your camera's dynamic range, you may sometimes need not so much a thorough understanding of how the photosensor works, but rather a developed artistic sense.

Factors limiting dynamic range

The lower limit of the dynamic range is set by the photosensor's own noise. Even an unilluminated matrix generates a background electrical signal called dark noise. Interference also arises when the charge is transferred to the analog-to-digital converter, and the ADC itself introduces a certain error into the digitized signal - so-called quantization noise.

If you take a photo in complete darkness or with a lens cap on, the camera will only record this meaningless noise. If a minimal amount of light is allowed to reach the sensor, the photodiodes will begin to accumulate an electrical charge. The magnitude of the charge, and hence the intensity of the useful signal, will be proportional to the number of captured photons. In order for any meaningful details to appear in the image, it is necessary that the level of the useful signal exceeds the level of background noise.

Thus, the lower limit of the dynamic range, or in other words the sensor's sensitivity threshold, can be formally defined as the output signal level at which the signal-to-noise ratio exceeds unity.

The upper limit of the dynamic range is determined by the capacity of an individual photodiode. If, during exposure, a photodiode accumulates its maximum charge, the image pixel corresponding to the overloaded photodiode comes out completely white, and further irradiation has no effect on its brightness. This phenomenon is called clipping. The greater the photodiode's full-well capacity, the larger the output signal it can produce before reaching saturation.
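In engineering terms, the two limits give a simple formula: dynamic range in stops is the base-2 logarithm of the full-well capacity over the noise floor. A Python sketch with made-up but plausible numbers:

```python
import math

def dynamic_range_ev(full_well_electrons, read_noise_electrons):
    """Engineering dynamic range of a sensor: the ratio of the largest
    charge the photodiode can hold to the noise floor, expressed in
    stops (EV). The values in the calls below are illustrative only."""
    return math.log2(full_well_electrons / read_noise_electrons)

print(dynamic_range_ev(60000, 4))   # ~13.9 EV
print(dynamic_range_ev(20000, 10))  # ~11.0 EV - smaller, noisier pixel
```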

For greater clarity, let us turn to the characteristic curve - a graph of the output signal versus exposure. The horizontal axis is the binary logarithm of the radiation received by the sensor, and the vertical axis is the binary logarithm of the electrical signal generated by the sensor in response. My drawing is largely schematic and serves purely illustrative purposes: the characteristic curve of a real photosensor has a somewhat more complex shape, and the noise level is rarely so high.

The graph clearly shows two critical turning points: at the first, the useful signal level crosses the noise threshold, and at the second, the photodiodes reach saturation. The exposure values lying between these two points make up the dynamic range. In this abstract example it equals, as is easy to see, 5 EV: the camera can handle five doublings of exposure, which is equivalent to a 32-fold (2^5 = 32) difference in brightness.

The exposure zones that make up the dynamic range are unequal. The upper zones have a higher signal-to-noise ratio and therefore appear cleaner and more detailed than the lower ones. As a result, the upper limit of the dynamic range is very distinct and noticeable - clipping cuts off the highlights at the slightest overexposure - while the lower limit drowns inconspicuously in noise, and the transition to black is not nearly as abrupt as the transition to white.

The linear dependence of signal on exposure, as well as the abrupt rise to a plateau, are distinctive features of the digital photographic process. For comparison, take a look at a typical characteristic curve of traditional photographic film.

The shape of the curve and especially the angle of inclination strongly depend on the type of film and on the procedure for its development, but the main, striking difference between the film graph and the digital one remains unchanged - the nonlinear nature of the dependence of the optical density of the film on the exposure value.

The lower limit of the photographic latitude of negative film is determined by the density of the veil, and the upper limit is determined by the maximum achievable optical density of the photographic layer; for reversible films it is the other way around. Both in the shadows and in the highlights, smooth bends in the characteristic curve are observed, indicating a drop in contrast when approaching the boundaries of the dynamic range, because the slope of the curve is proportional to the contrast of the image. Thus, the exposure zones lying in the middle part of the graph have maximum contrast, while in the highlights and shadows the contrast is reduced. In practice, the difference between film and a digital matrix is ​​especially noticeable in the highlights: where in a digital image the highlights are burned out by clipping, on film the details are still visible, although low in contrast, and the transition to pure white looks smooth and natural.

In sensitometry, two independent terms are even used: photographic latitude proper, limited to the relatively linear portion of the characteristic curve, and useful photographic latitude, which in addition to the linear section also includes the toe and shoulder of the curve.

It is noteworthy that when processing digital photographs, as a rule, a more or less pronounced S-shaped curve is applied to them, increasing the contrast in midtones at the cost of reducing it in shadows and highlights, which gives the digital image a more natural and pleasing appearance to the eye.
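For the curious, here is a minimal Python sketch of one possible S-curve, built as a blend between the identity and a smoothstep; actual converters use their own proprietary curves:

```python
import numpy as np

def s_curve(x, strength=0.6):
    """Film-like S-curve on normalized tones [0, 1]: raises midtone
    contrast while rolling off shadows and highlights.
    strength=0 leaves tones unchanged; strength=1 is a full smoothstep."""
    x = np.asarray(x, dtype=np.float64)
    smooth = x * x * (3 - 2 * x)          # classic smoothstep
    return (1 - strength) * x + strength * smooth

tones = np.linspace(0, 1, 5)
print(s_curve(tones))  # midtones pushed apart, extremes compressed
```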

Bit depth

Unlike the matrix of a digital camera, human vision is characterized by, let's say, a logarithmic view of the world: successive doublings of the amount of light are perceived by us as equal increments of brightness. Light values can even be compared to musical octaves, since doublings of sound frequency are perceived by the ear as the same musical interval. Our other senses work on the same principle. This nonlinearity of perception greatly extends the range of human sensitivity to stimuli of varying intensity.

When a RAW file containing linear data is converted (whether in the camera or in a RAW converter), a so-called gamma curve is applied to it. The gamma curve non-linearly increases the brightness of the digital image, bringing it into line with the characteristics of human vision.

With linear conversion, the image is too dark.

After gamma correction, the brightness returns to normal.

The gamma curve stretches dark tones and compresses light ones, making the distribution of gradations more uniform. The result is a natural-looking image, but noise and sampling artifacts in the shadows inevitably become more noticeable, which is only exacerbated by the small number of brightness levels in the lower zones.

Linear distribution of brightness gradations.
Uniform distribution after applying the gamma curve.
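A minimal Python sketch of gamma correction, using the common approximation of a pure power curve with gamma 2.2 (real sRGB encoding adds a short linear segment near black):

```python
import numpy as np

def apply_gamma(linear, gamma=2.2):
    """Gamma correction: linear sensor values in [0, 1] are raised to
    1/gamma, brightening shadows far more than highlights and
    distributing the available levels more evenly for human vision."""
    return np.asarray(linear, dtype=np.float64) ** (1.0 / gamma)

linear = np.array([0.01, 0.1, 0.25, 0.5, 1.0])
print(apply_gamma(linear))  # 0.01 -> ~0.12: deep shadows lifted strongly
```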

ISO and dynamic range

Although digital photography uses the same concept of photosensitivity of the photographic material as film photography, it should be understood that this is purely a matter of tradition: the approaches to changing photosensitivity in digital and film photography are fundamentally different.

Increasing ISO sensitivity in traditional photography means replacing one film with another with coarser grain, i.e. an objective change in the properties of the photographic material itself. In a digital camera, the light sensitivity of the sensor is strictly determined by its physical characteristics and cannot be changed in the literal sense. When you raise the ISO, the camera does not change the actual sensitivity of the sensor; it merely amplifies the electrical signal the sensor generates in response to irradiation and adjusts the digitization algorithm for that signal accordingly.

An important consequence of this is that the effective dynamic range shrinks in proportion to the increase in ISO, because the noise is amplified along with the useful signal. If at ISO 100 the entire range of signal values is digitized - from zero to the saturation point - then at ISO 200 only half the capacity of the photodiodes is taken as the maximum. With each doubling of ISO sensitivity, the top stop of the dynamic range is cut off and the remaining stops move up to take its place. This is why ultra-high ISO values make little practical sense: you could just as well brighten the photo in a RAW converter and get a comparable noise level. The difference between raising the ISO and artificially brightening the image is that when the ISO is raised, the signal is amplified before it enters the ADC, so quantization noise is not amplified (unlike the sensor's own noise), whereas in a RAW converter the amplification is applied after digitization, ADC errors included. In addition, narrowing the digitized range means the remaining input signal values are sampled more precisely.
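As a back-of-the-envelope Python sketch of this trade-off (a first approximation that ignores read-noise details):

```python
import math

def effective_dr(base_dr_ev, iso, base_iso=100):
    """Each doubling of ISO amplifies the signal before the ADC and
    cuts one stop off the top of the usable range, so to a first
    approximation DR falls by one EV per stop of ISO gain."""
    return base_dr_ev - math.log2(iso / base_iso)

for iso in (100, 200, 400, 1600, 6400):
    print(iso, round(effective_dr(12.0, iso), 1))  # 12, 11, 10, 8, 6 EV
```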

Incidentally, lowering the ISO below the base value (for example, to ISO 50), available on some cameras, does not expand the dynamic range at all; it simply attenuates the signal by half, which is equivalent to darkening the image in a RAW converter. This function can even be considered harmful, since using a below-base ISO value provokes the camera into increasing the exposure, which, with the sensor's saturation threshold unchanged, raises the risk of clipping in the highlights.

True Dynamic Range

There are a number of programs (DxO Analyzer, Imatest, RawDigger, etc.) that let you measure the dynamic range of a digital camera at home. This is not strictly necessary, since the data for most cameras can be found freely on the Internet, for example on DxOMark.com.

Should we believe the results of such tests? Quite. With the sole caveat that all these tests determine the effective or, so to speak, technical dynamic range, i.e. the ratio between the saturation level and the noise level of the matrix. For the photographer, what matters most is the useful dynamic range, i.e. the number of exposure zones that really allow useful information to be captured.

As you remember, the lower threshold of the dynamic range is set by the photosensor's noise level. The problem is that, in practice, the lower zones that are technically already within the dynamic range still contain too much noise to be put to good use. Much here depends on individual tolerance - everyone determines an acceptable noise level for themselves.

My subjective opinion is that details in the shadows begin to look more or less decent when the signal-to-noise ratio is at least eight. On this basis, I define useful dynamic range as technical dynamic range minus about three stops.

For example, if a DSLR camera, according to reliable tests, has a dynamic range of 13 EV, which is very good by today's standards, then its useful dynamic range will be about 10 EV, which, in general, is also quite good. Of course, we are talking about shooting in RAW, with minimum ISO and maximum bit depth. When shooting JPEG, dynamic range is highly dependent on contrast settings, but on average you should give up another two or three stops.

For comparison: color reversal films have a useful photographic latitude of 5-6 stops; black and white negative films give 9-10 stops with standard developing and printing procedures, and with certain manipulations - up to 16-18 stops.

To summarize the above, let's try to formulate a few simple rules, the observance of which will help you squeeze maximum performance out of your camera's sensor:

  • The dynamic range of a digital camera is only fully accessible when shooting in RAW.
  • Dynamic range decreases as light sensitivity increases, so avoid high ISO settings unless absolutely necessary.
  • Using a higher bit depth for RAW files does not increase true dynamic range, but it does improve tonal separation in the shadows due to more brightness levels.
  • Expose to the right. The upper exposure zones always contain the most useful information with the least noise and should be used as fully as possible. At the same time, do not forget the danger of clipping - pixels that have reached saturation are absolutely useless.

And most importantly: don't worry too much about the dynamic range of your camera. Its dynamic range is fine. Your ability to see light and manage exposure correctly is much more important. A good photographer will not complain about the lack of photographic latitude, but will try to wait for more comfortable lighting, or change the angle, or use the flash, in a word, will act in accordance with the circumstances. I'll tell you more: some scenes only benefit from the fact that they do not fit into the dynamic range of the camera. Often an unnecessary abundance of details simply needs to be hidden in a semi-abstract black silhouette, which makes the photo both more laconic and richer.

High contrast is not always a bad thing – you just need to know how to work with it. Learn to exploit the shortcomings of the equipment as well as its advantages, and you will be surprised how much your creative possibilities will expand.

Thank you for your attention!

Vasily A.

Post scriptum

If you found the article useful and informative, you can kindly support the project by making a contribution to its development. If you didn’t like the article, but you have thoughts on how to make it better, your criticism will be accepted with no less gratitude.

Please remember that this article is subject to copyright. Reprinting and quoting are permissible provided there is a valid link to the source, and the text used must not be distorted or modified in any way.

Good afternoon friends!

Today we continue our acquaintance with the camera. Earlier I gave an overview of the operating principle of cameras (a link was provided). Next, we will dwell in more detail on the individual elements that a photographer should understand in general terms. If you come across definitions or terms that are unclear to you, it's okay - just keep reading and you will definitely grasp the essence. I'm sure of it! It is the general understanding that matters.

The article is quite voluminous, so for ease of navigation I have compiled the contents for you :)

Matrix in the camera. What it is?

The matrix in the camera is the main element with which we obtain the image; it is also often called a sensor or converter. It is a microcircuit made up of photodiodes - photosensitive elements. Depending on the intensity of the incident light, a photodiode generates an electrical signal of varying magnitude, which is subsequently converted into digital form by an ADC, either separate or built into the matrix.

The matrix captures light and turns it into a set of bits (0/1), which then forms a digital image.

It looks like this:

Matrix in the camera

The shiny rectangular plate in the center is the matrix itself; around it, at the edges of the photograph, is the supporting electronics.

Discrete matrix structure

Its basis is very small photodiodes or phototransistors, which capture light and turn it into an electrical signal. One such photodiode forms one pixel of the output digital image.

A little digression for those who may not know. A digital image consists of many dots that our brain "glues" together into a complete picture. If there are too few such dots, we begin to notice the discreteness of the structure: the image seems to "fall apart" into a mosaic, and smooth transitions disappear.

Let's look at a photo of a dog.

Discrete matrix structure using a dog as an example

Don't mind for now that it is black and white. Set the concept of color aside - that is a separate topic, and for the moment the information is easier to take in this way. The matrix records an electrical signal of varying magnitude depending on the light intensity, and if you remove the special filters designed to produce a color image, the output photo comes out black and white. Incidentally, cameras that shoot exclusively in black and white also exist.

I have schematically overlaid a grid on the image to illustrate the discrete, i.e. discontinuous, structure of the matrix. Each square represents the minimum element of the matrix - a pixel formed by a photodiode - which receives light of some intensity and is converted at the output into an image pixel of corresponding brightness. For example, the upper left corner is dark, which means little light fell on that area of the matrix. The fur, on the contrary, is light, which means more light reached there and the electrical signal was different. Naturally, the image consists of far more squares; this is only a schematic illustration.

Matrix - analogue of film

Previously, when there were no digital cameras, film served as the light-sensitive element - the role the matrix plays today. In principle, the design of a film camera is not too different from a digital one; the latter has more electronics, but the "receiver" of light is completely different.

When you press the shutter button on a film camera, the shutter opens and light hits the film. Before the shutter closes, a chemical reaction occurs, the result of which is an image stored on the film, but invisible to the eye until it is developed. An example of such a chemical process is the decomposition of silver halide into halogen and silver atoms.

As you can see, the essence itself is completely different. I am writing this so that you remember that in the modern world the matrix performs the functions of film, i.e. forms the image. There is also a difference in storage: film is itself the storage location of the final image, while in digital photography the image is stored on a memory card.

Sensor exposure

An important term that photographers use often. It refers to the process of capturing the image: when you press the shutter button and the shutter opens, light begins to fall on the matrix - the matrix is said to be being exposed. Exposure continues until the shutter closes.

You may hear the phrases “during exposure...”, “the process of exposure...”, “during exposure...”. Usually the word “matrix” is omitted and they simply say exposure.

Matrix characteristics

You need to be aware that matrices differ greatly from one another, and in different price ranges they have different qualities. This element can be considered the "heart" of the camera, like the engine in a car or the processor in a computer. Although no car or computer will run on an engine or processor alone, these elements nevertheless determine the potential of the system. It is hard to expect a car with a small engine to demonstrate miracles of agility on the race track. It's the same with cameras: budget models are equipped with matrices of limited capability, and it is hard to expect a noise-free picture from them when shooting at long exposures. Clearly, there are characteristics that rank matrices by capability. Let's move on to them.

First, a list of the main characteristics:

  • physical size;
  • resolution;
  • signal-to-noise ratio;
  • ISO sensitivity;
  • dynamic range;
  • matrix type (obsolete).

Now let's look at everything in detail.

Physical size of the camera matrix

The matrix is a rectangular plate that collects light, and naturally it has a size. Above, we looked at the discrete structure of the matrix and saw that it consists of pixels - physically, photocells that convert incoming light into electrical charge.

Accordingly, the physical size of the matrix is determined by the size of the pixels and the distance between them (the insulating layer). The greater the distance between pixels, the less the matrix heats up, the higher the signal-to-noise ratio, and the cleaner the output picture.

Let's move on. The matrix size is one of the most important parameters that you should definitely pay attention to. For novice photographers, I’ll simply note that matrix size is its most important characteristic.

In practice it is specified in millimeters, as a format designation, or in inches of sensor diagonal. A format is simply the name for a matrix of certain dimensions, used for brevity. As for inches, the story goes back to measuring the image area of TV camera tubes. It is written, for example, like this: 1/1.8″. You should not do any math with this value in an attempt to derive the physical diagonal or the side dimensions: it is merely a designation and carries no mathematical meaning. It is only important to understand that a matrix with a 1/2.7″ diagonal is noticeably smaller than one with a 1/1.8″ diagonal. Here are the popular sizes:

What does matrix size affect?

The larger the matrix size, the better

This is not always the case, and the statement can be argued with, but in general it is true. More experienced readers are already anticipating the topic turning into the "crop vs full frame" holy war :) I won't indulge them now, because we are talking about fundamentals! Let's get back to the topic.

The following depend on the matrix size:

  1. image noise;
  2. dynamic range;
  3. color depth;
  4. camera dimensions.

Indirectly, a change in matrix size also changes depth of field and angle of view, because to get a picture at the same scale you have to change other parameters (focal length, distance to the subject).

The larger the matrix, the:

  • Less noisy image. A physicist would say: the more light hits the surface that captures it, the less the heating, the smaller the quantization error, and hence the smaller the influence of extraneous noise. Under the same conditions the image comes out cleaner and more detailed, containing less spurious information caused by noise. Now a more practical definition: with the same number of pixels and the same technology, the larger the matrix, the less noise there will be in the image when shooting in low light. Simply put, there will be fewer extraneous dots in the photo that interfere with viewing. If you intend, for example, to shoot handheld twilight portraits, a camera with a large sensor is preferable. The smaller the matrix, the thinner the insulating elements between the pixels; this leads to increased heating, which is always bad in electronics: the signal-to-noise ratio deteriorates and the amount of noise in the resulting image grows compared with models with large matrices. Let's look at an example:
    On the left is an image from a camera with a larger matrix, on the right from a smaller one; the shooting conditions are the same. Enlarge the image and just look at the sky. The difference may vary, but the trend holds (provided the matrices are of similar technology and generation). In practice, noise is clearly visible in the highlights, and when pulling up the shadows by the same amount you get a cleaner picture from the camera with the larger sensor. By "pulling up" we mean increasing the exposure in the editor, in this case in the shadows, so that details begin to appear in them. If you favor genres such as evening/night landscapes, portraits in the golden hour when there is little light, or dynamic reportage, pay attention to the noise level of the matrix of the camera you choose. Size-wise, it is advisable to pick cameras with matrices starting from APS-C format.
  • Wider dynamic range (more on this later in the article).
  • Greater color depth. Color depth is a measure of how small a color change the camera can detect. With greater color depth, subtle transitions between halftones look more natural and closer to what the eye sees; more halftone information is recorded. This shows up, for example, in nearly monochromatic landscapes.
  • A larger camera. It is an indisputable fact that if you want to shoot with a camera with a larger sensor, you will have to put up with its increased size. A glance at the camera market makes it clear that, for example, there are no truly small full-frame cameras, although manufacturers keep trying to make them. And mobile photography is limited precisely by sensor size.
  • A wider angle of view, all other things being equal.
    Strictly speaking, the size of the matrix itself does not affect the angle of view!!! The perspective obtained with the same lens mounted on different cameras will differ, but with the same EFL (equivalent focal length) the image will be approximately the same. If the concepts of perspective and EFL don't mean much to you, that's okay - just read on, and I'll give you the important essence "at a glance". If you take the same lens, then shooting on a camera with a larger sensor gives a wider view. Take the magnification of subjects when shooting with the larger-matrix camera as 100%; the same lens on a smaller matrix will then give a magnification above 100% (by a factor equal to the reduction in matrix size). The same effect can be simulated by cutting a part out of a frame shot on a large matrix and stretching it to the original size. In other words, a boy photographed with a 35 mm lens on a camera with an APS-C matrix (see the matrix size table) will appear closer than the same boy photographed with the same lens on a full-frame (FF) sensor. The sun on the horizon, shot with a smaller sensor, will be "closer" to us:
  • Shallower depth of field can be obtained, all other things being equal. This is another interesting point that confuses photographers and needs to be addressed. Looking ahead: depth of field (DOF) determines at what distances from the focusing point objects remain in the zone of sharpness. The size of the matrix does not affect the depth of field!!! But for the image scale to be the same on different cameras at the same focal length, on a camera with a smaller matrix you have to move further away or change the focal length, which in turn affects the depth of field, increasing it. That is why it is easier to get "blurred" backgrounds on cameras with large matrices.

This is not all, but the main points that are critical for the photographer, which are directly or indirectly affected by the size of the camera’s matrix and which you need to clearly understand for yourself.

Matrix type

Defines the principle by which the matrix works. There were two main technologies:

  • CMOS (complementary metal-oxide-semiconductor);
  • CCD (charge-coupled device).

Matrices based on both technologies accumulate light; they differ in that in the former the smallest structural element is a diode, while in the latter it is a transistor.

As for image quality, back when both technologies were in widespread use, it was believed that CCD matrices gave a more pleasant, "tube-like" color, while CMOS matrices were less noisy, though with a different noise structure.

Today, the vast majority of cameras are equipped with CMOS type matrices, which are less noisy and more energy efficient. Therefore, there is no question of choosing according to this parameter. This is just a reminder when using outdated cameras.

Matrix sensitivity. ISO

The relationship between the selected exposure and the parameters of the output image depends on the sensitivity of the matrix. Simply put, the higher the sensitivity you set (it is changed in the camera settings), the more dimly lit subjects you will be able to register - but the noise will grow at the same time. The ISO value is used as the equivalent sensitivity parameter. It starts from 50 - the minimum sensitivity, at which the image is as clean as possible and least affected by noise. Each step is formed by multiplying by 2: the next ISO sensitivity is 100, then 200, 400, 800, 1600, 3200, 6400... Of course, cameras can also shoot at intermediate values, for example 546, but for convenience the scale is treated in whole stops as described above. Don't worry too much about ISO, stops, and so on just yet.

It is important to understand that when shooting the same scene (for example, a tree at dusk), raising the ISO increases its brightness - the picture comes out lighter. It is equally important to understand that a camera with a larger matrix will produce less noise at the same ISO.

Further for those who want to know more. There is such a concept - EI (exposure index). It determines the relationship between the signal transmitted from the matrix and the parameters of its conversion into color space. What does it allow? With the same exposure settings, we are able to obtain images of varying brightness.

When light enters the matrix it produces a signal (an output voltage), which is amplified and converted in the ADC for mapping into a color space - most commonly sRGB. If the signal is weak, it has to be amplified more, and the EI becomes different. Cameras come with a preset range of EI values which, for simplicity, is called ISO - a term inherited from the film world and kept for convenience. The range depends on the capabilities of the matrix: on older DSLRs, for example, you could not set ISO 6400, simply because at such sensitivity the image quality would become unacceptable due to noise. Next, about amplifying a weak signal.

Signal to noise ratio

The next characteristic of the matrix, inseparable from sensitivity, is the signal-to-noise ratio. I think the point is already clear to you: in simple terms, this ratio determines how much useful signal (light from the subject you are shooting) and how much noise the final image will contain.

We said above that when light hits the matrix, its photocells generate signals in the form of an output voltage. Say we get a voltage of 0.2 V, and let that correspond, for example, to pure green in the sRGB space at ISO 200. By closing the aperture or shortening the shutter speed, we reduce the light flux reaching the matrix; the voltage becomes not 0.2 V but, say, 0.1 V, which at ISO 200 corresponds not to pure green but to a darker, muddier green. If we set the camera to ISO 400, the voltage is automatically amplified back to 0.2 V, and we get the original pure green.

BUT! At the same time, an unwanted component - noise - forms on the matrix. It is not noticeable at base ISO, but by amplifying the signal we amplify the noise as well. Within reasonable limits this is acceptable and not critical. What matters is recognizing the line beyond which a further increase in sensitivity, and the corresponding worsening of the signal-to-noise ratio, leads to unacceptable results.
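A quick Python simulation of this point: amplification raises the signal and the already-captured noise equally, so the signal-to-noise ratio of the analog signal does not improve (the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 0.1                          # dim, underexposed level (volts)
noise = rng.normal(0.0, 0.02, 10000)  # sensor noise floor
frame = signal + noise

gained = 2.0 * frame                  # doubling the gain, i.e. +1 stop ISO

# Brightness doubles, but so does the noise: the SNR of what was
# already captured stays the same.
print(frame.mean() / frame.std(), gained.mean() / gained.std())
```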

Let's say you take pictures of friends to publish personal photos on social networks. They are not too concerned with the impeccable quality of photographs and want to get great emotions, bright and pleasant pictures. In this case, small or even significant noise, corrected in the editor, will not become a problem. But, if you photograph a landscape and then want to print it out in size 30x40 cm or larger, then it is better to initially set the minimum possible ISO. In principle, when shooting landscapes, stick to the rule of initially setting the minimum ISO. Just set it and forget it, then work with the other parameters.

Signal to noise also depends on pixel size. Therefore, let's move on to the next parameter.

Matrix resolution

A popular parameter that is still used as the main one in some stores.

In the technical documentation you may see, for example, 6000 x 4000. This means there are 6000 light-capturing photocells across the width and 4000 down the height. Multiplying, we get the total number of photocells (pixels) on the matrix: 24,000,000. For readability this is written as 24 MP, in megapixels; the prefix "mega" stands for 10 to the 6th power.

More megapixels is not better

Modern cameras are usually equipped with matrices of 16 MP and higher, and 36 MP and 42 MP are no longer uncommon; there are also models with higher resolution. This is a traditional marketing ploy that used to hook buyers - and still does - by offering high-resolution cameras while "forgetting" about the pitfalls involved and taking no interest in the buyer's goals at all. Let's dig a little deeper and look at pixel size.

The physical size of a pixel is a very important characteristic; it is measured in millimeters or microns. A larger pixel can collect more light, so the signal-to-noise ratio will be higher, with all the ensuing consequences: such a matrix, other things being equal, will produce less noise.

It is very easy to determine. Take the popular APS-C format matrix with a resolution of 24 MP, corresponding to a physical size of approximately 23.6 x 15.8 mm. The resolution in pixels is 6000 x 4000, which means that along the long side 6000 pixels of the output image are formed over 23.6 mm. Dividing the physical length by the number of points gives a pixel size of approximately 0.004 mm. If a matrix of the same generation, similar structure, and the same physical size has a higher resolution, the pixel size will be smaller, which increases heating and noise. It is said that heating by about 8 degrees doubles the noise.
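A trivial Python sketch of this calculation (the 36 MP line uses a hypothetical pixel count, just to show the trend):

```python
def pixel_pitch_mm(sensor_side_mm, pixels_along_side):
    """Approximate pixel pitch: sensor side length divided by the
    number of pixels along that side (ignores gaps between pixels)."""
    return sensor_side_mm / pixels_along_side

# 24 MP APS-C example from the text: 23.6 mm over 6000 pixels
print(pixel_pitch_mm(23.6, 6000))  # ~0.0039 mm, i.e. ~3.9 microns
# Same sensor size with a hypothetical 7360-pixel-wide (36 MP) matrix:
print(pixel_pitch_mm(23.6, 7360))  # ~0.0032 mm - smaller, noisier pixels
```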

Practical implications of pixel size:

  1. Noise. As discussed above, all other things being equal, a smaller pixel means more noise.
  2. Greater sensitivity to movement. A smaller pixel is more sensitive to hand shake and to camera movement relative to the subject. Imagine a pixel the size of a tennis ball, and you are photographing a cat. That ball-sized pixel captures the light from an entire dark spot on the cat's fur; shift the matrix slightly and light from the same spot still lands on the same pixel, so the offset causes no real problem in the image. Now shoot the same cat with a matrix of small pixels, where a single hair of that spot falls on one pixel. Move the camera slightly and the pixel now captures a different hair. Detail increases, but the image blurs more easily. High resolution suits certain purposes, but it demands more skill from the photographer and has its own quirks in certain genres.
  3. Increased demands on the lens. The smaller the physical pixel, the higher the lens resolution must be to obtain a detailed photograph. A lens has a resolution of its own: it can project only a limited number of points onto each millimeter of the matrix, and more expensive lenses generally resolve more. If the lens resolves less than the matrix, the image will not be detailed enough; as the saying goes, the lens "does not do the matrix justice." In reality the system is simply unbalanced, and the result will match that of a cheaper but balanced kit. The resolution of the camera as an integral system cannot exceed the resolution of its weakest component (matrix or lens), so ideally the two should be roughly equal, though practice, as usual, makes plenty of adjustments (see the sketch after this list).
  4. More resolution means more powerful computer hardware. The higher the resolution, the more is demanded of the computer during processing. If you want good results, even without shooting in RAW (though I advise switching to RAW), you will still have to tweak images in Photoshop or another editor, and at 24 MP, 36 MP or higher this can be a problem. Even if each small edit is quick, small delays across a large photoset add up into annoying lost time.
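One common rule of thumb for point 3 (an approximation, not an exact law) is that the blur of lens and matrix combines roughly in quadrature, so the system always resolves less than its weakest component. A sketch with hypothetical numbers:

```python
import math

# Rule-of-thumb model: lens and sensor resolutions (in line pairs
# per mm) combine roughly in quadrature, so the system never
# out-resolves its weakest link. The figures below are hypothetical.
def system_resolution(lens_lp_mm, sensor_lp_mm):
    return 1 / math.sqrt(1 / lens_lp_mm**2 + 1 / sensor_lp_mm**2)

# A 24 MP APS-C matrix with ~4 µm pixels has a Nyquist limit of about
# 1 / (2 * 0.004 mm) = 125 lp/mm; suppose the lens resolves 80 lp/mm.
print(f"{system_resolution(80, 125):.0f} lp/mm")   # ≈ 67 lp/mm
```

Note how the combined figure, about 67 lp/mm, sits below both components, which is why a balanced pair matters more than one outstanding part.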

Sensor dynamic range

Dynamic range (abbreviated DD) defines the maximum brightness range a matrix can capture in a single image.

Each pixel has its own brightness. For simplicity, consider the brightness of whole parts of the frame, such as the sky. Say you are shooting a cityscape on a bright sunny day, with a bright sky and very dark buildings in the frame. If you expose for the sky, you get a well-detailed sky and dark, almost black buildings. Expose for the buildings instead, and they come out at normal brightness, but the sky is gone entirely, replaced by a white patch. Sound familiar? I'm sure it does.

So dynamic range determines how wide a brightness range the camera can cover without losing information in the lightest and darkest parts of the frame.

Dynamic range is a fixed characteristic of the matrix, determined by its manufacturing technology. We can only narrow it, by setting the ISO sensitivity to a high value, which, as you can now guess, is undesirable.
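The idea can be written down directly: dynamic range in EV is the base-2 logarithm of the ratio between the largest signal a pixel can hold and the noise floor. The electron counts below are hypothetical, chosen only to illustrate the calculation:

```python
import math

# Dynamic range in stops (EV): log2 of the ratio between the maximum
# recordable signal and the noise floor. Hypothetical figures.
full_well_electrons = 40_000     # charge a potential well can hold
noise_floor_electrons = 10       # smallest distinguishable signal
dr_ev = math.log2(full_well_electrons / noise_floor_electrons)
print(f"dynamic range ≈ {dr_ev:.1f} EV")    # ≈ 12 EV

# Raising ISO costs range: in this simplified model, each doubling of
# the gain clips the maximum recordable signal about one stop earlier.
for gain in (1, 2, 4):
    dr = math.log2(full_well_electrons / gain / noise_floor_electrons)
    print(f"gain x{gain}: ≈ {dr:.1f} EV")
```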

In this photo there are quite dark areas at the bottom and bright sun rays at the top, and the shot is taken in backlight, against the sun. These are obviously difficult conditions for the camera: the contrast is too high.

And here is an even more striking example, with a completely blown-out sky. A classic, really: most people's photo folders are full of shots like this, and something needs to be done about it.

Insufficient dynamic range of the matrix

In such cases the scene is said not to fit into the camera's dynamic range. You then have to either recompose the frame to reduce the scene's contrast, or use artistic techniques to play up the shortcomings of the equipment, or use a range-expansion technique (HDR). You might reasonably ask: "But we see both the blue sky and the dark details at the same time. How so?" Chalk it up to the imperfection of the technology: the dynamic range of the eye exceeds that of a camera by roughly a factor of two.
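A minimal sketch of the HDR idea, assuming linear pixel values in a 0..255 scale; the thresholds and frame values are illustrative, and real HDR merging is considerably more sophisticated:

```python
# Minimal HDR-merge sketch: shoot the same scene at several exposures,
# then, per pixel, keep readings that are neither clipped nor buried in
# shadow noise and scale them back to a common exposure.
def merge_hdr(frames):
    """frames: list of (ev_offset, pixels). Returns merged linear values."""
    merged = []
    for readings in zip(*(pixels for _, pixels in frames)):
        usable = [v / 2 ** ev
                  for (ev, _), v in zip(frames, readings)
                  if 20 < v < 235]                 # well-exposed only
        merged.append(sum(usable) / len(usable) if usable else None)
    return merged

dark   = (-2, [5, 40, 120])     # underexposed: the sky keeps detail
normal = ( 0, [20, 160, 250])   # middle exposure
bright = (+2, [80, 255, 255])   # overexposed: the shadows open up
print(merge_hdr([dark, normal, bright]))   # [20.0, 160.0, 480.0]
```

Note the last merged value, 480: the combined result holds a highlight brighter than any single 0..255 frame could record, which is exactly what "expanding" the range means.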

Let's summarize

Let me dispel your doubts right away. The purpose of this article is to give you an understanding of what works and how. Don't be discouraged if a lot is still unclear; the main thing is to build "shelves" in your head, a structure, and then fill them with information as needed. The material is certainly important and forms the backbone for understanding photography, so if nothing makes sense at all, re-read it or come back to it later. And especially for you, here is a short summary of what is worth keeping in mind:

  1. The matrix is one of the most important elements of the camera: it captures light and turns it into electrical signals. It cannot be swapped out, and it is the analogue of film in film cameras.
  2. The process during which the shutter is open and light acts on the matrix is called exposure.
  3. The matrix has many characteristics. Size is one of the most important and indirectly determines other parameters. It's like car classes: you don't expect a B-class sedan to offer as much room as an E-class one, no matter how advanced and expensive the B-class is.
  4. When choosing a camera with a particular matrix size, you should understand its advantages and disadvantages and be prepared to work with them. A small matrix suffers most when there is not enough light. If you plan to grow in photography and really enjoy it, I advise looking at the Micro 4/3 format or choosing an APS-C option.
  5. A high-quality matrix is the key to a good image, and it is where camera selection should start. On the other hand, there is no need to rush to extremes: an expensive full-frame camera with a cheap lens is unlikely to bring good results; more precisely, the results will be worse than they could be. That said, these days you would have to try hard to find a camera with a frankly bad matrix.
  6. Don't chase high resolution. Even the minimum in modern cameras is more than enough for most purposes.
  7. As for the order of priorities for getting a high-quality image, see the separate article on that subject; I recommend reading it if you haven't already. If you have the impression that technical parameters trump creativity, it will show you otherwise and convince you that balance is what matters. A shift toward the creative side is fine, but a shift toward technophilia leads nowhere good in terms of results.

And of course, I am at your service! I'm always happy to answer any questions within my competence in the comments.
