How Video Processing Brings the Best Picture to Any Display

Over the past few weeks, we’ve been taking a closer look at the technology that goes into an audio, video and lighting system and examining the different parts of the AVL signal chain. In the series, we’re following the signal chain through source, processing, distribution, output and control. So far, we’ve looked at live capture and play sources, as well as audio processing. This time, we’re turning our focus to video processing, the technology tools that manipulate the video signal to get it ready for distribution and output.

There are many different ways that video signals can be processed, and which ones are involved varies greatly depending on the application. For commercial video distribution applications, for example, the goal is typically to preserve the original video quality as much as possible. In these cases, the designer tries to manipulate the video as little as possible, and processing only occurs when the source video needs to be converted into a signal that the output display can handle. In other situations, the applications are more creative, and the video signals are intentionally altered to achieve a certain look.

Video Scaling

In commercial video distribution, the video processing performed typically falls under the category called “scaling.” In video processing, scaling is the general term for altering the core traits of the video signal itself. A common trait of the video signal that is manipulated is the resolution—the specific number of pixels in a video signal. Typically measured in width times height (such as 1920 x 1080), scaling will increase or decrease the number of pixels in the source signal so that it matches the appropriate resolution of the display. Because the resolution of the video can increase as part of the process, the technology is called “scaling” (though a scaler addresses other factors beyond resolution).

It’s important to note that increasing the resolution through scaling does not add additional detail. Once the video is output from a source device, the AVL system has the maximum amount of data it will ever have to work with, and no amount of upscaling can add more. This is why AVL system designers try to keep the video signal as close to the source as possible. Decreasing the resolution, as certain applications sometimes require, discards detail. Scaling will address issues when the source device and display device do not match, but the system must be designed in such a way that the impact of these differences is minimal.
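To make that point concrete, here’s a minimal sketch of upscaling in Python, using a tiny grayscale “frame” represented as a list of pixel rows. The representation and function name are ours, purely for illustration; real scalers use more sophisticated interpolation. Doubling the resolution only repeats existing pixels, so no new detail appears:

```python
def upscale_2x(frame):
    """Return a frame with twice the width and height by duplicating pixels."""
    scaled = []
    for row in frame:
        wide_row = [pixel for pixel in row for _ in range(2)]  # repeat each column
        scaled.append(wide_row)            # original row, widened
        scaled.append(list(wide_row))      # repeat each row
    return scaled

source = [[10, 20],
          [30, 40]]  # a tiny 2 x 2 "frame" of grayscale values

result = upscale_2x(source)
# result is 4 x 4, but still contains only the four original values:
# [[10, 10, 20, 20],
#  [10, 10, 20, 20],
#  [30, 30, 40, 40],
#  [30, 30, 40, 40]]
```

Four times as many pixels, but exactly the same amount of picture information as the source.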

How AMX SmartScale works.

For example, AMX Enova Series video switchers have scaling on each individual output using a technology called SmartScale. Because the video processing (scaling) is performed on each output separately, a single lower-resolution display device won’t adversely affect the entire system. Nor will adding a single 4K source device and display require the entire system to accommodate that higher resolution. If there were no scaler in the system, adding a single source or display with a different resolution would impact the entire system, and even a scaler at the source alone would have no way to accommodate a single display with a lower or higher resolution.

Again, scaling accounts for traits of the video signal beyond resolution. It also addresses features such as aspect ratio (the proportions of the image, such as 16:9 widescreen versus 4:3 standard), frame rate (the number of individual video frames, or “pictures,” that play per second) and color space (the way color information in the video is represented). With scaling, you can have video from a variety of different source devices coming into the system in a variety of different ways, and any of them can connect to the same display using a single output. HARMAN’s SmartScale technology automatically identifies the “native resolution” of the display (the ideal resolution for that display) and converts any input signal to the correct output, accounting for differences in all of these traits.
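One aspect-ratio decision a scaler has to make is fitting a source inside a display’s native resolution without distorting the picture, adding letterbox or pillarbox bars as needed. Here’s a rough sketch of that calculation in Python; the function name is illustrative, not an actual SmartScale API, and real scalers weigh many more factors:

```python
def fit_to_display(src_w, src_h, disp_w, disp_h):
    """Scale a source to fit a display while preserving its aspect ratio.
    Returns the (width, height) of the scaled image inside the display."""
    scale = min(disp_w / src_w, disp_h / src_h)  # the limiting dimension wins
    return round(src_w * scale), round(src_h * scale)

# A 4:3 source (1024 x 768) on a 16:9 display (1920 x 1080) is pillarboxed:
print(fit_to_display(1024, 768, 1920, 1080))   # (1440, 1080)

# A 16:9 source on a 4:3 display (1024 x 768) is letterboxed instead:
print(fit_to_display(1920, 1080, 1024, 768))   # (1024, 576)
```

In both cases the image fills one dimension of the display completely, and the unused pixels in the other dimension become black bars.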

Zoom and Crop

As we said before, the type of video processing required depends on the application, and scaling is just one example. Another video processing technique found in certain applications is zoom and crop. The effect is fairly straightforward: the system designer enlarges a portion of the video by scaling the signal “up” while keeping the output pixel count the same. Because the scaled-up version of the video now contains more pixels than the output signal can hold, the pixels outside the selected region are discarded, and the result is a “zoomed in” image. As noted before about increasing resolution through processing, this doesn’t actually add more image detail; unless the original image has a higher resolution than the output signal, zooming will result in some pixelation.
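Here is zoom and crop in miniature, again using a tiny Python “frame” of pixel values (the function name and top-left zoom point are our own simplifications; real processors let you choose the region and use better interpolation):

```python
def zoom_crop_top_left(frame, zoom):
    """Zoom into the top-left of a frame by an integer factor.
    The returned frame has the same dimensions as the input."""
    h, w = len(frame), len(frame[0])
    # Scale the whole frame up by `zoom` (nearest-neighbor)...
    scaled = []
    for row in frame:
        wide = [pixel for pixel in row for _ in range(zoom)]
        for _ in range(zoom):
            scaled.append(list(wide))
    # ...then crop back to the original dimensions; the rest is discarded.
    return [row[:w] for row in scaled[:h]]

frame = [[1, 2],
         [3, 4]]
print(zoom_crop_top_left(frame, 2))
# [[1, 1], [1, 1]] -- the top-left pixel now fills the whole output
```

Scale up, keep the window that fits, throw the rest away: that’s the whole trick.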

To see how this works, let’s look at a common application for zoom and crop: video walls. Devices such as AMX networked video decoders can use zoom and crop processing to create video walls. For a simple two-by-two video wall (a total of four screens), you would have four AMX Networked AV decoders, each of which would zoom and crop to a particular portion of the source image. The top-right decoder would zoom and crop to the top right of the image, the top-left decoder to the top left, and so on. Together, the four displays would recreate the source video image.
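The arithmetic behind that division of labor is simple. Here’s a hedged sketch of how a source image could be split into crop regions, one per wall position; the coordinates and function name are illustrative, not an actual AMX decoder API:

```python
def wall_regions(src_w, src_h, cols=2, rows=2):
    """Return an (x, y, width, height) crop region for each wall position."""
    tile_w, tile_h = src_w // cols, src_h // rows
    regions = {}
    for row in range(rows):
        for col in range(cols):
            regions[(row, col)] = (col * tile_w, row * tile_h, tile_w, tile_h)
    return regions

# A 4K UHD source (3840 x 2160) split across a 2 x 2 wall:
for pos, rect in wall_regions(3840, 2160).items():
    print(pos, rect)
# (0, 0) (0, 0, 1920, 1080)        top-left decoder
# (0, 1) (1920, 0, 1920, 1080)     top-right decoder
# (1, 0) (0, 1080, 1920, 1080)     bottom-left decoder
# (1, 1) (1920, 1080, 1920, 1080)  bottom-right decoder
```

Each decoder zooms and crops to its own 1920 x 1080 region, which happens to match a 1080p display exactly.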

However, although the video wall would have a total of four 1080p screens’ worth of pixels available, it would only utilize all of those pixels if the original source image contained that much information. If the source video were in 4K resolution (roughly four times the pixel count of 1080p), the video processors in the decoders could each zoom into a single “quarter” of the 4K source image and display an image on the video wall that utilized most of the available pixels on the four 1080p screens. If the source image were 1080p, there would be no loss of resolution from the source video, but the video wouldn’t utilize all of the available pixels in the video wall.

Another common application for this approach is in LED video systems. Martin’s P3 system, for example, offers zoom and crop, allowing lighting designers to customize the video signal before it goes through lighting processing (which we’ll address next time).

Image Processing and Other Effects

Finally, a lesser-used video processing technique in AVL systems is image processing. While video scaling adjusts traits of the video signal such as resolution and color space, image processing adjusts qualities of the video image itself, such as brightness, contrast, color balance and gamma, among others. Returning to our previous discussion of source devices, image processing is rarely used for what we called “play” devices. Live-generated video from PCs, digital signage players and other devices would be adjusted at the source, and pre-recorded video from a Blu-ray player or similar device would already be adjusted properly.
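These adjustments are all simple per-pixel math. Here’s a minimal sketch of brightness, contrast and gamma applied to a single 8-bit pixel value; the formulas are the standard textbook versions, and the function name is ours (real processors apply this per color channel across every pixel, usually in dedicated hardware):

```python
def adjust(value, brightness=0, contrast=1.0, gamma=1.0):
    """Apply gamma, contrast and brightness adjustments to one 8-bit pixel."""
    v = value / 255.0                      # normalize to 0..1
    v = v ** (1.0 / gamma)                 # gamma correction (>1 brightens midtones)
    v = (v - 0.5) * contrast + 0.5         # contrast pivots around mid-gray
    v = v + brightness / 255.0             # brightness is a flat offset
    return max(0, min(255, round(v * 255)))  # clamp back to 8-bit range

print(adjust(128))                   # 128 -- neutral settings change nothing
print(adjust(128, brightness=20))    # 148 -- lifted toward white
print(adjust(64, gamma=2.2))         # shadows brightened, highlights less affected
```

The interesting engineering is in applying these operations consistently across millions of pixels per frame in real time, not in the math itself.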

Instead, image processing is most often used in conjunction with video cameras that are capturing a live event, which is why it is most often found in entertainment applications. A designer or video engineer can process the video signal, adding image processing effects to improve the look of the video, much the way an audio engineer applies effects to an audio signal from a microphone.

Do you have experience with video processing technology? Share your tips and tricks in the comments.
