Design Requirements for Audio and Video Sources, Part Two: Play Devices

In previous posts, we’ve been taking a closer look at how an integrated AVL system works (and how different parts of the chain impact the overall AVL design). We started with source devices, which, we said, can generally be divided into devices that capture events happening live in the room (which we looked at last time) and devices that “play.”

AMX Enzo is a flexible content platform for meeting rooms that makes it easy to instantly access and share information.

Of course, “play” is a pretty wide concept, so we can further break that down into two categories: devices that play back a recording and devices that play live-generated audio and/or video signals. The first group could be a Blu-ray player, CD player, streaming media player, media server, MP3 player… a lot of things fall into the playback group. The most common device in the live-generated category would be a PC, though it could also be a keyboard, a digital signage player or a content-sharing device like AMX Enzo.

You may have noticed by now that play devices present a challenge that’s a bit different from capture devices. We pointed out last time that capture devices are either microphones that capture audio or cameras that capture video. Some devices have both a camera and a microphone built into the same housing, but they are still essentially two different pieces of tech, and with the notable exception of web conferencing cameras, AVL installations typically don’t use the microphones built into cameras. Play devices, on the other hand, may output audio only (like a CD or MP3 player) or video only (like PCs and digital signage systems), but they quite often output both audio and video. As such, the AVL system must have a way of separating the audio and video signals so each can go through its own processing, and then syncing them back together at the output.
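To make the split-and-resync idea concrete, here is a minimal sketch (invented for illustration, not taken from any particular product) of the bookkeeping involved: whichever path, audio or video, finishes processing faster must be delayed to match the slower one, or lip sync suffers at the output.

```python
# Illustrative sketch: once audio and video are split for separate
# processing, the slower path determines how much delay the faster
# path needs so the two signals line up again at the output.

def alignment_delays_ms(audio_path_ms, video_path_ms):
    """Return (extra_audio_delay, extra_video_delay) in milliseconds.

    Video processing (scaling, switching) is usually the slower path,
    so in practice it is typically the audio that gets delayed.
    """
    slowest = max(audio_path_ms, video_path_ms)
    return (slowest - audio_path_ms, slowest - video_path_ms)

# Hypothetical numbers: a video scaler adds 45 ms of latency,
# the audio DSP chain adds 8 ms.
audio_delay, video_delay = alignment_delays_ms(audio_path_ms=8, video_path_ms=45)
print(f"Delay audio by {audio_delay} ms, video by {video_delay} ms")
# -> Delay audio by 37 ms, video by 0 ms
```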

Playback devices and live-generated play devices each have their own set of challenges. For example, playback devices have fixed formats for the audio and video signals coming into the system. The video is output at whatever resolution the recording was made in, and the video processing devices must be able to ingest and handle that resolution. Similarly, the audio is output in whatever format and quality it was recorded, so if the audio in the recording is in stereo (or a surround or immersive format), the system must be able to handle that audio input.
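As a simple illustration, the check a designer is implicitly making looks something like the sketch below. The supported resolutions and channel counts here are invented for the example; a real processor’s limits come from its spec sheet.

```python
# Hypothetical capability check for a video/audio processor. The
# limits below are made up for illustration only.

SUPPORTED_RESOLUTIONS = {(1920, 1080), (3840, 2160), (1280, 720)}
SUPPORTED_AUDIO_CHANNELS = {2, 6, 8}  # stereo, 5.1, 7.1

def can_ingest(width, height, audio_channels):
    """Return True if this (assumed) processor can accept the recording as-is."""
    return ((width, height) in SUPPORTED_RESOLUTIONS
            and audio_channels in SUPPORTED_AUDIO_CHANNELS)

# A 4K recording with a 7.1 soundtrack:
print(can_ingest(3840, 2160, audio_channels=8))  # True with the limits above
```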

In addition, playback devices often play commercial recordings; in other words, someone sold the content to the user for viewing and/or listening. This presents the challenge of copy protection technology, the most notable example being High-bandwidth Digital Content Protection (HDCP). HDCP is a digital copy-protection technology found on movies, television shows and other copyrighted recordings. Designed to prevent piracy, HDCP performs a “handshake” between a source device playing HDCP content and an authorized HDCP-compatible display to ensure the signal isn’t going to an unauthorized device. This is very common with Blu-ray players, but it also happens with PCs and other source devices streaming HDCP-protected video. If you anticipate playing HDCP-protected content in the system, be sure the video processing and distribution system supports it. AMX, for example, supports HDCP-protected content on both its Enova and SVSI video distribution product lines.
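Conceptually, the handshake works like the toy model below. This is a deliberate oversimplification written for this post: real HDCP involves key exchange and cryptographic verification between the devices, none of which is shown here.

```python
# Purely conceptual model of the HDCP "handshake" idea: the source only
# releases protected content once the sink proves it is an authorized
# HDCP device. Not the actual protocol.

class Display:
    def __init__(self, hdcp_capable):
        self.hdcp_capable = hdcp_capable

class Source:
    def play_protected(self, display):
        if display.hdcp_capable:
            return "Handshake OK: encrypted signal sent, playback proceeds."
        # Typical real-world symptom: a black screen or an HDCP error message.
        return "Handshake failed: no output to unauthorized display."

print(Source().play_protected(Display(hdcp_capable=True)))
print(Source().play_protected(Display(hdcp_capable=False)))
```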

While playback devices present challenges related to the lack of source signal options (you take what you get), live-generated sources ironically have the opposite issue: there are many possible source outputs. The system designer rarely gets to select the output from the source device, especially with regard to video, and the problem has only grown as the “Bring Your Own Device” (BYOD) trend has increased in popularity. Users show up with devices spanning everything from lower-resolution mobile screens to 4K laptops, and the AVL system must be able to handle whatever video they present. If a user has to downgrade to a poorer resolution than their device is capable of, it can be an annoying experience that decreases the user’s satisfaction with the entire AVL system.
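In practice, this means the system negotiates a mode with each source rather than dictating one. The sketch below (with invented mode lists) shows the general idea: give each BYOD source the best resolution both sides support, instead of forcing everyone down to a single lowest common denominator.

```python
# Illustrative mode negotiation for BYOD sources. The mode lists are
# invented for the example; real systems learn them via EDID exchange.

SYSTEM_MODES = [(3840, 2160), (1920, 1080), (1280, 720)]

def best_common_mode(source_modes, system_modes=SYSTEM_MODES):
    """Return the highest-pixel-count mode both sides support, or None."""
    common = set(source_modes) & set(system_modes)
    return max(common, key=lambda m: m[0] * m[1]) if common else None

# A 4K-capable laptop gets full resolution...
print(best_common_mode([(3840, 2160), (1920, 1080)]))  # (3840, 2160)
# ...while an older tablet still connects at the best mode it offers.
print(best_common_mode([(1280, 720), (1024, 768)]))    # (1280, 720)
```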

As you can see, the source devices you include in your AVL system can greatly affect the overall design. Much of this selection is beyond the designer’s control; in fact, it’s often good for the designer to encourage a variety of sources via BYOD and other methods. The AVL system, however, must be able to handle all of these sources, which is why designing an integrated AVL solution is so important.

Do you have any tips for using “play” devices in an AVL installation? Let us know in the comments.