In our “Tech Talk” series, we interview the experts here at HARMAN Professional Solutions to discuss how we can solve important problems associated with AVL solutions. Today, I’m speaking with Matthew Massaro, a System Designer out of our Huntsville, Alabama office. Matthew has an interesting history. He got his start in the entertainment business with movies and television shows, working on sets and as a camera operator, with a day job as a pyrotechnician at a studio’s theme park. After that, Matthew moved from movie making to movie playing, joining the digital cinema industry as a field technician. Eventually, Matthew moved into commercial AV, which led him to HARMAN, where he now focuses on networked audio and video solutions.
Given this expertise, I asked Matt how enterprise customers have embraced audio and video on the network, and how networked AV has changed the way audio and video are distributed in the building. In particular, I wanted to dive into some of the differences between networked video technologies, such as H.264 and JPEG2000, as well as look at how these technologies can work together with networked audio technologies, such as Dante.
[SKD] Sometimes, it seems people think of networked AV, whether we’re talking audio or video, as being the same thing as analog, just digitized or just on the network, but that’s not really true, right? So, why did this shift happen, and more importantly, what did the shift to AV over the network do to the types of applications we see in installations now?
[MM] Moving to the network was sort of the natural progression of things. Before, we had analog video and analog signals that needed to be routed over standard analog formats (coax, RGBHV, etc.), and it was natural for these to go into a central hardware infrastructure. However, because of a number of different physical factors related to this hardware, you were limited to the room. Then customers understandably wanted to move from point-to-point applications to larger room-based systems and then larger multiple-room systems or building systems. However, this was still analog, so it was still all tied together with hardware. This gave rise to analog matrix switchers and large coax-based distribution systems.
When we moved into the digital realm, it was natural to fall back on the same basic concept, because everyone was familiar with it. We still had the big black box that sat in the rack room between all of the spaces and provided distribution for all of the different rooms and spaces. It was a natural progression for it to do that.
As we transitioned to Networked AV and the idea of anything, anywhere, the conversations were still, “What do I need to do for my video distribution? What do I need to do with my digital signage? What do I need to do for my PA?” These were all separate, compartmentalized discussions, because as an industry, we considered each of these to be separate systems. It was not a natural progression to move outside of those individual boxes. Instead, it was driven by innovation and technologists asking, “Why do these need to be separate boxes? Why not have them all as a single, unified system?” That way, your networked AV system is your video distribution, your digital signage solution, your PA system, your conference AV system—it allows you to have anything, anywhere rather than be limited to single-room systems.
[SKD] Looking at video specifically, what sort of technologies have driven this shift to building-wide AV systems rather than multiple, room-based systems that might simply be connected via the network?
[MM] When video, in particular, became digital and the idea of putting video on the network was put forth, there was a big rush to H.264 as a standard. It’s still a great standard, and we use it in AMX’s SVSI N3000 Series products, but a lot of folks found that the latency and picture-quality trade-offs that made H.264 easy to move across the network weren’t really desirable and wouldn’t let you use the network for every situation. The simple fact is that video over the network is not a one-size-fits-all solution. That’s why AMX uses a three-tiered approach with our SVSI Series, with different latency, quality and pricing options that give the designer multiple tools. They’re not just stuck with the Phillips-head screwdriver. They have the Phillips head, the star head and the square head. Each option has its own benefits and drawbacks, and that choice is what really makes “anything, anywhere” possible from a video standpoint.
[SKD] How specifically has H.264 limited the conversation and potential applications for Networked AV, and how do other technologies like JPEG2000 deliver on the vision of “anything, anywhere?”
[MM] One of the big things SVSI wanted to do was bring installations the highest quality possible, something the industry was lacking. That’s where technologies like minimal proprietary compression (MPC) in our SVSI N1000 line and JPEG2000 compression in the N2000 line came into play. The N2000 is really our workhorse, and it’s what launched SVSI products into the fray, especially for large-scale deployments. It has great flexibility, and JPEG2000 offers cinema-grade encoding and decoding, so the quality is excellent as well. This opens up a lot of applications for networked AV where H.264 might not be an ideal choice.
JPEG2000 is also much lower latency than H.264, and because of this, you can use sources and destinations that are located anywhere on the network, opening up a lot of new use cases. It allows us to add other capabilities, such as KVM (Keyboard, Video and Mouse). Because the latency is so much lower, we can have the customer use a mouse and keyboard located at the monitor where the decoder is, pass the signal back to the PC at the source over the network, and be able to see the mouse movements in real time. With the latency of H.264, the delay between the user moving the mouse or typing on the keyboard and then seeing the action on screen would make the system feel unresponsive and unusable.
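To make the KVM point concrete, here is a rough back-of-the-envelope sketch of the mouse-to-screen round trip. The latency figures below are illustrative assumptions chosen only to show the order-of-magnitude difference, not measured specifications for any product:

```python
# Rough feel of a networked KVM session: input at the decoder end,
# rendering at the source PC, video back to the decoder's display.
# All latency numbers are illustrative assumptions, not product specs.

def kvm_round_trip_ms(encode_ms, network_ms=5, decode_ms=None):
    """Mouse move -> PC renders -> encoder -> network -> decoder -> screen.
    Assumes decode latency roughly equals encode latency if not given."""
    if decode_ms is None:
        decode_ms = encode_ms
    return encode_ms + network_ms + decode_ms

# JPEG2000 compresses within a frame, so per-hop latency can be a
# small fraction of a frame time (single-digit milliseconds assumed here):
print(kvm_round_trip_ms(encode_ms=8))    # 21 ms: feels responsive

# H.264 typically buffers frames for inter-frame compression
# (150 ms per hop assumed here):
print(kvm_round_trip_ms(encode_ms=150))  # 305 ms: sluggish for mouse work
```

Even with generous assumptions, the intra-frame codec lands comfortably under the threshold where cursor movement feels instantaneous, while the inter-frame codec does not.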
We can do this because of the increased bandwidth of modern networks. The idea behind using H.264 when it first became popular was “let’s get that bandwidth down so it works on existing networks,” and at the time, most of those networks were 10/100 networks, so they only had 100 megabits to work with. Going with anything beyond H.264 would push the requirement up to a gigabit network. Of course, now we’re in a world where most people have gigabit networks, with more and more pushing into 10-gigabit networks and beyond. So, because today most people have gigabit inside their networks, and even gigabit going outside their networks between campuses, we have more bandwidth and more flexibility.
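A quick bit of arithmetic shows why the 10/100-era constraint pushed the industry toward H.264, and why gigabit networks changed the calculation. The per-stream bitrates below are illustrative assumptions for the sake of the math, not published SVSI or codec specifications:

```python
# Back-of-the-envelope link budget for networked video.
# Per-stream bitrates are illustrative assumptions, not product specs.

H264_STREAM_MBPS = 10        # assumed H.264 1080p stream
JPEG2000_STREAM_MBPS = 400   # assumed visually lossless JPEG2000 1080p stream

def streams_that_fit(link_mbps, stream_mbps, headroom=0.75):
    """How many streams fit on a link, reserving 25% for other traffic."""
    return int(link_mbps * headroom // stream_mbps)

# A 100 Mbit Fast Ethernet link carries several H.264 streams,
# but not even one JPEG2000 stream:
print(streams_that_fit(100, H264_STREAM_MBPS))       # 7
print(streams_that_fit(100, JPEG2000_STREAM_MBPS))   # 0

# A gigabit link carries a JPEG2000 stream with room to spare:
print(streams_that_fit(1000, JPEG2000_STREAM_MBPS))  # 1
```

Under these assumptions, the move from Fast Ethernet to gigabit is exactly what takes the high-bitrate, low-latency codecs from impossible to routine.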
[SKD] How have these network changes impacted audio distribution, and how does that tie into networked video distribution?
[MM] Ever since people saw the benefits of the network, they have wanted to put audio on it, whether through VoIP or through streaming audio, such as iTunes, Pandora, etc. All of this gives us audio at our fingertips. So, naturally, people wanting to stream to a large venue, manage audio in exhibit halls, or distribute audio in conference rooms and classrooms all looked to the network to see how they could get audio on it. This gave rise to a variety of solutions, from AVB to Dante. These solutions allow audio to traverse the network, so you can get audio wherever you need it.
One of the great things about the AES67 standard is that it allows audio to go in and out of digital signal processors (DSPs) and out to video sources, amplifiers and speakers, whether it’s coming from audio-only sources or audio/video sources. It moves fluidly and, depending on the design, without additional hardware, which keeps both the infrastructure costs and the servicing costs down, because there are fewer boxes and fewer points of failure.
[SKD] That, to me, sounds like the ultimate goal of networked AV, but how exactly do you do that? How do you pass networked audio and networked video back and forth in a seamless way?
[MM] One of the things we typically try to do is separate the audio and video. With H.264, the audio is embedded with the video, and extracting the audio and then trying to re-embed it later adds latency on top of video that already has a lot of latency to begin with, which makes that approach impractical for many applications.
Because audio with JPEG2000 is not embedded with the video, we have the ability to send the audio directly to a DSP, and then either send it to an amplifier directly or sync it up with the video again later. The beauty is that the audio and video are both sitting on the network, so you have the option at the decoder to pull the audio from any source on the network. If you want to use audio from a different digital source, from another analog source using our audio transceiver, or from the network using AES67, you can do that, and you aren’t limited to the confined architecture of a matrix switcher.
This is really the whole ecosystem and the flexibility that networked AV brings: the ability to pull signals off the network without additional wiring, or to use analog audio inputs and outputs as you need. It gives you a well-rounded portfolio, so you can provide the right solution with the right technology for each customer.
This was some great insight from Matthew. I asked Matthew what he did outside of work, and he explained that he was a big fan of the outdoors, with a fondness for hiking, biking and hunting. He likes to stay active and keeps busy. He is also a self-described “gadget head” and a recording enthusiast. He has his own home audio studio, where he works on his own electronic music. “I tinker with synthesizers and have more keyboards than I probably should,” Matthew told me.
Do you have experience with networked audio and video distribution? How do you feel the move to AV on the network has changed what is possible for commercial applications? Let us know in the comments.