
The live event data network is one of the most underestimated and underspecified elements in modern production infrastructure. As AV-over-IP protocols have migrated from broadcast facilities into touring and live event production — carrying audio, video, control data, and show-critical communications over the same ethernet backbone — the network has transitioned from a convenience infrastructure into a mission-critical system component whose failure is catastrophic. Yet it continues to be treated as an afterthought on productions that would never dream of shipping without a backup console or a redundant LED processor.

The history of this transition is relatively short. In the early 2000s, CobraNet became one of the first audio-over-ethernet protocols to achieve meaningful industry adoption, demonstrating that networks could carry production audio reliably. Dante, introduced by Audinate in 2006, changed the equation entirely — offering low-latency, sample-accurate audio distribution over standard ethernet that quickly became the dominant professional audio networking protocol globally. The proliferation of NDI, SDVoE, SMPTE 2110, and AES67 has since expanded IP-based signal distribution across video, intercom, and control systems, making the network the central nervous system of a modern live production.

Understand What Is Actually Running on Your Network

The first step toward a stable production network is a complete network traffic inventory. This means documenting every device that will be connected, what protocols it runs, what bandwidth it requires, what latency tolerance it has, and what happens to the production if it loses connectivity. The answer to that last question defines the criticality tier for each device and determines its priority in the network architecture.

On a typical large-format live event, the network might simultaneously carry: Dante audio (latency-critical, moderate bandwidth), NDI video feeds (high bandwidth, moderate latency tolerance), grandMA3 lighting control (low bandwidth, extremely latency-critical), show control OSC traffic (low bandwidth, latency-critical), streaming encoder output (high bandwidth, some latency tolerance), stage monitoring from Focusrite RedNet or Audient iD units, and production IT services like file transfer, email, and web browsing. These traffic types have incompatible requirements that cannot coexist on a single unmanaged network without causing each other performance problems.
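The inventory described above can be kept as structured data rather than a spreadsheet footnote, so the criticality tiering falls directly out of it. A minimal sketch in Python — the device names, bandwidth, and latency figures below are illustrative placeholders, not vendor specifications:

```python
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    bandwidth_mbps: float   # sustained bandwidth estimate
    max_latency_ms: float   # tolerance before show impact
    criticality: int        # 1 = show-stopping if lost, 3 = inconvenience

# Illustrative inventory; figures are placeholder estimates.
inventory = [
    TrafficClass("Dante audio",        50,    1.0, 1),
    TrafficClass("NDI video",         150,   80.0, 2),
    TrafficClass("grandMA3 control",    1,    1.0, 1),
    TrafficClass("OSC show control",    1,   10.0, 1),
    TrafficClass("Streaming encoder",  50,  500.0, 2),
    TrafficClass("Production IT",     100, 2000.0, 3),
]

# The criticality tier drives the architecture:
# tier-1 traffic earns an isolated, protected VLAN.
tier1 = [t.name for t in inventory if t.criticality == 1]
```

Keeping this as data makes the next design steps (VLAN assignment, QoS classes) derivable rather than ad hoc.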

VLAN Segmentation Is Non-Negotiable

The foundational architecture for a stable production network is VLAN (Virtual Local Area Network) segmentation. VLANs divide a single physical network infrastructure into multiple logically isolated segments, each carrying only the traffic appropriate to it. Show-critical AV protocols — Dante, lighting control, show control — live on isolated VLANs that receive guaranteed bandwidth and are protected from the traffic spikes generated by non-critical services.

Properly configured VLAN segmentation means that a crew member uploading a large file on the production IT network cannot cause an audio dropout in the Dante network. It means a streaming encoder consuming 50 Mbps of bandwidth cannot starve the lighting control network of the sub-millisecond latency it requires. Without VLAN segmentation, these interference scenarios are not theoretical — they are regularly observed failure modes on poorly designed production networks.

Managed Switches: The Non-Negotiable Hardware Standard

Unmanaged network switches have no place in a professional production network. The managed switch — specifically one that supports VLAN configuration, QoS (Quality of Service) priority queuing, Spanning Tree Protocol (STP), and network monitoring — is the hardware standard for production network infrastructure.

In live event and touring production, the Luminex GigaCore series has achieved dominant adoption because it combines managed switching capability with a hardware design and configuration interface specifically optimized for AV production environments. The GigaCore 14R in particular is a touring rack standard. Cisco Catalyst series and Belden Hirschmann managed switches serve broadcast and larger fixed-installation production environments where deeper network management capability is required. The common thread is that all of these platforms give the network engineer visibility into and control over the traffic behavior that an unmanaged switch cannot provide.

QoS Priority Configuration for AV Traffic

Quality of Service (QoS) configuration is the mechanism that protects latency-critical AV traffic from being delayed by high-bandwidth but latency-tolerant traffic on the same physical infrastructure. QoS works by classifying packets according to their traffic type and applying forwarding priority rules that ensure latency-sensitive streams are processed before bulk data transfers.

For Dante-based audio networks, Audinate publishes explicit QoS configuration recommendations that specify DSCP (Differentiated Services Code Point) markings for Dante audio, PTP clock synchronization traffic, and general network traffic. Following these recommendations on every managed switch in the Dante network is a prerequisite for reliable operation at the sub-1-millisecond latency that professional audio applications require. grandMA3’s network documentation similarly specifies QoS requirements for lighting control traffic that must be observed to achieve the responsive performance that operators expect.
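The mapping from DSCP markings to switch egress queues can be sketched as a lookup table. The values below follow Audinate's published QoS guidance (CS7/56 for PTP clocking, EF/46 for audio), but verify them against the current Audinate documentation before configuring a real switch:

```python
# DSCP-to-queue mapping following Audinate's published Dante QoS guidance;
# verify against current Audinate documentation before deployment.
DSCP_QUEUES = {
    56: "high",    # CS7 -- PTP clock sync; must never wait behind audio
    46: "medium",  # EF  -- Dante audio
    8:  "low",     # CS1 -- reserved
    0:  "none",    # best effort -- everything else
}

QUEUE_ORDER = ["high", "medium", "low", "none"]  # forwarding priority

def classify(dscp: int) -> str:
    """Map a packet's DSCP marking to a switch egress queue."""
    return DSCP_QUEUES.get(dscp, "none")
```

The ordering encodes the key invariant: clock sync outranks audio, and audio outranks everything else, so a bulk transfer can never delay a PTP packet.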

Physical Redundancy and Spanning Tree

Network redundancy at the physical layer means building multiple cable paths between network nodes so that a single cable failure does not drop the network. This requires Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol (RSTP) — a network protocol that prevents the switching loops that physical path redundancy would otherwise create, while enabling automatic rerouting when a path fails.

For productions running Dante in redundant mode, the physical network must include two completely separate switch systems — separate hardware, separate cable paths, ideally separate cable routes to avoid a single physical incident (a cable strike, a spilled liquid) taking out both paths simultaneously. This is a standard broadcast engineering requirement that is increasingly being adopted as baseline specification on high-stakes live event productions as their network dependency deepens.
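The "completely separate" requirement is checkable: the primary and secondary networks must traverse disjoint sets of switches, or a single hardware failure defeats the redundancy. A minimal sanity check — the switch names below are hypothetical:

```python
# Sanity check for a Dante redundant design: primary and secondary networks
# must share no switch hardware. Switch names are hypothetical examples.
primary_path   = ["foh-sw-A", "core-sw-A", "stage-sw-A"]
secondary_path = ["foh-sw-B", "core-sw-B", "stage-sw-B"]

def paths_are_independent(a: list, b: list) -> bool:
    """True only if the two paths traverse disjoint sets of switches."""
    return not set(a) & set(b)

# A shared core switch silently defeats redundancy:
compromised = ["foh-sw-B", "core-sw-A", "stage-sw-B"]
```

Running this kind of check against the network drawing before load-in catches the classic mistake of two "redundant" paths converging on one core switch.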

IP Address Planning and Documentation

A production network without a documented IP addressing scheme is a network that will produce unexplained conflicts and connectivity failures at the worst possible moment. Every device on the production network should have a statically assigned IP address recorded in a network register — not a DHCP lease, which can change between power cycles and silently break device-to-device configurations.

The network register should document device name, device type, IP address, subnet mask, VLAN assignment, MAC address, and the protocol the device is running. This document is a live operational tool — updated whenever a device is added or replaced, and available to the network engineer and the technical director during the show. Network monitoring tools like Dante Controller, MA Network Configuration for grandMA3 systems, or PRTG Network Monitor for broader infrastructure visibility give the network engineer real-time awareness of device status and traffic behavior throughout the production.
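A register kept as machine-readable data can also be validated automatically, catching the duplicate-IP conflicts described above before they surface mid-show. A sketch with the columns listed above — device names, addresses, and MAC values are illustrative placeholders:

```python
import csv
import io

# Illustrative register; device names, IPs, and MACs are placeholders.
register_csv = """\
device,type,ip,subnet,vlan,mac,protocol
FOH console,audio,10.10.10.20,255.255.255.0,10,00:1D:C1:AA:00:01,Dante
Stage box,audio,10.10.10.21,255.255.255.0,10,00:1D:C1:AA:00:02,Dante
Lighting desk,lighting,10.10.20.10,255.255.255.0,20,00:0A:AB:BB:00:01,sACN
"""

def find_ip_conflicts(csv_text: str) -> list:
    """Return (ip, first_device, duplicate_device) for every reused address."""
    seen, conflicts = {}, []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["ip"] in seen:
            conflicts.append((row["ip"], seen[row["ip"]], row["device"]))
        seen[row["ip"]] = row["device"]
    return conflicts
```

Running the check every time a device is added or swapped keeps the register a live operational tool rather than stale paperwork.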

Dealing With Venue Network Infrastructure

Venue-provided network infrastructure is a variable that production teams must approach with structured skepticism. Older venues may be running legacy switches installed before managed switching became standard, undocumented IP address ranges that conflict with production equipment defaults, and network policies tuned for conference use that actively interfere with the multicast traffic AV protocols depend on. Always request network infrastructure documentation from the venue in advance of load-in.
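One conflict class can be screened during advance planning: whether the venue's house subnets overlap the production addressing plan. A sketch using the standard-library `ipaddress` module — the ranges shown are hypothetical values standing in for what the venue's documentation would provide:

```python
import ipaddress

# Hypothetical advance check; ranges are placeholders standing in for
# the subnets listed in the venue's network documentation.
venue_nets = ["192.168.1.0/24", "10.10.10.0/24"]
production_nets = ["10.10.10.0/24", "10.10.20.0/24"]

def overlapping(venue: list, production: list) -> list:
    """List (venue, production) subnet pairs that collide."""
    return [(v, p)
            for v in venue for p in production
            if ipaddress.ip_network(v).overlaps(ipaddress.ip_network(p))]

clashes = overlapping(venue_nets, production_nets)
```

A non-empty result means re-addressing a production VLAN before the trucks roll, instead of chasing phantom connectivity failures on show day.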

For high-stakes productions, the answer to venue network uncertainty is carrying your own complete network infrastructure. A production-owned network kit — matched managed switches, labeled cables in known lengths, a documented IP address plan, and a network engineer who configured and tested the system before it shipped — is the only reliable approach to network stability on productions where connectivity failure has show-stopping consequences. The productions that depend on finding the venue’s IT manager at 8am on show day to unlock a switch port are the productions that go dark for reasons that had nothing to do with the quality of their AV equipment or the expertise of their operators.
