CC Error Troubleshooting Guide
Continuity Counter (CC) errors detected on incoming streams within Zixi Broadcaster / ZEN Master.

What are CC errors?

The continuity counter (CC) is a 4-bit sequence number (0-15, wrapping) embedded in the header of every MPEG-2 transport stream (TS) packet, maintained independently for each PID. Each successive packet carrying payload on the same PID must increment the counter by exactly 1 (modulo 16). A CC error is triggered when that sequence breaks, which typically indicates packet loss, packet duplication, packet reordering, or a fault in the upstream encoder/multiplexer.

CC errors are a symptom of a problem, not a root cause. The troubleshooting goal is always to identify where in the signal chain the break is occurring.

Troubleshooting Workflow

Work through the following steps in order, from the source along the stream towards the target. Diagrams within ZEN Master can provide a visual overview of the workflow. Resolve the issue at the earliest point of failure before proceeding.

Situation: the target customer advises they are seeing CC errors.

Check Content Analysis within ZEN Master, starting at the source. If you observe CC errors within the histogram, compare the network histogram to see whether drops or spikes align with the same timestamps as the CC errors.

(Screenshot: comparing CC errors with network activity)

The Event Log will contain useful information relevant to the CC errors, such as the specific PID affected.

(Screenshot: Event Log showing continuity count errors)

Step 1: Confirm the CC errors are originating from the source

What to check: before assuming a network or Zixi issue, verify whether CC errors are present before the stream even reaches Zixi.

How:
- If you have access to the encoder or upstream device, capture a local TS (e.g. via a loop output or a direct ASI/IP tap) and analyze it with a tool such as TSReader, Wireshark (MPEG plugin), or VLC.
- In ZEN Master, navigate to the affected input object and check the TR 101 290 histogram under
"History".
- If CC errors are visible immediately in the analysis for this source, with no matching network anomalies (drops or spikes), the source is the likely culprit.
- In Zixi Broadcaster, go to Inputs → [select stream] → Stream Analysis → CC error histogram and note whether CC errors are present from the moment the stream connects, or only appear intermittently.

(Screenshot: Stream Analysis icon)

Root cause: CC errors are present at the source, before any Zixi processing.

Advice: escalate to the source provider (encoder/mux operator team). The issue lies upstream of Zixi and must be resolved at the encoding device. Provide them with a TS capture or CC error counts with timestamps as evidence.

Step 2: Check encoder configuration

What to check: even if the source appears healthy, incorrect encoder settings can produce malformed TS output. Check the following on the encoder.

Restart the encoder: even a correctly configured encoder can begin generating CC errors after running for an extended period without a restart. This is not a network or Zixi issue; it is an internal encoder process failure. Common causes include:
- internal mux scheduler corruption
- a memory leak in the encoder process
- PCR clock drift accumulation
- a thread deadlock in the mux pipeline
- a PID counter wraparound bug
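A runtime-degradation pattern of this kind is easiest to confirm with timestamps. The sketch below is a hypothetical helper (not a Zixi or encoder tool): given known encoder restart times and the times of CC-error bursts, it computes the encoder uptime at each burst, so a recurring interval becomes obvious.

```python
from datetime import datetime

def uptime_at_bursts(restarts, bursts):
    """For each CC-error burst, return the encoder uptime (in hours)
    since the most recent restart preceding the burst."""
    restarts = sorted(restarts)
    uptimes = []
    for burst in sorted(bursts):
        # Find the latest restart at or before this burst.
        prior = [r for r in restarts if r <= burst]
        if not prior:
            continue  # burst predates all known restarts; skip it
        uptimes.append((burst - prior[-1]).total_seconds() / 3600)
    return uptimes

# Illustrative timestamps (assumed, not from a real incident):
restarts = [datetime(2024, 5, 1, 8, 0), datetime(2024, 5, 8, 9, 30)]
bursts = [datetime(2024, 5, 7, 20, 0), datetime(2024, 5, 15, 0, 30)]
print(uptime_at_bursts(restarts, bursts))  # clustered values suggest a runtime pattern
```

If the uptimes cluster (here around six and a half days), document that interval; it is exactly the evidence a firmware bug report needs.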
Bitrate stability: is the encoder configured for CBR (constant bit rate)? VBR streams with aggressive rate changes can cause buffer overflows in downstream muxers, leading to dropped packets and CC errors.

PCR interval: PCR should be injected every 20-100 ms per PID. Misconfigured PCR spacing causes downstream sync issues that can manifest alongside CC errors.

Mux rate overhead: ensure the encoder's configured mux rate includes sufficient null-packet stuffing. A mux rate too close to the active content bitrate leaves no headroom and causes packet drops under load.

Multiple audio PIDs: streams with many audio PIDs (e.g. multilingual feeds) are more susceptible to mux scheduling issues. Confirm that all PIDs are being correctly scheduled by the encoder's internal muxer.

Encoder firmware/software version: known bugs in specific encoder firmware versions from manufacturers such as Sencore, Ateme, Elemental, and others can generate CC errors. Check the manufacturer's release notes for TS-related fixes, and update the device firmware for testing.

How to identify an encoder process issue:
- CC errors are present on a direct tap of the encoder output (before any network or Zixi involvement).
- CC errors affect only specific PIDs, particularly lower-activity ones.
- The issue began, gradually or suddenly, after the encoder had been running for an extended period (days/weeks) without a restart.
- A cold restart of the encoder (full power cycle or process restart) resolves the CC errors immediately and cleanly.
- Errors reappear after the same approximate uptime interval, confirming a runtime degradation pattern.

Root cause: encoder misconfiguration or a firmware bug producing CC errors at the output.

Next steps: correct the encoder settings (CBR, PCR interval, mux rate) or update the firmware. If the errors appeared after a long runtime, perform a full encoder restart and monitor whether the CC errors clear; if they do, and then recur after extended uptime, document the interval and raise a firmware bug report with the encoder manufacturer. Request a clean TS capture directly from the encoder output port to confirm the before/after state.
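When analyzing such a capture, the continuity check itself is simple to reproduce. The following is a minimal sketch, an illustration rather than a replacement for TSReader or full TR 101 290 analysis: it walks a raw .ts file as 188-byte packets and counts CC breaks per PID, skipping null packets and packets without payload (whose counters do not advance), and ignoring the duplicate-packet and discontinuity-indicator exceptions the full spec allows.

```python
from collections import defaultdict

def count_cc_errors(data: bytes, packet_size: int = 188):
    """Count continuity-counter breaks per PID in a raw TS byte string."""
    last_cc = {}
    errors = defaultdict(int)
    for off in range(0, len(data) - packet_size + 1, packet_size):
        pkt = data[off:off + packet_size]
        if pkt[0] != 0x47:          # not aligned on the TS sync byte
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        afc = (pkt[3] >> 4) & 0x3   # adaptation_field_control
        cc = pkt[3] & 0x0F
        if pid == 0x1FFF or afc in (0, 2):
            continue                # null packet / no payload: CC does not increment
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors[pid] += 1        # counter did not advance by exactly 1 (mod 16)
        last_cc[pid] = cc
    return dict(errors)

# Usage: count_cc_errors(open("capture.ts", "rb").read())
```

On a healthy capture the result is an empty dict; per-PID counts with timestamps from a tool like this are exactly the evidence to attach when escalating to the encoder manufacturer.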
Step 3: Investigate the network path

What to check: CC errors introduced during transit indicate packet loss, reordering, or duplication on the network between the source and the Zixi Broadcaster input.

How:
- In ZEN Master, check the network quality graph for the affected stream; high packet-loss percentages correlate directly with CC errors.
- In Zixi Broadcaster input statistics (select a stream in Broadcaster and statistics appear at the bottom of the interface window), review:
  - Packets lost: any value above 0 that is not being recovered indicates network-level loss.
  - ARQ/FEC recovery rate: if recovery is failing to keep up, the network loss exceeds what the Zixi protocol can correct.
  - Jitter: high or spiking jitter values suggest packet reordering or an unstable network path, both of which cause CC errors.

(Screenshot: statistics showing jitter, packet loss, FEC/ARQ)

- Run a parallel-path test: if possible, send a test stream over a different network path and compare CC error rates.
- For streams transiting the public internet, check whether the issue is ISP-specific or time-of-day related (indicating congestion on a shared path).

Root cause: network packet loss, jitter, or reordering in transit.

Next steps: increase the Zixi latency buffer on the input; a higher latency setting gives the ARQ/FEC mechanism more time to recover lost packets before they are passed downstream (we recommend a minimum of 1000 ms). If packet loss is severe, engage the customer's network team or ISP to investigate the transport path. For persistent public-internet issues, we recommend the customer consider a more stable transport option (dedicated circuit, managed IP transit, or bonded links).
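The reasoning behind the latency recommendation can be made concrete with a back-of-the-envelope model. This is a simplification, not Zixi's actual recovery algorithm: a lost packet can only be re-requested while it is still inside the latency buffer, and each ARQ attempt costs roughly one round trip, so the number of retransmission opportunities is approximately latency divided by RTT.

```python
def arq_attempts(latency_ms: float, rtt_ms: float) -> int:
    """Approximate ARQ retransmission opportunities within the latency
    buffer: each attempt consumes roughly one round trip."""
    return int(latency_ms // rtt_ms)

# On a 50 ms RTT path, a 1000 ms buffer allows ~20 retry opportunities,
# while a 100 ms buffer allows only ~2.
print(arq_attempts(1000, 50), arq_attempts(100, 50))
```

This is why 2-3x RTT is a floor rather than a target on lossy paths: more headroom means more chances to recover each lost packet before it must be released downstream.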
Step 4: Check Zixi Broadcaster input buffer and latency settings

What to check: if the network appears healthy but CC errors are still present on the Zixi input, the Broadcaster's buffer configuration may not be correctly matched to the network conditions.

How: in Zixi Broadcaster, navigate to Inputs → [stream] → Edit and review:
- Latency: this defines the size of the jitter buffer and recovery window. If latency is set too low for the RTT of the connection, packets that arrive slightly late will be discarded rather than recovered, generating CC errors downstream. Recommended starting point: set latency to at least 2-3x the measured RTT between source and Broadcaster.
- Max bitrate: ensure this is set high enough to accommodate the source stream's peak bitrate, including TS overhead. An undersized max-bitrate cap will cause the stream to drop incoming packets.
- FEC/ARQ settings: verify that the error-correction mode configured on the input matches what the sending encoder supports. A mismatch disables recovery silently.

Root cause: latency buffer too small for the network conditions, or input misconfiguration.

Advice: increase the latency value on the input. A conservative approach is to double the current value, observe the CC error rate, and adjust from there.

Step 5: Check server resources on the Broadcaster host

What to check: on high-density Broadcaster deployments, resource exhaustion on the host server can cause the Broadcaster process to drop packets internally, generating CC errors that are not network-related.

How: SSH into the Broadcaster host and check the following.

# CPU usage
top
The %Cpu(s) line at the top shows the breakdown of CPU usage (user, system, idle, etc.). To see usage per CPU core, press the 1 key while top is running.

top -H -o %CPU -b -n1 > top_cpu.txt
This command captures a single snapshot of per-thread CPU usage, sorted by %CPU, and writes it to a text file.

# Network interface errors and drops
ip -s link show <interface>
Provides detailed statistics for the network interface (omit the interface name to list all interfaces), including dedicated counters for errors and dropped packets.

Also check the Broadcaster host's NIC receive-buffer drops:
ethtool -S <interface>
Look for fields with names like rx_dropped, rx_queue_0_drops, rx_discards, or similar driver-dependent counters. Any non-zero RX drop values indicate that packets are being discarded at the NIC level before Zixi even processes them.
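Because the counter names vary by driver, eyeballing ethtool -S output is error-prone. A small parser can flag every non-zero drop or discard counter; the sketch below assumes the usual "name: value" line format, and the sample output is illustrative rather than from any specific NIC.

```python
def nonzero_drop_counters(ethtool_output: str) -> dict:
    """Return {counter_name: value} for every non-zero drop/discard
    counter in `ethtool -S <iface>` output."""
    found = {}
    for line in ethtool_output.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        name, value = name.strip(), value.strip()
        if not value.lstrip("-").isdigit():
            continue  # header lines or non-numeric fields
        if any(k in name.lower() for k in ("drop", "discard")) and int(value) != 0:
            found[name] = int(value)
    return found

sample = """NIC statistics:
     rx_packets: 184332211
     rx_dropped: 0
     rx_queue_0_drops: 153
     rx_discards: 7
     tx_errors: 0
"""
print(nonzero_drop_counters(sample))  # {'rx_queue_0_drops': 153, 'rx_discards': 7}
```

Sampling this twice a few seconds apart shows whether the counters are still growing, i.e. whether the NIC is actively dropping packets right now rather than having dropped them in the past.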
Root cause: server CPU overload, memory pressure, or NIC buffer exhaustion causing internal packet drops.

Next steps:
- Reduce the number of streams on the affected Broadcaster, or redistribute load across multiple instances.
- Increase the NIC ring buffer size. To increase the network interface card (NIC) ring buffer size on Linux and reduce packet drops, use:
  ethtool -G <interface> rx <size> tx <size>
  to set both the receive (RX) and transmit (TX) buffers, typically to a maximum of 4096 or 8192. Run:
  ethtool -g <interface>
  to view the current and maximum supported ring buffer sizes.
- For AWS deployments, verify that the instance type is appropriately sized for the number of streams and bitrates. Consider Elastic Network Adapter (ENA) optimized instance types for high-throughput workloads.
- If using DPDK, validate that the DPDK configuration is applied correctly for the interface and kernel version in use.

Step 6: Examine the output path and downstream decoder

What to check: if no CC errors are reported on the Zixi input but the downstream IRD, decoder, or monitoring tool is reporting them, the issue may be introduced during Zixi output processing or on the network segment between the Broadcaster and the decoder.

How:
- In Broadcaster Outputs, check the output statistics for the affected stream and compare CC error counts on input vs. output.
- If transcoding is active on the stream, CC errors can be introduced if the transcoder is dropping frames under load. Disable transcoding temporarily to isolate and test.
- Capture the TS at the Broadcaster's output IP (e.g. via Wireshark on the LAN) and verify CC integrity at that point, before it reaches the decoder; alternatively, send a duplicate test stream to a test target and verify there.
- For IRDs (e.g. Synamedia, Ericsson), be aware that some devices report CC errors for packets with the discontinuity indicator set, which Zixi may legitimately insert during stream reconnects or failover events. Confirm whether the CC errors correlate with connection events.
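Whether a reported CC break is legitimate can be verified in a capture by inspecting the packet's adaptation field. A minimal sketch, assuming 188-byte TS packets as before: the discontinuity indicator is the most significant bit of the adaptation field's flags byte, and when it is set a CC jump on that PID is permitted by the MPEG-2 TS specification.

```python
def has_discontinuity_flag(pkt: bytes) -> bool:
    """True if a 188-byte TS packet carries an adaptation field whose
    discontinuity_indicator bit is set (a CC jump is then legal)."""
    afc = (pkt[3] >> 4) & 0x3   # adaptation_field_control
    if afc < 2:
        return False            # no adaptation field present
    af_len = pkt[4]
    if af_len == 0:
        return False            # adaptation field has no flags byte
    return bool(pkt[5] & 0x80)  # discontinuity_indicator is the flags MSB

# Synthetic example: adaptation field of length 1, only the discontinuity bit set.
pkt = bytes([0x47, 0x01, 0x00, 0x30, 0x01, 0x80]) + bytes(182)
print(has_discontinuity_flag(pkt))  # True
```

If the flagged packets line up with reconnect or failover events in the Broadcaster log, the decoder's CC-error count is a false positive as far as the spec is concerned.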
Root cause: CC errors introduced at the output stage (transcoding, output misconfiguration), or falsely reported by the decoder.

Advice:
- If transcoding is the cause, review the transcoder resource allocation and reduce the transcoding load.
- If errors correlate with reconnects, these are expected transient events. Advise the customer that their decoder should tolerate the discontinuity indicator per the MPEG-2 TS specification.
- If false positives from the decoder are suspected, cross-reference with a second analysis tool (TSReader, VLC) to confirm whether CC errors are genuinely present in the TS.

Troubleshooting Decision Tree

CC errors reported on stream:
- Present at encoder output? → Yes → encoder issue (Step 2)
- Network packet loss / jitter? → Yes → network/latency issue (Steps 3-4)
- Server resources exhausted? → Yes → host resource issue (Step 5)
- Errors only at output / decoder? → Yes → output/decoder issue (Step 6)
- No clear cause found → capture a full TS at input and output and escalate to Zixi Support.

Escalation Checklist

When escalating to Zixi Support, include as much of the following as possible so that the team can investigate the issue:
- Broadcaster version and OS
- Stream name and connection type (push/pull, protocol used)
- CC error count over time (input and output statistics screenshots) with timestamps
- Zixi Broadcaster log covering the period of errors (/var/zixi/broadcaster/log/)
- Network path description (public internet, dedicated circuit, AWS region, etc.)
- Latency and FEC/ARQ settings on the affected input
- TS capture at the Broadcaster input (Wireshark pcap or TSReader analysis, if available)
- Notes on any recent changes to encoder, network, or Broadcaster configuration
