Summary
The issue described is not a bandwidth or CPU resource constraint, but rather a network transport failure causing UDP packet loss. The Jitsi Meet system interprets 4–18% packet loss as a degraded connection and proactively disables video streams to preserve the audio channel and session stability. The root cause lies in the VPS network path, likely involving MTU mismatches, carrier-grade NAT (CGNAT), or poor UDP peering, rather than the application configuration or raw server capacity.
Root Cause
The primary cause is persistent UDP packet loss on the specific VPS network link.
- High UDP Drop Rate (4–18%): The Jitsi Videobridge (JVB) relies heavily on UDP for real-time media transport. When UDP packets are dropped by intermediate routers or the ISP, the bridge detects a “poor connection.”
- BWE (Bandwidth Estimation) Throttling: The JVB congestion control algorithm reduces the video bitrate to zero when packet loss exceeds the acceptable threshold (typically ~2–5%), triggering the “video disabled to save bandwidth” state.
- MTU Fragmentation Issues: The VPS network interface may be configured with a standard MTU (1500), but the underlying ISP path requires a lower MTU (e.g., 1492 or 1400). If “Don’t Fragment” (DF) bits are set, large UDP packets are silently dropped, causing the observed loss percentage.
- Asymmetric Routing/Firewall: The VPS might be receiving traffic correctly but using a different route to send traffic, causing UDP return packets to be lost or filtered.
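The throttling behavior can be sketched as a simple loss-based rule in the spirit of WebRTC congestion control: back off multiplicatively above a loss threshold, probe slowly upward when loss is negligible, and hold otherwise. The thresholds (10% / 2%), multipliers, and starting rate below are illustrative assumptions, not Jitsi’s exact configuration.

```shell
# Sketch of a loss-based bandwidth-estimation step. All numbers here are
# illustrative assumptions, not the JVB's real parameters.
rate_kbps=2500
loss=0.15   # 15% observed packet loss, inside the reported 4-18% range
new_rate=$(awk -v r="$rate_kbps" -v l="$loss" 'BEGIN {
  if (l > 0.10)      printf "%d", r * (1 - 0.5 * l)  # heavy loss: multiplicative decrease
  else if (l < 0.02) printf "%d", r * 1.08           # negligible loss: probe upward
  else               printf "%d", r                  # in between: hold the rate
}')
echo "${new_rate} kbps"
```

At 15% loss this single step already cuts the rate from 2500 to 2312 kbps; because the estimator runs on every feedback report, sustained loss drives the estimate toward zero, which surfaces in the UI as “video disabled.”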
Why This Happens in Real Systems
Real-time media protocols (WebRTC) prioritize low latency over reliability. Unlike TCP, which retransmits lost data, UDP is “fire and forget”: dropped packets are simply gone, and the receiver must cope with the holes.
- Protocol Sensitivity: WebRTC’s Congestion Control (e.g., Google GCC) is designed to react aggressively to packet loss to prevent network collapse. If loss > 1%, it cuts bitrate.
- VPS Network Quality Variance: High-spec VPS instances often share physical network cards. A provider might offer 3.5Gbps theoretical throughput but use aggressive traffic shaping or lower-priority queuing for UDP traffic compared to TCP.
- Stateful Firewall Limits: The VPS firewall (iptables/nftables) or the ISP’s NAT gateway might have a small UDP session table timeout, dropping “quiet” packets faster than the JVB keep-alives.
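If the conntrack-timeout theory fits, the relevant kernel knobs can be raised so idle UDP mappings outlive the media keep-alive interval. A sketch of a sysctl drop-in, assuming the nf_conntrack module is loaded on the VPS; the values are illustrative, not recommendations:

```shell
# /etc/sysctl.d/99-jitsi-udp.conf (sketch; values are illustrative assumptions)
# Unreplied UDP entries: keep well above the media keep-alive interval.
net.netfilter.nf_conntrack_udp_timeout = 60
# Established (bidirectional) UDP streams, e.g. the port-10000 media flow.
net.netfilter.nf_conntrack_udp_timeout_stream = 180
```

Apply with sysctl --system, then re-check whether “quiet” streams still drop. Note this only helps if the drops happen on the server’s own firewall, not upstream at the ISP’s NAT.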
Real-World Impact
- Degraded User Experience: Participants see “Video Disabled” icons, forcing reliance on audio-only mode or screen sharing, which may also stutter.
- False Resource Perception: Engineers waste time increasing CPU/RAM limits when the hardware is barely utilized (2% CPU).
- Migrating to “Better” Hardware: The user moved from a stable Google Cloud instance to a VPS, assuming hardware was the bottleneck. In reality, the Google Cloud network stack was masking the UDP/MTU issues that the new VPS provider exposes.
Example or Code
If you suspect MTU issues (the most common culprit for this specific symptom on VPSs), you can verify the path MTU.
Run this on the Jitsi server to test MTU connectivity to a client or the Google DNS (8.8.8.8). Adjust the size (-s) until you find the maximum non-fragmented size.
ping -M do -s 1472 8.8.8.8
If the ping above fails (e.g., “Message too long,” “Frag needed and DF set,” or no response), try a lower size:
ping -M do -s 1400 8.8.8.8
If 1472 fails but 1400 succeeds, your network requires an MTU lower than 1500.
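The arithmetic behind those payload sizes: an ICMP echo adds an 8-byte ICMP header and a 20-byte IPv4 header on top of the -s payload, so the largest payload that survives with -M do implies the path MTU directly. A quick sanity check:

```shell
# Largest ping payload that passed with -M do (the example value from above).
payload=1472
# The IPv4 header (20 bytes) + ICMP header (8 bytes) sit on top of the payload.
mtu=$((payload + 20 + 8))
echo "path MTU is at least ${mtu}"
```

For 1472 this yields 1500; if only 1400 passes, the path MTU is at most 1428 and the interface MTU should be lowered accordingly.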
To bind the JVB to a specific interface or to force specific ICE candidates, inspect the /etc/jitsi/videobridge/sip-communicator.properties file:
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=10.0.0.100
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.1
(Note: Ensure the IP addresses match your server’s internal and public IPs).
How Senior Engineers Fix It
Seniors focus on the network layer first, ignoring the high-level UI flags.
- Verify UDP Connectivity: Use tcpdump on the server to confirm UDP packets are actually reaching the interface and being sent out.
tcpdump -i any -n udp port 10000
- Adjust MTU: Manually set the MTU on the server’s network interface (e.g., 1450 or 1400) to account for VPNs (like WireGuard) or ISP overhead.
ip link set dev eth0 mtu 1450
- Enable TCP Fallback: If UDP is strictly blocked or throttled, configure Jitsi to allow the media to tunnel over TCP (slower, but works). This is done in the JVB config or Nginx WebSocket proxying.
- Isolate the Provider: Spin up a test container on the VPS and run an iperf3 UDP test against a stable remote host to measure raw UDP loss outside of Jitsi. If loss persists, contact the VPS provider immediately about network quality.
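When reading the iperf3 UDP summary (or the JVB stats), the raw datagram counters translate directly into the loss percentage the bridge reacts to. A minimal sketch with assumed counts; 600 lost out of 10,000 sent lands squarely inside the observed 4–18% range:

```shell
# Illustrative counters, as an iperf3 UDP summary might report them.
# These are assumed values, not real measurements.
sent=10000
lost=600
# Integer percentage of datagrams that never arrived.
loss_pct=$((100 * lost / sent))
echo "${loss_pct}% UDP loss"
```

Anything persistently above the ~2–5% threshold mentioned earlier will keep video suppressed no matter how much CPU, RAM, or headline bandwidth the VPS has.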
Why Juniors Miss It
Juniors typically focus on the application logic and resource metrics provided by the dashboard.
- Misreading “Bandwidth”: They see “2TB Bandwidth” and assume capacity, not realizing that packet quality (loss/jitter) matters more than total bytes transferred.
- Blaming Configuration: They tweak lastN, simulcast, and bitrate limits. While these control how much data is sent, they cannot fix a link that drops packets.
- Ignoring the “Inactive” Icon Details: The UDP loss indicator (4–18%) in the “inactive” user stats is the smoking gun, but juniors often dismiss it as a symptom rather than the root cause.
- Hardware Bias: They assume “more CPU” or “more RAM” will fix a network stutter, failing to diagnose the infrastructure layer.