Summary
This postmortem analyzes a common failure mode in early-stage “music platform” projects: oversimplifying audio streaming architecture and assuming that HLS is the only viable method. The incident stems from incomplete understanding of streaming models, delivery protocols, and scalability constraints.
Root Cause
The root cause was limited awareness of the full spectrum of audio‑streaming approaches, leading to a design that was too narrow and not aligned with real-world production systems.
Key contributing factors:
- Assuming HLS is the only standard because it is widely documented
- Not distinguishing between progressive download, pseudo‑streaming, and adaptive streaming
- Underestimating the operational complexity of real streaming platforms
- Lack of exposure to CDN‑based delivery patterns
Why This Happens in Real Systems
Real systems fail in similar ways because:
- Engineers often default to the first working solution instead of surveying the ecosystem
- Audio streaming involves multiple layers (transport, buffering, codecs, caching, CDN behavior)
- Documentation online is fragmented, causing knowledge gaps
- Teams underestimate latency, bandwidth variability, and device compatibility
Real-World Impact
When teams choose the wrong streaming method:
- High latency due to large segment sizes or slow startup
- Poor user experience on mobile networks
- Server overload when using non-cache-friendly delivery
- Inability to scale beyond a few hundred concurrent listeners
- Increased storage and compute costs
Example
Below is a minimal Spring Boot controller showing progressive download, the simplest form of audio delivery:
import java.io.File;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@GetMapping("/audio/{id}")
public ResponseEntity<Resource> stream(@PathVariable String id) {
    File file = new File("/audio/" + id + ".mp3");
    // Advertising byte ranges lets clients seek; Spring MVC serves
    // Range requests automatically for Resource return values.
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_TYPE, "audio/mpeg")
            .header(HttpHeaders.ACCEPT_RANGES, "bytes")
            .contentLength(file.length())
            .body(new FileSystemResource(file));
}
How Senior Engineers Fix It
Senior engineers evaluate all streaming models and choose based on scale, latency, and device support.
1. Progressive Download
Pros
- Easiest to implement
- Works everywhere
- Cache-friendly
Cons
- No adaptive bitrate
- Slow startup on large files
2. HTTP Range Requests (Pseudo‑Streaming)
Pros
- Allows seeking
- Still simple
- CDN-friendly
Cons
- Still not adaptive
- Not ideal for unstable networks
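The core of pseudo-streaming is resolving the client's Range header into a byte window to serve with a 206 Partial Content response. The sketch below is plain Java for illustration; the class and method names are assumptions, not a framework API. It handles the common single-range forms "bytes=start-end", "bytes=start-", and "bytes=-suffix".

```java
// Illustrative Range-header parser for pseudo-streaming (not a real framework API).
public final class RangeParser {
    /** Returns {start, endInclusive} within a resource of the given length,
     *  or null if the header is absent, malformed, or unsatisfiable. */
    public static long[] parse(String rangeHeader, long length) {
        if (rangeHeader == null || !rangeHeader.startsWith("bytes=")) return null;
        String spec = rangeHeader.substring("bytes=".length());
        int dash = spec.indexOf('-');
        if (dash < 0) return null;
        String startPart = spec.substring(0, dash);
        String endPart = spec.substring(dash + 1);
        try {
            if (startPart.isEmpty()) {
                // Suffix range: the last N bytes, e.g. "bytes=-500"
                long suffix = Long.parseLong(endPart);
                if (suffix <= 0) return null;
                return new long[] {Math.max(0, length - suffix), length - 1};
            }
            long start = Long.parseLong(startPart);
            long end = endPart.isEmpty() ? length - 1 : Long.parseLong(endPart);
            end = Math.min(end, length - 1);           // clamp to resource size
            if (start > end || start >= length) return null;  // unsatisfiable
            return new long[] {start, end};
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```

In practice a server would respond 206 with the resolved window and a Content-Range header, or 416 when the parse fails; Spring MVC already does this for Resource return values, so the sketch is only about what happens under the hood.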
3. HLS (HTTP Live Streaming)
Pros
- Industry standard for music and podcasts
- Adaptive bitrate
- Works well with CDNs
- Supported by iOS, Android, browsers
Cons
- Requires segmenting audio
- Added startup latency from segmentation
- More complex pipeline
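The segmenting step produces small audio chunks plus a playlist that the player fetches and follows. A minimal VOD media playlist looks like the fragment below; the segment names and durations are illustrative:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.aac
#EXTINF:10.0,
segment1.aac
#EXT-X-ENDLIST
```

Adaptive bitrate comes from a master playlist that lists several such media playlists at different bitrates, letting the player switch as network conditions change.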
4. DASH (Dynamic Adaptive Streaming over HTTP)
Pros
- Similar to HLS
- More flexible codec support
Cons
- Weaker iOS support
- More complex tooling
5. WebRTC (rare for audio platforms)
Pros
- Ultra-low latency
- Real-time communication
Cons
- Overkill for music platforms
- Hard to scale
- Requires STUN/TURN infrastructure
6. Custom TCP/UDP Streaming (Shoutcast/Icecast style)
Pros
- Very low latency
- Good for radio-style streams
Cons
- Not cacheable
- Harder to scale
- Not ideal for on-demand playback
7. CDN‑Accelerated Object Streaming (S3 + CloudFront)
Pros
- Extremely scalable
- Cheap
- Works with HLS, DASH, or progressive
Cons
- Requires CDN configuration
- Not a protocol by itself
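Much of what makes CDN delivery work is simply emitting cache headers the edge can honor. A minimal sketch, assuming immutable audio objects (the helper class and chosen directive values are illustrative, not a library API):

```java
// Illustrative helper for CDN-friendly cache headers on immutable audio objects.
public final class CacheHeaders {
    /** Builds a Cache-Control value allowing edge nodes to cache aggressively. */
    public static String audioCacheControl(long maxAgeSeconds) {
        return "public, max-age=" + maxAgeSeconds + ", immutable";
    }
}
```

Attaching such a header to the progressive-download or segment responses lets CloudFront (or any HTTP cache) absorb repeat requests instead of the origin.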
Why Juniors Miss It
Junior engineers often miss these nuances because:
- They focus on implementation, not architecture
- They lack experience with CDNs, caching, and network behavior
- They assume “streaming” means “HLS” because that’s what tutorials show
- They haven’t yet seen production-scale traffic patterns
- They underestimate the importance of adaptive bitrate and seek behavior
Senior engineers succeed because they evaluate the entire delivery pipeline, not just the code.