What Netflix's billion dollars can't buy

Posted on: 27 December 2025

On Christmas Day 2025, Netflix streamed two NFL games to its American subscribers. For the second consecutive year, it was a technical disaster. Constant buffering, video quality degrading to what viewers described as "Nintendo 64 graphics", audio sync issues, casting failures. Half a million complaints on social media. Subscribers who had paid to watch football found themselves staring at spinning wheels.

That same evening, Amazon Prime Video streamed a third game. Zero issues.

The technical explanation is worth understanding.

Netflix has built something called Open Connect. It's a proprietary network of physical servers, called Open Connect Appliances, installed directly inside Internet Service Provider infrastructure worldwide. Over six thousand of them, scattered across the globe. Netflix has invested more than a billion dollars in this system.

Here's how it works: overnight, when internet traffic is low, these servers fill up with content that algorithms predict will be watched the following day. When a user presses play, the film or series is already there, milliseconds away. It doesn't need to cross oceans or continents. It's sitting in your provider's server room, practically in your living room.
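
To make that concrete, here is a minimal sketch of the overnight pre-fill decision. The catalogue, the popularity forecast, and the cache size are all invented for illustration; Netflix's real placement logic is of course far more sophisticated than a greedy ranking.

```python
# Illustrative sketch only: pick which titles to push to an edge appliance
# overnight, given a (hypothetical) forecast of tomorrow's demand.
from dataclasses import dataclass


@dataclass
class Title:
    name: str
    size_gb: float
    predicted_views: int  # assumed forecast for this ISP's region


def plan_overnight_fill(catalog: list[Title], cache_capacity_gb: float) -> list[Title]:
    """Greedily fill the cache with the most-watched gigabytes first."""
    ranked = sorted(catalog, key=lambda t: t.predicted_views / t.size_gb, reverse=True)
    plan, used = [], 0.0
    for title in ranked:
        if used + title.size_gb <= cache_capacity_gb:
            plan.append(title)
            used += title.size_gb
    return plan


catalog = [
    Title("new_series_s01", 120.0, 900_000),
    Title("holiday_film", 45.0, 400_000),
    Title("back_catalog_drama", 60.0, 30_000),
]
for t in plan_overnight_fill(catalog, cache_capacity_gb=170.0):
    print("pre-positioning", t.name)
```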

It's an engineering masterpiece. Netflix handles fifteen percent of all American internet traffic during peak hours, and manages it without clogging major backbones because content is pre-distributed to the network's edges.

The problem is that none of this works for live content. You cannot pre-position something that doesn't exist yet.

A football game happens in real time. Every frame must be captured, encoded, segmented into small chunks, and pushed to millions of simultaneous viewers. It's a completely different problem from distributing a film.
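
The contrast shows up even in a toy sketch. In a live pipeline, each chunk of video exists only once that slice of the match has been played and encoded, so there is no overnight window in which to stage it anywhere; the two-second segment length and file names below are assumptions, not Netflix's actual pipeline.

```python
# Toy illustration of a live segmenter: chunks come into existence in real
# time, and every viewer wants each one the moment it appears.
import time

SEGMENT_SECONDS = 2  # assumed chunk length, in the spirit of HLS/DASH


def live_segmenter(total_segments: int):
    """Yield segment names as they become available, in real time."""
    for n in range(total_segments):
        time.sleep(SEGMENT_SECONDS)  # wait for the next two seconds of play to happen
        yield f"segment_{n:05d}.ts"  # only now does this chunk exist anywhere


for segment in live_segmenter(total_segments=3):
    # The instant a segment exists, millions of players request it at once.
    print("publishing", segment, "at", time.strftime("%H:%M:%S"))
```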

Netflix spent fifteen years perfecting the world's best on-demand distribution system. Now they're trying to use a hammer when the job requires a screwdriver.

The cosmic irony of AWS

The irony reaches cosmic proportions when you discover that Netflix, for its entire backend infrastructure, uses AWS. Amazon Web Services. The cloud platform of its direct competitor, whose streaming service, Prime Video, has been broadcasting live sports flawlessly for years.

Amazon was built to handle catastrophic demand spikes. Black Friday, Prime Day, product launches. Their infrastructure was designed from day one to support millions of simultaneous requests for the same resource. When they started broadcasting Thursday Night Football, they already had everything they needed.

Netflix built the opposite: a system optimised to distribute different content to different users, not the same content to everyone simultaneously.

A Temple University professor summarised it brutally: "Amazon is a tech company in the true sense of the word with a cloud computing service equipped to handle millions of viewers watching live events. Netflix is more of a hybrid media distribution company whose business is obtaining access to video programming and delivering it to audiences. They didn't begin as a true tech company like Amazon."

Netflix is a media company that does technology. Amazon is a tech company that does media. It seems a subtle distinction, but when you need to stream a football game to fifty million people simultaneously, it makes all the difference.
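
The scale of "the same content to everyone at once" is easy to underestimate. A back-of-envelope calculation, with a per-stream bitrate I've assumed purely for illustration, shows why the fan-out has to happen close to the viewer rather than at a central origin.

```python
# Rough arithmetic with my own illustrative figures, not Netflix's numbers.
viewers = 50_000_000   # concurrent streams of one game
bitrate_mbps = 5       # assumed average HD bitrate per stream
total_tbps = viewers * bitrate_mbps / 1_000_000
print(f"aggregate egress: {total_tbps:.0f} Tbps")  # ~250 Tbps, all for a single live feed
```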

We solved this in 2006

Reading the technical post-mortems, I found myself smiling. Not out of schadenfreude, but because of how familiar it all sounded.

In 2005, I was sitting in a café in Piazza del Popolo in Rome, surrounded by people any sensible observer would have called mad. We were discussing how to distribute digital films to cinemas. The format that would later be called DCP didn't have a name yet. We called them "encrypted files" or something similar, because that's essentially what they were.

The problem was simple to state and seemingly impossible to solve. Films weighed hundreds of gigabytes. Internet connections were what they were. And most cinemas didn't even have a decent data line. How do you get a film from a datacenter to a provincial cinema before Friday evening's screening?

That morning, the idea that would become Microcinema Spa was born.

By 2006, we had it working. Files originated in COLT Telecom's datacenter in Turin and were uplinked to Eutelsat satellites via a dedicated CDN running through Turin's teleport, which had been upgraded for that year's Winter Olympics. Microsoft joined the project, contributing encryption and digital rights management expertise and recycling work from the Magex DRM project I'd worked on at NatWest and Microsoft TV, both of which had died along the way. HP provided the hardware for the cinema servers and for a datacenter whose rooms full of servers still make me unreasonably proud.

We were an Italian startup inventing digital cinema distribution, somehow sitting at tables with Microsoft, HP, Eutelsat, Disney, Technicolor, Kodak. People who, on paper, shouldn't have returned our calls.

The distribution logic was identical to what Netflix now calls innovation: files pre-distributed overnight to local nodes, encrypted content arriving before consumption time, decryption keys sent at the right moment. We just used satellites instead of terrestrial internet, because in 2006 that was the only way to reach cinemas reliably.
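
In code, that pattern reduces to something like the sketch below: the heavy encrypted payload travels early, the tiny decryption key travels late, and a screening is only possible once both have arrived. The class and the byte strings are mine, for illustration; the real system lived in satellite scheduling and DRM tooling, not a few lines of Python.

```python
# Illustrative sketch of "content early, keys on time".
class CinemaNode:
    def __init__(self):
        self.encrypted_films: dict[str, bytes] = {}  # staged days in advance
        self.keys: dict[str, bytes] = {}             # small payloads, sent near showtime

    def receive_film(self, title: str, encrypted_blob: bytes) -> None:
        self.encrypted_films[title] = encrypted_blob  # overnight satellite push

    def receive_key(self, title: str, key: bytes) -> None:
        self.keys[title] = key

    def can_screen(self, title: str) -> bool:
        return title in self.encrypted_films and title in self.keys


node = CinemaNode()
node.receive_film("friday_premiere", b"...hundreds of encrypted gigabytes...")
assert not node.can_screen("friday_premiere")  # the content alone is useless
node.receive_key("friday_premiere", b"demo-key-material")
assert node.can_screen("friday_premiere")
print("ready for the Friday evening screening")
```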

But we also understood something that seems to have escaped Netflix: pre-distribution only works for scheduled content.

For live events, we built a second track. Standard DVB-S satellite streaming. One signal, infinite receivers. The same technology television has used for fifty years. Since Disney was the only major willing to collaborate and we needed to fill programming gaps, we integrated live opera broadcasts. The Metropolitan, La Scala, Covent Garden. It worked perfectly.

Two parallel systems on the same infrastructure. Pre-distribution for films. Broadcast streaming for live. Not because we were smarter, but because we couldn't afford to get it wrong.

The pattern

What I see here isn't incompetence. Netflix employs brilliant engineers who know exactly what they're doing. The pattern is something more insidious: excellence in one domain becoming structural blindness in another.

When you've invested fifteen years and a billion dollars building the world's best on-demand distribution system, every problem starts looking like an on-demand distribution problem. Your strength becomes your cognitive prison.

In that Roman café in 2005, we didn't have a billion dollars. We didn't have thousands of engineers. We didn't have the presumption that one solution could cover every use case.

We only had the necessity of making things work with whatever was available. And that necessity forced us to see what seems to escape those with infinite resources: different problems require different solutions.

Microcinema Spa went through various funding rounds, ownership changes, and bond issues before eventually being liquidated once it had achieved its purpose. Digital cinema distribution had become the industry standard. Mission accomplished.

Netflix will solve the problem eventually. They have the resources, the talent, the economic motivation. But watching them stumble over the same obstacle for the second consecutive year reminds me that a billion dollars can buy many things.

It cannot buy the clarity of those who have nothing yet to protect.