In his NY Times “Bits” blog, Steve Lohr writes today about Netflix’s $1 million contest to improve the software that predicts which movies customers will like. The winning and runner-up teams were multi-national alliances of 7 and 30 people, respectively. One of the leaders of the runner-up team had an interesting observation:
Yet the sort of sophisticated teamwork deployed in the Netflix contest, it seems, is a tricky business. Over three years, thousands of teams from 186 countries made submissions. Yet only two could breach the 10-percent hurdle. “Having these big collaborations may be great for innovation, but it’s very, very difficult,” said Greg McAlpin, a software consultant and a leader of the Ensemble. “Out of thousands, you have only two that succeeded. The big lesson for me was that most of those collaborations don’t work.” [Emphasis added.]
Of course, that was then — in future contests, such as the new one Netflix is starting, we can expect teams to try to glean lessons from the successes and failures of the first one.
Oh. My. God.
The runner-up team leader’s argument is a poorly drafted straw man. He’s pissed that he didn’t win and wants to blame it on “big collaborations” that “don’t work.”
Um… just because only two of the teams were able to meet Netflix’s requirements doesn’t mean that the “big collaborations” didn’t work. They simply weren’t successful at meeting the requirements. That could be the result of any number of things: Netflix’s already-robust algorithm, the nature of the challenge itself (refining movie suggestions), even the intelligence of the teams working the problem.
The real lesson is that if you want to play or compete in the open market, there are going to be times when you DON’T WIN … and when only a handful of all the actual participants can possibly be winners.