The Netflix Prize contest asks participants to design a recommendation system that predicts ratings 10% better than Netflix's own reference system. The grand prize was set at $1 million, and the contest generated plenty of buzz when it launched. People soon realized, however, just how difficult the 10% improvement target set by Netflix really is.
People have tried several approaches. One very popular early approach was "Simon's approach" (Simon Funk's incremental SVD), which is quite similar to the LMS algorithm.
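To see why this approach resembles LMS, note that both nudge their parameters along the prediction error on each sample. Below is a minimal sketch of that style of stochastic-gradient matrix factorization; the function names, toy data, and parameter values are my own illustration, not Funk's actual code or settings.

```python
import random

def funk_svd(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02, epochs=1000):
    """Learn latent user/item factors by SGD over observed ratings.

    Each update moves the factors in proportion to the prediction
    error, the same error-correction shape as the LMS filter update.
    """
    random.seed(0)  # fixed seed so the toy run is reproducible
    P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred  # prediction error, as in LMS
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # gradient step + shrinkage
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    """Predicted rating is the dot product of user and item factors."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))
```

On a tiny ratings list such as `[(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 1, 1), (2, 1, 5)]`, training drives the RMSE on the observed entries well below the error of a constant predictor, which is essentially what contestants were racing to do at scale.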
Researchers are trying several methods to squeeze incremental gains out of the score, spending significant energy, time, and effort to reach the target. Many of them are combining their individual approaches to obtain further improvements.
Yes, after trying several approaches and tweaking algorithm parameters, these researchers might reach the ultimate goal of 10% improvement. But are all these approaches practical to implement? Even if they could be implemented efficiently, do they really give good movie recommendations to real users? And can they be used for recommendations in other domains?
This contest has brought many teams together to collaborate. It has given graduate students a research topic to work on for their theses and publications. More importantly, it has given researchers a common ground on which to compare their individual algorithms.
Finally, I'm not sure how much Netflix has benefited from practical use of the submitted algorithms and approaches. It has certainly benefited by gaining the right to use several algorithms in its recommendation system, and by getting hundreds of researchers to work on its problem for a small amount of money (per researcher per year).