The paper does a great job of motivating the use of recommendations:
"Online information services have grown too large for users to navigate without the help of automated tools such as collaborative filtering, which makes recommendations to users based on their collective past behavior."

They go on to say search and browse "were labor-intensive" in a sea of 1.5M Orkut communities. The Googlers looked to recommendations to help people discover new communities.
There is an interesting and detailed evaluation of several similarity metrics. L2 normalization (equivalent to cosine similarity) had the best performance in their tests:
"We were surprised that a total order emerged among the similarity measures and that L2 vector normalization showed the best empirical results despite other measures, such as log-odds and pointwise mutual information, which we found more intuitive."

Unfortunately, the paper leaves generating truly personalized recommendations -- different recommendations for each user -- to future work. They say:
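To make the winning measure concrete, here is a minimal sketch of L2 (cosine) similarity between two communities, assuming each community is represented simply as a set of member user IDs (the community names and members below are made up for illustration, not from the paper):

```python
import math

def cosine_similarity(members_a, members_b):
    # With binary membership vectors, the dot product is the size of the
    # intersection, and each L2 norm is the square root of the community size.
    if not members_a or not members_b:
        return 0.0
    overlap = len(members_a & members_b)
    return overlap / math.sqrt(len(members_a) * len(members_b))

# Toy example: communities with overlapping memberships.
jazz = {"alice", "bob", "carol"}
blues = {"bob", "carol", "dave"}
knitting = {"erin"}

print(cosine_similarity(jazz, blues))     # 2 / sqrt(3 * 3) = 0.666...
print(cosine_similarity(jazz, knitting))  # 0.0, no members in common
```

Normalizing by community size is what keeps huge communities from dominating the recommendations just because they overlap with everything.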
"All members of a given community ... see the same recommendations when visiting that community's page .... [But] just as we can estimate communities' similarity through common users, we [could] estimate users' similarity through common community memberships: i.e., user A might be similar to user B because they belong to n of the same communities."

This is where things would get really interesting: building a different page for every user.
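The transposed idea they describe can be sketched the same way: score user-to-user similarity by the number of communities two users share. This is just an illustration of the future work they propose, with made-up users and communities:

```python
from collections import Counter

# Hypothetical membership data: user -> set of communities joined.
memberships = {
    "userA": {"jazz", "blues", "hiking"},
    "userB": {"jazz", "blues"},
    "userC": {"knitting"},
}

def similar_users(user, memberships):
    # Count communities shared with every other user; higher means more similar.
    mine = memberships[user]
    scores = Counter()
    for other, theirs in memberships.items():
        if other != user:
            scores[other] = len(mine & theirs)
    return scores.most_common()

print(similar_users("userA", memberships))  # [('userB', 2), ('userC', 0)]
```

With user similarities like these in hand, the recommendation page could be reranked per user, rather than showing every member of a community the same list.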
On a personal note, Ellen Spertus, the first author on the paper, was a visiting scholar in the AI group at the University of Washington when I was there. Small world. It is fun to bump into her work again.
Update: The video of a June 21, 2006 talk given at Google Kirkland by Ellen Spertus on this work is now available.