While the paper focuses on personalized search for movies, the techniques discussed are applicable to other types of search.
The authors start with some motivation that probably sounds familiar to readers of this blog:
Recommender systems are widely used ... to overcome information overload ... Information retrieval systems list relevant items ... only if a user asks for it ... Recommender systems predict the needs of a user ... and recommend items ... even though the user does not specifically request it.

I like the technique MAD6 uses for personalized search. They use an "item-based collaborative filtering algorithm to calculate a user's expected ratings" on items in search results, fill in any gaps with average ratings from the general population, then re-rank the items.
We build a prototype personalized movie search engine called MAD6 ... MAD6 combines both information retrieval and collaborative filtering techniques for better search and navigation.
For example, if a searcher rated Terminator and Terminator 2 highly, the personalized search results would first order the search results by relevance to the search terms and popularity, then re-rank Terminator, Terminator 2, and anything related to those two movies higher in the search results. In the example in the paper, this resulted in the top 5 search results for a query for [arnold action] being Terminator 2, Commando, True Lies, Last Action Hero, and Terminator.
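As a concrete illustration, here is a minimal sketch of that re-ranking step. The function names and data structures are my own, not from the paper: expected ratings come from a standard item-item collaborative filtering prediction (a similarity-weighted average of the user's existing ratings), cold-start gaps fall back to the population average, and the results are sorted by the resulting score.

```python
def predict_rating(user_ratings, item, similarity):
    """Item-based CF: similarity-weighted average of the user's ratings
    on items similar to `item`. Returns None when no similar item was rated."""
    num = den = 0.0
    for rated_item, rating in user_ratings.items():
        sim = similarity.get((item, rated_item), 0.0)
        if sim > 0:
            num += sim * rating
            den += sim
    return num / den if den else None  # None signals a cold-start gap

def personalized_rerank(results, user_ratings, similarity, avg_rating):
    """Re-rank search results by expected rating, filling gaps with the
    general population's average rating, as MAD6 is described as doing."""
    def score(item):
        predicted = predict_rating(user_ratings, item, similarity)
        return predicted if predicted is not None else avg_rating.get(item, 0.0)
    return sorted(results, key=score, reverse=True)
```

So a searcher who rated Terminator highly would see items similar to it (say, Commando) pushed above items the recommender knows nothing about, which would keep only their population-average rank.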
As the authors report, this order was significantly different from the norm for a search for [arnold action] on other search engines. In their tests, they found their personalized rank performed very well on navigational queries -- when people already know what they are looking for -- but not as well on less directed informational queries.
The paper explains why (where GRank is a general rank that orders by popularity, PRank is the personalized search, and Web is a Web search):
When navigational queries are submitted, participants are more satisfied with PRank and Web than GRank. However, when informational queries are submitted, participants prefer GRank rather than PRank and Web.

This result is unfortunate. A goal of recommender systems is to enhance discovery of unfamiliar items. If PRank is performing poorly on informational queries, it is failing at this task.
One possible explanation is that, when participants submit navigational queries, they may have very clear target movies in their minds. These movies may be their favorites and are more likely rated before the test.
However, when informational queries are submitted, participants may not have clear target movies and [fewer] returned items ... [may] be rated ... Then ... the item-based algorithm may be inaccurate due to the lack of user information ... The item-based algorithm suffers from a cold start problem. We believe users' satisfaction of PRank will increase as users provide more ratings.
This result is surprising to me, though. It should be possible to tune PRank to only modify the rankings when it has sufficient evidence that the change would be an improvement, otherwise falling back to GRank. First do no harm. PRank should only make a change when the majority of people will see the change as an improvement.
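A crude version of that "first do no harm" gate might look like the sketch below. This is my own construction, not something from the paper: count how many (result, rated item) pairs have any similarity evidence, and only apply the personalized ranking when that count clears a threshold; otherwise serve the general popularity ranking unchanged.

```python
MIN_EVIDENCE = 3  # hypothetical threshold; would need tuning in practice

def choose_ranking(results, user_ratings, similarity, grank, prank):
    """Apply PRank only when there is enough similarity evidence linking
    the user's past ratings to the current results; else fall back to GRank."""
    evidence = sum(
        1
        for item in results
        for rated_item in user_ratings
        if similarity.get((item, rated_item), 0.0) > 0
    )
    return prank(results) if evidence >= MIN_EVIDENCE else grank(results)
```

With no ratings (or no overlap between ratings and results), the personalized layer simply gets out of the way, which would address the cold-start failure the authors describe on informational queries.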
More generally, it should be possible to tune the recommender to favor serendipity and enhance discovery in informational queries while also supporting re-finding in navigational queries. Serendipity largely reflects the amount of surprise in the recommendations -- pushing away from the popular and toward the unusual -- while re-finding is merely surfacing or annotating items seen before. It should be possible to do both.
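To make that concrete, here is a toy scorer of my own (again, not from the paper) showing how one ranking function could serve both goals: on informational queries it rewards less-popular items to push toward the surprising, while on navigational queries it boosts items the user has already seen or rated to support re-finding.

```python
def blended_score(relevance, popularity, seen_before, informational):
    """Toy scorer: serendipity for informational queries, re-finding for
    navigational ones. `popularity` is assumed normalized to [0, 1]."""
    if informational:
        # Push away from the popular and toward the unusual.
        return relevance + 0.5 * (1.0 - popularity)
    # Navigational: surface or boost items seen before.
    return relevance + (1.0 if seen_before else 0.0)
```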
A very interesting paper and a worthwhile read. I love the approach of layering an item-based recommender on top of search results to create a form of personalized search (Findory made an attempt at doing something similar in web search). By looking at specific actions, the personalized search not only can act at a finer level of detail than Google Personalized Search, but can also adapt immediately to short-term trends, to what you are searching for right now.
Also, this paper, like many others, cites Sarwar et al., 2001 as the first work on item-based collaborative filtering. As I have said before, that may not be accurate.
Update: Seung-Taek Park in the comments gave an alternative location (PDF) for downloading the paper. I changed the link at the beginning of this post to point directly to the PDF file. Thanks, Seung-Taek!