Saturday, February 12, 2005

Discovery in the information stream

Rich Skrenta (CEO, Topix.net) has a long post that touches on the incremental web, streaming information based on keywords vs. topic, human vs. automated aggregators, and discovering content in the long tail. An excerpt:
    There are 4-8 million active blogs now. At this size, you can still "know" the top bloggers, and find new posts worth reading by clicking around. But when the blogosphere grows 100X or 1000X, the current discovery model will break down. You'll need algorithmic techniques like a Findory to channel the most relevant material from the constant flood of new content.
I think it's worse than Rich says. I think the current discovery model has already broken down.

Even if you monitor just a few tens of sources, you face a daily stream of hundreds or thousands of articles. Manually skimming that flood hunting for relevant content is a painful, overwhelming task. There is precious little discovery in the current model.
