Sunday, October 30, 2005

Google wants to change advertising

In the New York Times today, Saul Hansell writes that "Google Wants to Dominate Madison Avenue".

Some selected excerpts:
Eric Schmidt ... [says] that advertising should be interesting, relevant and useful to users. "Improving ad quality improves Google's revenue ... If we target the right ad to the right person at the right time and they click it, we win."

This proposition, he continued, is applicable to other media. "If we can figure out a way to improve the quality of ads on television with ads that have real value for end-users, we should do it," he said. While he is watching television, for example, "Why do I see women's clothing ads?" he said. "Why don't I see just men's clothing ads?"

[Google] says it has not connected the vast dossier of interests and behavior to specific users .... For now, the only personal information Google says it considers is the user's location.
Advertising is content; it is information about products and services. Advertising should be useful and interesting, not annoying and irrelevant.

Personalized advertising, targeted to individual interests, will be a big step forward. If we can get to the point where advertisements are helpful and relevant, telling readers about products and services they actually want to know about, people will stop ignoring ads.

Update: John Battelle has a similar article in the SJ Mercury News on personalization and personalized advertising:
You've seen the ads along the right side and top of Google -- they are usually extremely relevant to the term you typed into the search box. Why are they so good? Because Google watches what you type and tries to match ads to your stated intent.

But what if Google and others knew what you had searched for before, or other sites you had been to, or other purchases you made? Now that's custom advertising.

Imagine that the relevance of the ads, or other services offered to you, is based not just on your keywords in your searches, but also the content of your e-mail, or the knowledge of where you've been recently on the Internet, what you have done, and what you found worth your time?

Google and Yahoo are already working on these services, with the goal of being able to customize more and more to the individual, or at least to demographic or behavioral clusters of similar individuals.
Right now, advertising is annoying. It is irrelevant and useless. If I don't have a cat, I'm not interested in an ad for cat food. It doesn't matter how obnoxious you make the ad using popups, flash, or whatever annoyances you can conjure up. I'm not interested.

If you're going to have to show me advertising, at least show me something I might like. You know who I am. Don't waste my time. Show me something useful.

Thursday, October 27, 2005

Facebook and building social networks

Jeff Clavier posts some remarkable metrics on Facebook.com. Apparently, 93% of their 5M registered users visit at least once a month. They get 5.5B page views per month. Wow!

Mark Zuckerberg (Founder & CEO of Facebook) has some interesting thoughts on why his social networking site succeeded where others have failed:
[Mark] thinks that [other sites] have not focused on providing a set of utilities to their audience, they were merely about creating connections.
Exactly. Social networking sites like Orkut or Friendster have no purpose. Sure, it's fun. You go there and, in a flurry of activity, set up your profile and list all your friends. It's always good for a little ego pump.

But, then what is there to do with your social network? There's no purpose, no reason to come back, nothing to do.

Social networks should be a tool to solve some other problem. The idea should be, "Now that I have listed all my friends, I can come back to do incredibly cool thing X every day." That's why sites like Facebook succeed where so many others have failed.

Getting the crap out of user-generated content

Xeni Jardin has an article up on Wired called "Web 2.0 Cracks Start to Show".
Web 2.0 is very open, but all that openness has its downside: When you invite the whole world to your party, inevitably someone pees in the beer.

These days, peed-in beer is everywhere. Blogs begat splogs -- junk diaries filled with keyword-rich text to lure traffic for ad revenue ... Experiments in participatory media attract goatses as quickly as they do legitimate entries, like the Los Angeles Times' experimental wiki, which was pulled after it was defaced.
Websites hosting user-generated content need to be designed with the idea that much of the content will be crap or spam.

Small sites work dandy when they're only used by early adopters. Early adopters are dedicated, so the quality of the content is high. Traffic is low, so spammers don't care about them.

As they grow, as traffic increases and the product starts to attract a mainstream audience, the incentive for spam goes up. Suddenly, there's a profit motive, an ability to reach a wide audience at low cost. The spam floods in.

Using captchas -- the "Are you human?" test -- is one approach to dealing with spam that is discussed in the article, but it doesn't help shovel through all the uninteresting crap that is generated by real humans.

Other techniques for getting the crap out include editor review of recent changes (Wikipedia, Craigslist), asking your users to report abuse (Craigslist, MSN Spaces, Blogger), user moderation (Slashdot), relevance rank to suppress poor content (Google Blog Search, all web search engines), and personalization to elevate content that interests you (Findory).
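As a sketch of how a site might combine several of these signals into a single ranking, here is a toy model; the weights and field names are invented for illustration, not any real site's algorithm:

```python
# Combine hypothetical per-item signals -- relevance rank, user moderation,
# abuse reports, and editor review -- into one quality score used to rank
# or suppress user-generated content.

def quality_score(item):
    """Higher is better; scores are clamped at zero."""
    score = item.get("relevance", 0.5)            # from a relevance ranker
    score += 0.1 * item.get("upvotes", 0)         # user moderation (Slashdot-style)
    score -= 0.5 * item.get("abuse_reports", 0)   # "report abuse" clicks
    if item.get("flagged_by_editor"):             # editor review of recent changes
        score = 0.0
    return max(score, 0.0)

items = [
    {"id": "post1", "relevance": 0.8, "upvotes": 3},
    {"id": "spam1", "relevance": 0.2, "abuse_reports": 5},
]
ranked = sorted(items, key=quality_score, reverse=True)
```

The point of combining signals is that no single one is reliable: captchas miss human-generated junk, and abuse reports arrive slowly, but together they separate the good from the crap reasonably well.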

Any site dealing with user-generated content should be using these techniques. There is wisdom in that crowd, but you're going to have to dig to find it.

Saturday, October 22, 2005

Scoble interviews MSN Search geeks

Scoble has a one hour interview with Erik Selberg and Andy Edmonds of MSN Search.

The best parts for me were when they touched on their unusual efforts to use neural networks to improve relevance rank.

For more on that, see my previous post, "MSN Search and Learning to Rank", where I talk about a paper out of Microsoft Research by Burges et al. called "Learning to Rank using Gradient Descent".

[via Gary Price]

Friday, October 21, 2005

Personalized news in print

Anna-Maria Mende at the Editors Weblog reports that a startup in Germany will be attempting to print individually customized newspapers on demand.
The idea is to offer articles from different newspapers and magazines in one newspaper. On a website the reader chooses in the evening what he wants to read the next morning. The order goes to the printing house where the individual papers are printed.
The claim is that digital printing is fast and cheap enough to make this practical. That would be pretty interesting if true.

This kind of mass customization -- creating unique items on demand like custom clothing or logowear -- is fascinating, but there are some things you can only do online.

A truly personalized newspaper is impossible in the physical world. Print cannot learn what articles each reader finds interesting and change their front page in real time the way Findory does. Some opportunities only exist in the online world.

Joel on Web 2.0

I love Joel Spolsky's rant on the Web 2.0 buzzword:
The term Web 2.0 particularly bugs me. It's not a real concept. It has no meaning. It's a big, vague, nebulous cloud of pure architectural nothingness. When people use the term Web 2.0, I always feel a little bit stupider for the rest of the day.

Not only that, the very 2.0 in Web 2.0 seems carefully crafted as a way to denigrate the clueless "Web 1.0" idiots, poor children, in the same way the first round of teenagers starting dotcoms in 1999 dissed their elders with the decade's mantra, "They just don't get it!"
"A big, vague, nebulous cloud of pure architectural nothingness." Heh, heh.

But Joel is right. Technocrats can't even define the term "Web 2.0". That puts it firmly in the meaningless buzzword camp.

Update: Dare Obasanjo also flips the bozo bit on Web 2.0. He says, "The 'web 2.0' meme isn't about technology or people, it's about money and hype."

Findory growth Q3 2005

I'm a little late with this one because of my trip earlier this month, but here's the traffic graph for Findory that includes data from the latest quarter.

For the first time, Findory's growth shows some mild signs of slowing from the exponential rate we've seen in past quarters, but we're still growing at a healthy pace. Good to see it.

The graph is of total hits per quarter on the Findory.com website. Total hits on Findory.com in Q3 2005 were 9M. Findory launched early January 2004.

See also the graph from the previous quarter.

Thursday, October 20, 2005

Paul Graham's ideas for startups

Paul Graham has another excellent essay out, this one on "Ideas for Startups".

Some selected excerpts:
Startup ideas are not million dollar ideas, and here's an experiment you can try to prove it: just try to sell one ... The fact that there's no market for startup ideas suggests there's no demand ... that startup ideas are worthless.

Most startups end up nothing like the initial idea ... The main value of your initial idea is that, in the process of discovering it's broken, you'll come up with your real idea.

You have to start with a problem, then let your mind wander just far enough for new ideas to form .... Finding the problem intolerable and feeling it must be possible to solve it. Simple as it seems, that's the recipe for a lot of startup ideas.

The best way to solve a problem is often to redefine it ... Redefining the problem is a particularly juicy heuristic when you have competitors, because it's so hard for rigid-minded people to follow. You can work in plain sight and they don't realize the danger.

What you want to be able to say about technology is: it just works. How often do you say that now? Simplicity takes effort -- genius, even.

The best way to generate startup ideas is to do what hackers do for fun: cook up amusing hacks with your friends ... The best way to get a "million dollar idea" is just to do what hackers enjoy doing anyway.
Great advice from Paul Graham.

Much of this is true for Findory. We saw a problem. In the flood of information out there, people can't find the news they need. We redefined the problem. Perhaps you shouldn't have to find news, instead the news you need should come to you. We hacked away on a solution. Findory learns from what you read and helps you find other interesting articles. It's simple to use and it just works.

It's even true that the initial idea for Findory is different from what Findory is now. Findory started in personalized search and switched to personalized news.

Great stuff, Paul.

See also my earlier post, "Chris Sacca on geeks and startups", with Chris' thoughts on Paul's startup school.

Wednesday, October 19, 2005

Chris Sacca on geeks and startups

Chris Sacca (biz dev and counsel at Google) posts some thoughts on Paul Graham's startup school and adds some useful nuggets of wisdom for entrepreneurial geeks.

Some excerpts from his advice:
Don't waste a lot of time writing business plans or strategic roadmaps ... Instead of spinning wheels, just start coding.

The user experience can always be improved ... Start with what is broken today. Fix it, and you will be richly rewarded.

Stay cheap through [the] demo.

We can't forget that is actually the geeks who rule technology ... All the venture money in the world is no substitute for talent.

There is nothing like a free breakfast/lunch/dinner to bring folks together, loosen them up, and encourage sharing, debate, and brainstorming.

Open source software ... allows you to scale faster and leverages the collective expertise of developers around the world to advance your project. [It] also maximizes the chances that your code will integrate well with an eventual acquirer.
And yes, it's true, geeks do rule.

Don Dodge on Altavista and the search war

Don Dodge, former Director of Engineering at Altavista, posts some interesting thoughts on why AltaVista failed and the current battle between Google and MSN.

First, some excerpts on AltaVista:
The AltaVista experience is sad to remember. We should have been the "Google" of today. We were pure search, no frills, no consumer portal crap.

DEC is guilty of neglect in its handling of AltaVista. Compaq put a bunch of PC guys in charge who relied on McKinsey consultants and copied AOL, Excite, Yahoo and Lycos into the consumer portal game. It should have been clear that being the 5th or 6th player in the consumer portal business wouldn't work. AltaVista spent hundreds of millions on acquisitions that never worked, and spent $100M on a brand advertising campaign. They spent NOTHING to improve core search. That was the undoing of AltaVista.
AltaVista was the best, fastest search out there. Then they stopped focusing on search. Their index wasn't updated frequently. Their search slowed down. Quality dropped. They created an opportunity for someone who would focus on search.

Some question whether Google is going down the same path, losing its focus on core search because of the shiny distractions of free e-mail, social networks, instant messaging, and other portal-like goodies. Don seems concerned as well, ending with a warning to Google:
The last chapter has not been written in the search game. Microsoft MSN Search is every bit as good as Google in terms of size, speed, and relevance. Microsoft has come from behind several times before. I wouldn't bet against them now. MSN Search is just getting started.
As with AltaVista, I think MSN created the opportunity for Google by neglecting search in MSN and on the desktop for so many years. When you're the default search in the default browser in the default operating system, you have to be pretty bad to push so many people to go through the effort of switching.

But the difference is that MSN can recover from this mistake. With the vast resources of the Microsoft juggernaut behind them, they can rebuild MSN Search. Over the last year, they have been doing so, building their own search engine, getting the quality up, and, just as important, focusing on user perception of the quality.

And user perception is important. While I don't agree with Don that MSN Search is as good as Google, I don't think it has to be. It only has to be good enough: good enough that people stop bothering to switch their defaults because, as they see it, it doesn't really matter anymore which engine they use, so they might as well use the one that's already set up.

That's the challenge in front of Google. Not only do they have to be better than MSN Search, they have to be much better than MSN Search, so much better that people keep switching. If they trip, if they slow, if they get distracted by shiny things, MSN will catch up, and the game will be over.

Monday, October 17, 2005

Findory interviewed on eHub

I have an interview about Findory posted on eHub.

Verisign acquires Moreover

Verisign just bought Moreover, the news aggregator.

The Moreover news database is used by Yahoo News, My Yahoo, MSN, MSN Newsbot, Ask Jeeves News, and many other news sites, though not by Google News or Findory.

Congratulations, Jim!

Update: In the comments, Scott Gatz from Yahoo says that Yahoo is no longer using Moreover and that the Yahoo News database is now homegrown.

Inform and personalized news

Bob Tedeschi at the New York Times writes about a new personalized news startup called Inform.

According to the article, Inform hopes to build "the ultimate newspaper of the future" using personalization.
[Inform] will not only let you find articles on the topic of your choice from hundreds of newspapers and magazines, it will also alert you to all the other news accounts floating around cyberspace that have any connection whatsoever to anything you read.

As a user reads a WashingtonPost.com article about Sandra Day O'Connor, for example, Inform offers a short list of related stories about the justice and other people, places, organizations, topics, industries and products mentioned in the text.

When users register with the site, Inform will also watch what they read and make suggestions on their home pages based on past sessions.
I wasn't actually able to get the personalization to work when I tried it. I registered, read a few articles, and went back to the home page. The home page hadn't changed. Please let me know if you do find a way to get the personalization to appear. I'd like to see it.

As their press release describes, the Inform personalization technology is based on text analysis.
Inform's proprietary technology collects content from thousands of sources and analyzes the entire text using an algorithmic processing engine. Through this process, Inform systematically tags and scores each component of the article, identifying every topic, industry, organization, person, place and product mentioned throughout the entire article.
The problem with this technique is that the recommendations are often obvious and uninteresting. When you read an article on Iraq, you'll tend to be recommended other articles on Iraq. Using information about what readers actually do, as Findory does, allows more interesting recommendations, like noticing that people who read stories on Iraq also read stories about North Korea.
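That reader-based approach is, in spirit, item-to-item collaborative filtering. A minimal sketch, with invented reading histories and article names:

```python
# Count how often two articles appear in the same reading history, then
# recommend the articles most often co-read with a given one.
from collections import Counter
from itertools import combinations

# Hypothetical reading histories: each entry is one user's set of articles.
histories = [
    {"iraq-1", "north-korea-1"},
    {"iraq-1", "north-korea-2"},
    {"iraq-1", "iraq-2"},
]

cooccur = Counter()
for read in histories:
    for a, b in combinations(sorted(read), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(article, k=2):
    """Articles most often read by people who also read `article`."""
    related = Counter({b: n for (a, b), n in cooccur.items() if a == article})
    return [b for b, _ in related.most_common(k)]

print(recommend("iraq-1"))
```

Note that nothing here looks at the text of the articles, which is exactly why it can surface non-obvious connections like Iraq readers also reading about North Korea.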

Personalized news seems to be heating up lately. I've heard of 3-4 startups in the last couple months that already have released or soon will be releasing products for personalized news. Exciting!

Update: A blisteringly negative review of Inform from Rafat Ali at PaidContent.org. Ouchie.

Update: Apparently, Inform is not a small startup. The company employs 55 people. Findory, in comparison, has 2.

Sunday, October 16, 2005

Attention and life hacking

Clive Thompson at the New York Times has a long article, "Meet the Life Hackers", about attention and information overload.

I particularly enjoyed the interview with Eric Horvitz from Microsoft Research on his work on attention.
Eric Horvitz ... has been building networks equipped with artificial intelligence (A.I.) that carefully observes a computer user's behavior and then tries to predict that sweet spot - the moment when the user will be mentally free and ready to be interrupted.

Horvitz booted the system up to show me how it works. He pointed to a series of bubbles on his screen, each representing one way the machine observes Horvitz's behavior. For example, it measures how long he's been typing or reading e-mail messages; it notices how long he spends in one program before shifting to another ... The A.I. program will ... [also] eavesdrop on him with a microphone and ... a Webcam, to try and determine how busy he is, and whether he has company in his office.

In the early days of training Horvitz's A.I., you must clarify when you're most and least interruptible, so the machine can begin to pick up your personal patterns. But after a few days, the fun begins - because the machine takes over and, using what you've taught it, tries to predict your future behavior.

Horvitz clicked an onscreen icon for "Paul," an employee working on a laptop in a meeting room down the hall ... Paul, the A.I. program reported, was currently in between tasks - but it predicted that he would begin checking his e-mail within five minutes. Thus, Horvitz explained, right now would be a great time to e-mail him; you'd be likely to get a quick reply. If you wanted to pay him a visit, the program also predicted that - based on his previous patterns - Paul would be back in his office in 30 minutes.

[Another] program ... code-named Priorities, analyzes the content of your incoming e-mail messages and ranks them based on the urgency of the message and your relationship with the sender, then weighs that against how busy you are. Superurgent mail is delivered right away; everything else waits in a queue until you're no longer busy.

Perhaps if we gave artificial brains more control over our schedules, interruptions would actually decline - because A.I. doesn't panic. We humans are Pavlovian; even though we know we're just pumping ourselves full of stress, we can't help frantically checking our e-mail the instant the bell goes ding. But a machine can resist that temptation, because it thinks in statistics. It knows that only an extremely rare message is so important that we must read it right now.
I love it. These kinds of tools don't hide information from you; they only prioritize information, giving you a way to quickly focus. It's designed to help you pay attention to what matters right now and avoid frivolous interruptions.
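As a rough illustration of the idea behind Priorities (a toy model, not Horvitz's actual system; all weights and names here are invented):

```python
# Score a message by urgency and sender relationship, weigh that against
# how busy the user is, and interrupt only when the value clears a threshold.

def should_interrupt(urgency, sender_closeness, busyness, threshold=0.5):
    """All inputs in [0, 1]; return True if the message is worth an interruption."""
    value = 0.6 * urgency + 0.4 * sender_closeness  # value of delivering now
    cost = busyness                                  # cost of interrupting
    return value - cost > threshold

# Superurgent mail from a close colleague while mildly busy: deliver now.
print(should_interrupt(urgency=1.0, sender_closeness=0.9, busyness=0.2))  # True

# Routine mail while deep in work: hold in the queue until later.
print(should_interrupt(urgency=0.3, sender_closeness=0.2, busyness=0.9))  # False
```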

If you want more details, Eric has several published papers on his work including "Models of Attention in Computing and Communication" and "Attention-Sensitive Alerting".

Friday, October 14, 2005

The Google feed reader and relevance

Google announced their new feed reader at the Web 2.0 conference.

Sadly, reaction to the new service has been negative, at least in part due to Google's failure to build the service to perform quickly with the sudden influx of demand.

What I find most interesting about the service is that the unusual default view in the reader is to show recent posts from all of your feeds ordered by "relevance". According to the Google Reader FAQ, this ordering of the articles "prioritizes the items that seem most relevant to you."

More details are lacking. Jeff Clavier seems to have additional information and says, "The relevance is based on an analysis of your blogroll, and surfaces posts that relate to your areas of interest (this is reportedly work in progress)."

Very interesting and surprisingly similar to what Findory recently launched. Findory's feed reader is unusual in that the default view shows top stories from your feeds ordered by recency and relevance.

Findory's definition of "relevance" is based on what you have read in the past. How does Google's feed reader determine relevance?

Offhand, I would guess that it is simply the same relevance used for Google's blog search. That is, it favors recent articles from authoritative and reputable weblogs. I don't think individual reading behavior influences relevance yet, though I wouldn't be surprised if they later decided to favor feeds you read recently.

I very much doubt that Google uses the detailed data about what articles you have read for relevance like Findory does, but perhaps Google will move in that direction someday.
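For illustration, a minimal sketch of what "relevance based on reading history" might look like in a feed reader; the decay constant, weights, and field names are all invented:

```python
# Order feed items by a blend of recency and the user's reading history:
# recent items score higher, and items from feeds the user reads often
# get a boost.
import math
import time

def item_score(item, reads_per_feed, now=None):
    """Favor recent items from feeds this user has read often."""
    now = now or time.time()
    age_hours = (now - item["published"]) / 3600.0
    recency = math.exp(-age_hours / 24.0)            # decays over roughly a day
    affinity = reads_per_feed.get(item["feed"], 0)   # past reads of this feed
    return recency * (1.0 + math.log1p(affinity))

now = time.time()
reads = {"favorite-blog": 40, "rarely-read": 1}
items = [
    {"feed": "rarely-read", "published": now - 3600},        # newer
    {"feed": "favorite-blog", "published": now - 6 * 3600},  # older but well-read
]
items.sort(key=lambda i: item_score(i, reads, now), reverse=True)
```

Even this crude blend puts the older post from a heavily-read feed ahead of a fresher post from a feed the user barely touches, which is the behavior a history-based reader wants.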

For startups, Google is the new Microsoft?

Dare Obasanjo posts some parting thoughts on the Web 2.0 conference, including that it seems that:
Google has replaced Microsoft as the company that Silicon Valley companies love to hate because it enters nascent markets and dominates them.
We at Findory have experienced this firsthand. I have been surprised to hear VCs tell us that they wouldn't fund Findory because it eventually might compete against Google.

But I think Dare overstates things by casting Google as the new bogeyman. The rapid rate of Web 2.0 innovation, some of which is coming from the search giants, creates uncertainty and risk. Some fear that uncertainty, some embrace it.

See also my previous posts, "Yahoo and being underfoot" and "Sucking up the talent".

Yahoo blog search launches

Yahoo launched blog search, integrating it into their news search as a sidebar.

More thoughts and analysis from John Battelle and Nathan Weinberg.

Update: Charlene Li posts some interesting thoughts on Yahoo Blog Search. Great point on the value of blogs for breaking news.

How many feeds matter?

Jim Lanzone at Ask Jeeves posts that, according to Bloglines' data, only 37k feeds "really matter."

The details of the data are interesting. Only 1.3M feeds have even one subscriber. 37k feeds have at least 20 subscribers. 20 subscribers might be a somewhat high bar, but it does separate out feeds that are useful to some people from feeds that are useful to very few or none. The data does not distinguish feeds for weblogs from feeds for news sources like Wired or BBC. The data does not include the few popular weblogs that do not publish a feed.

It's a remarkable contrast to Technorati's claim (on their home page) that there are 19.4M weblogs out there. I and others have argued that the 19.4M weblogs estimate is utterly absurd, inflated by millions of spam and bogus weblogs.

At Findory, our experience has been that 95% or more of weblogs are fake. I suspect the number of real, interesting, and useful weblogs to be well under 100k. The new data from Bloglines support this lower estimate.

Thanks, Jim, for providing this data. Very interesting.

[via Gary Price]

Update: Mark Cuban complains about a massive increase in spam blogs.

Update: Dave Sifry at Technorati repeats his claim that only "2% - 8% of new weblogs are fake or spam weblogs." See also Dave's thoughts in the comments for this post.

Update: Rich Skrenta at Topix.net says, "What we're seeing is that 85-90% of the daily posts hitting ping services such as weblogs.com are spam. Of well-ranked non-spam blogs that we've discovered, we've found about half haven't been updated in the past 60 days."

Can Web 2.0 mashups be startups?

Dare Obasanjo from MSN posted some interesting comments from a panel at Web 2.0 that included Paul Rademacher, the author of the cool Housing Maps mashup that combines housing listings from Craigslist with Google Maps.

When Paul was asked why he didn't try to make Housing Maps a startup, he apparently gave two reasons he decided not to:
  1. He did not own the data that was powering the application.
  2. The barrier to entry for such an application was low since there was no unique intellectual property or user interface design to his application.
It's a great point. Mashups are simple combinations of other people's web services. They're often done in a flash of inspiration in a weekend of hard coding.

The ideas are often clever and useful but, once someone has demonstrated a mashup idea, there is little to stop others from duplicating it. If there is no novel technology and no proprietary data, there are no barriers to entry. The software is merely an intermediary connecting two other pieces of software. There is minimal value add; most of the value lies in the underlying services.

Mashups by themselves cannot be a startup. While startups can leverage existing web services to avoid recreating the wheel, there has to be something else, some additional data, some additional software, some novel technology, in the service for it to be a business.

Update: Five months later, alarm:clock reports that VCs are "dissing mashups .... because they are not readily defensible."

MySQL, InnoDB, and Oracle

Tim O'Reilly and Jeremy Zawodny comment on the recent Oracle acquisition of Innobase Oy and the implication for MySQL.

Innobase Oy produces the transactional InnoDB storage engine used by MySQL. While the default in MySQL is the super-fast MyISAM engine, that engine does not support transactions and does not directly compete with Oracle.

Tim, Jeremy, and many others are concerned about this move by Oracle. I do think this is a big issue for MySQL.

Fortunately for developers, there are other open source transactional database products available including the widely used Postgres.

Thursday, October 13, 2005

Findory in Spiegel

Thomas Hillenbrand at the German publication Spiegel writes about personalized news.

Google Desktop Search and Findory are the main examples. Findory seems to come out well in the comparison.

If your German is as bad as mine, you can get a very rough gist of it from a translated version.

Monday, October 03, 2005

Missing the Web 2.0 conference

Unfortunately, I will not be able to attend the Web 2.0 conference.

I wish I could be there -- it looks fantastic -- but I'll be off on my 10th anniversary trip, which also should be a lot of fun.

The conference sounds great, but I do find one thing a bit odd. Given that Web 2.0 is part of the conference name, it is strange that there has been such difficulty coming up with a clean definition of Web 2.0.

While much shorter than his first attempt, Tim O'Reilly's compact definition is still awkward and verbose:
Web 2.0 is the network as platform, spanning all connected devices; Web 2.0 applications are those that make the most of the intrinsic advantages of that platform: delivering software as a continually-updated service that gets better the more people use it, consuming and remixing data from multiple sources, including individual users, while providing their own data and services in a form that allows remixing by others, creating network effects through an "architecture of participation," and going beyond the page metaphor of Web 1.0 to deliver rich user experiences.
There have been other definitions, focusing on broadband, RSS, small companies, and other things. Seems to me that all of these attempts are trying too hard.

I think Web 2.0 is this period of rapid innovation and experimentation. We are learning how to use the connectivity and community of the Web to help people get the information they need.

I don't think it requires using a particular UI technology like AJAX. I don't think it requires using a particular data format like RSS. I don't think it requires broadband. I don't think it requires mobile devices.

And, I don't think it requires being small. In fact, the best definition I've seen of Web 2.0 comes from one of the search giants, Google. Google wants to organize the world's information to make it helpful and useful.

They, the other search giants, and many tiny startups are all furiously innovating, trying to learn how to help people get the information they need. I think that exploration, that learning, that innovation, that is Web 2.0.

Again, I wish I could be there. Enjoy the conference! I look forward to talking to people about it afterward.

Update: See also my later post, "Joel on Web 2.0".

Update: I like this definition of Web 2.0 by Neil Gunton on Slashdot: "The whole Web 2.0 thing is just an attempt by someone to sum up the resurgence of the internet post-dot-bust of 2000."

Search as a dialog and personalized search

John Battelle posted an interesting quote from Gary Flake (formerly of Yahoo Research, now at Microsoft) on how people search:
[Gary] wished searchers were more ... sure of what they wanted, and willing to engage in a dialog with the search engine. Most, it turns out, are not.
If there is going to be a dialog, it needs to be something easy, something that helps searchers without requiring any effort on their part.

Current search engines treat each search as independent, ignoring the valuable information about what you just did, what you just found or failed to find. Paying attention to that history should allow search to become more relevant, more useful, and more helpful, all with no effort from searchers.

In his new book, John has another quote from Gary Flake about using the wisdom of the crowds to improve search results:
"You can learn a lot by watching the statistical patterns of search usage and leveraging that in algorithms," notes Gary Flake ... "We use a very large corpora (body of data) to identify sets of tactical and grammatical properties of language."

The result: search has the potential to get better and better, the more people use it.
While Gary is talking about the kind of techniques used for automated spelling correction, machine translation, and question answering, the techniques can also be used to learn what individual people think is and is not relevant, to pay attention to the history of what people have done, to do personalized search.
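A toy example of this kind of log mining: inferring likely spelling corrections from pairs of consecutive queries in a session. The session data and names here are invented for illustration:

```python
# When many users retype a query one way after first typing it another way,
# the reformulation is probably a correction of the original.
from collections import Counter

# (first query, immediate reformulation) pairs from hypothetical sessions.
reformulations = [
    ("brittny spears", "britney spears"),
    ("brittny spears", "britney spears"),
    ("britny spears", "britney spears"),
    ("java", "java tutorial"),
]

corrections = Counter(reformulations)

def suggest(query):
    """Most common reformulation users made after typing `query`, if any."""
    candidates = [(n, after) for (before, after), n in corrections.items()
                  if before == query]
    return max(candidates)[1] if candidates else None

print(suggest("brittny spears"))  # britney spears
```

The same machinery, keyed per user instead of aggregated across all users, is one path toward the personalized search discussed above.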

Interview on Technosight

Ken Yarmosh at Technosight posted an interview with me about Findory.