Tuesday, May 29, 2007

Peter Fleischer on Google personalization

Googler Peter Fleischer recently wrote an article for the Financial Times on Google's personalization.

Much of the article is on privacy issues -- Peter is Google's "Global Privacy Counsel" -- but I wanted to highlight his thoughts on the benefits of personalization and the future of search:
Our search algorithm is pretty sophisticated and most people end up with what they want. But there is inevitably an element of guesswork involved.

An algorithm ... built to take into account an individual's preferences ... has much more chance of guessing what that person is looking for. Personalised search uses previous queries to give more weight to what each user finds relevant to them in its rankings.

If you think of search as a 300 chapter book, we are probably still only on chapter three. There are enormous advances to be made. In the future users will have a much greater choice of service with better, more targeted results. For example, a search engine should be able to recommend books or news articles that are particularly relevant - or jobs that an individual user would be especially well suited to.
When a searcher enters only a few keywords, any additional information could help. By looking back at what the searcher has done before, search engines can better determine intent and interest. By using search and web history, search engines can help get people the information they need faster and with less effort.
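As a toy illustration of that idea -- my own sketch, nothing to do with Google's actual algorithm -- a search engine might blend a base relevance score with the fraction of a searcher's past clicks that match each result's topic:

```python
from collections import Counter

def personalize_ranking(results, user_history, history_weight=0.2):
    """Re-rank search results by boosting topics the user has clicked before.

    A toy sketch: `results` is a list of (doc_id, base_score, topic) tuples and
    `user_history` is a list of topics from past clicks. Real systems use far
    richer signals, but the idea of blending a base relevance score with a
    per-user preference score is the same.
    """
    topic_counts = Counter(user_history)
    total = sum(topic_counts.values()) or 1

    def score(result):
        doc_id, base_score, topic = result
        preference = topic_counts[topic] / total  # fraction of past clicks on this topic
        return base_score * (1.0 + history_weight * preference)

    return sorted(results, key=score, reverse=True)

# Example: a searcher who mostly clicks programming pages sees the programming
# result for an ambiguous query like "python" boosted past the animals result.
results = [("snake-care", 0.80, "animals"), ("python-tutorial", 0.78, "programming")]
history = ["programming", "programming", "animals", "programming"]
print(personalize_ranking(results, history))
```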

A minor point, but I do want to quibble with Peter's final example where he said "in the future ... a search engine should be able to recommend books or news articles." If Google wants to build a search engine that can recommend books or news articles, it need not write its own chapters in the book of search. It need only look to the chapters already written by others.

[FT article found via the Official Google Blog and Jeremy Pickens]

Update: If you liked this post, the post from last week, "Personalization the most important part of Google's expansion", with quotes on personalization from Google CEO Eric Schmidt, may also be of interest.

Tribler adds recommendations to BitTorrent

Janko Roettgers at NewTeeVee writes about the Tribler research project which recommends torrents based on your previous downloads:
You downloaded the same movie as two other people? There’s a good chance that you’ll also like other downloads these folks have in common. The longer a client is connected, the more files and users it is able to discover, and your recommendations get better with every download as well.
From the Tribler FAQ:
Recommendations are made using a collaborative filtering algorithm.

This algorithm will compare your download history to that of the peers you meet. If the peer has torrents in its download history that you have not downloaded it will recommend them to you.

The recommendation value assigned to the torrent and shown in the Recommendation window depends on how similar the peer's download history is to yours. So the higher the value, the more you are predicted to like it.
There are more details on the recommender algorithm on the Decentralized Recommendation page and in two papers, "Tribler: A social-based peer-to-peer system" (PDF) and "Distributed Collaborative Filtering for Peer-to-Peer File Sharing Systems" (PDF).
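Based only on that FAQ description, the heart of the algorithm could be a user-user collaborative filter over download histories. A rough sketch of the shape of it (my guess, not Tribler's actual code):

```python
def jaccard(history_a, history_b):
    """Overlap between two download histories (sets of torrent hashes)."""
    a, b = set(history_a), set(history_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(my_history, peer_histories, top_n=10):
    """Score torrents that peers have but I don't, weighted by how similar
    each peer's download history is to mine -- roughly the user-user
    collaborative filtering the Tribler FAQ describes."""
    my_set = set(my_history)
    scores = {}
    for peer in peer_histories:
        similarity = jaccard(my_set, peer)
        for torrent in set(peer) - my_set:
            scores[torrent] = scores.get(torrent, 0.0) + similarity
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The longer the client runs, the more peer histories it sees, which matches the claim above that recommendations get better the longer you are connected.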

See also an old Jan 2005 Wired article by Clive Thompson where he wrote:
What exactly would a next-generation broadcaster look like? ... The network of the future will resemble Yahoo! or Amazon.com - an aggregator that finds shows, distributes them in P2P video torrents, and sells ads or subscriptions to its portal.

The real value of the so-called BitTorrent broadcaster would be in highlighting the good stuff, much as the collaborative filtering of Amazon and TiVo helps people pick good material.

Eric Garland, CEO of the P2P analysis firm BigChampagne, says, "the real work isn't acquisition. It's good, reliable filtering. We'll have more video than we'll know what to do with. A next-gen broadcaster will say, 'Look, there are 2,500 shows out there, but here are the few that you're really going to like.' We'll be willing to pay someone to hold back the tide."

MSN Search keeps on dropping

John Battelle reports that Microsoft continues to lose ground against Google:
According to comScore ... in April 07 .... MSN posted 60bps share loss facing Google's strong share gain and came in at 10.3%.
See also my Nov 2006 post, "Google dominates, MSN Search sinks".

reCaptcha and human computation

reCaptcha is a cute idea, trying to turn all the "prove you are a human" tests on the Web into useful work.

From their "What is reCaptcha?" page:
About 60 million CAPTCHAs are solved by humans around the world every day. In each case, roughly ten seconds of human time are being spent. Individually, that's not a lot of time, but in aggregate these little puzzles consume more than 150,000 hours of work each day.

What if we could make positive use of this human effort? reCAPTCHA does exactly that by channeling the effort spent solving CAPTCHAs online into "reading" books.

reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher ... Each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA.
Very clever and very fun.
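As I understand the published descriptions, the scheme pairs a word the system already knows with one the OCR could not read; only the known word can actually be checked, and the answer for the unknown word is recorded as a vote toward its transcription. A minimal sketch of that idea (my own illustration, not reCAPTCHA's code):

```python
import random

def make_challenge(known_words, unknown_words):
    """Pair a control word (with a known answer) with an OCR failure.

    `known_words` is assumed to be a list of (image, correct_text) tuples and
    `unknown_words` a list of image ids with no known transcription. The user
    solves both; we can only verify the control word.
    """
    control = random.choice(known_words)
    unknown = random.choice(unknown_words)
    return control, unknown

def check_answer(control, control_answer, unknown, unknown_answer, votes):
    """If the control word is right, treat the user as human and record their
    reading of the unknown word as a vote toward its transcription. Enough
    agreeing votes would eventually settle the unknown word's text."""
    if control_answer.strip().lower() != control[1].lower():
        return False  # failed the CAPTCHA
    votes.setdefault(unknown, []).append(unknown_answer.strip().lower())
    return True
```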

For more on Luis Von Ahn's work, see the discussion and links to talks and papers in my earlier post, "Human computation and playing games".

[reCaptcha found via O'Reilly Radar]

Thursday, May 24, 2007

Personalization the most important part of Google's expansion

Caroline Daniel and Maija Palmer at the Financial Times quote Google CEO Eric Schmidt talking about personalization:
Eric Schmidt ... said gathering more personal data was a key way for Google to expand and the company believes that is the logical extension of its stated mission to organise the world's information.

Asked how Google might look in five years' time, Mr Schmidt said: "We are very early in the total information we have within Google. The algorithms will get better and we will get better at personalisation."

"The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'"

The race to accumulate the most comprehensive database of individual information has become the new battleground for search engines as it will allow the industry to offer far more personalised advertisements. These are the holy grail for the search industry, as such advertising would command higher rates.

Schmidt ... [said]: "We cannot even answer the most basic questions because we don't know enough about you. That is the most important aspect of Google's expansion."

Google personalised search ... [uses] what [searchers] have searched and clicked on ... to create more personalised search results for them.

Another service under development is Google Recommendations – where the search suggests products and services the user might like, based on their already established preferences.
Some of Eric's words may have been poorly chosen -- "total information" and "we don't know enough about you" are phrases that play into fears of an intrusive Google overlord -- but I think Eric's goal really is noble, to use personalization to help people find the information they need.

By learning from people's past behavior, Google can disambiguate their intent. Different people will see different information based on their needs and preferences. Like the good friend that sends you links that might interest you, Google will adapt to you and help you find what you need.

Even for advertising, personalization can mean helpful, targeted efforts to help you find products and services you might actually like rather than spamming mass audiences with annoying and irrelevant crap. As Eric said back in Oct 2005, "Advertising should be interesting, relevant and useful to users."

See also my Nov 2005 post, "Is personalized advertising evil?".

For more on personalized search, some of my previous posts, especially "Personalization is hard. So what?", "Personalization, intent, and interest", and "Personalized search yields dramatically better search results", may be of interest.

Tuesday, May 22, 2007

Universal search, Google, and A9

David Bailey (ex-Amazon, now at Google) and Johanna Wright discuss Google's new universal search effort -- bringing image, video, news, and other vertical search results into the web search results -- on the Official Google Blog.

Gary Price makes the point that others have walked down this road before, at least a bit down it, but Google's universal search does sound to me like a broader effort, an attempt to surface data from verticals on many more queries and perhaps even eliminate the need to go separately to the verticals.

From David and Johanna's post:
Finding the best answer across multiple content types is a well-known hard problem in the search field.

Until now, we've only been able to show news, books, local and other such results at the top of the page ... [but] often we end up not showing these kinds of results even when they might be useful.

If only we could smartly place such results elsewhere on the page when they don't quite deserve the top, we could share the benefits of ... [results from verticals] much more often.

Although it's just a beginning ... now you'll be able to get more information Google knows about directly from within the search results. You won't have to know about specialized areas of content.
One thing I find interesting is comparing this universal search effort at Google with the federated search done by A9.

A9 allowed searching many databases, but searchers had to manually select which data sources to use, and results were not merged. A9 punted on the hard problems in federated search: query routing and relevance ranking of the merged results.

Now, Google appears to be taking some of these problems on, trying to determine which of their verticals should be searched given the likely intent behind a query and which results from the disparate data sources are most likely to be relevant.

Of course, Google's universal search is not federated search. Google prefers their own copy of data and is unlikely to hit external sources. But, the problems of query routing and relevance rank of merged results share much in common with the problems that need to be solved for Google's universal search.
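To make the shared problems concrete, here is a toy sketch -- mine, not Google's or A9's -- of query routing and of merging results from several sources into a single ranked list:

```python
def route_query(query, vertical_classifiers):
    """Pick which verticals to search, given per-vertical scores of how likely
    the query is to want that kind of result (query routing).

    `vertical_classifiers` is assumed to map a vertical name to a function
    that scores the query between 0 and 1.
    """
    return [name for name, classify in vertical_classifiers.items()
            if classify(query) > 0.5]

def merge_results(vertical_results, vertical_confidence, top_n=10):
    """Interleave results from several sources into one ranked list.

    Each source's scores are weighted by how confident we are that the source
    is relevant to this query. The hard part in practice is that relevance
    scores from different engines are not directly comparable, which is
    exactly the part A9 punted on.
    """
    merged = []
    for vertical, results in vertical_results.items():
        weight = vertical_confidence.get(vertical, 0.0)
        for doc, score in results:
            merged.append((doc, weight * score))
    merged.sort(key=lambda pair: pair[1], reverse=True)
    return merged[:top_n]
```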

It is curious that Udi Manber, former CEO of A9, is now leading Google core search. Perhaps he is finishing the work at Google that he never finished at A9.

Killing AdSense

Nick Carr argues that "Google ... is vulnerable to a pricing attack on its AdSense service" and that Yahoo or Microsoft should "introduce a free version of AdSense":
Immediately, you put a lot of pricing pressure on an important source of revenues and profits for Google.

Turning the delivery of contextual ads into a free service for publishers would put Google under financial pressure ... but it wouldn't cause any harm, of a material nature, to Microsoft.

Turning the AdSense market into a free market would help neutralize that edge and generally redefine the competitive dynamic to Microsoft's benefit.
Rather than trying to play the advertising game, Microsoft could seek to end the game. It could seek to eliminate the high margins advertising brokers, like Google, currently enjoy.

No surprise that I agree with Nick on this one given that I have made similar arguments in the past. Back in Dec 2005, I wrote "Kill Google, Vol. 2", where I said:
AdSense revenues -- revenues from ads placed on other sites -- may be particularly vulnerable to attack. This was 43% of Google's revenue in Q3 2005. With these ads, the owner of the site gets roughly 70% of the revenue from the ad. Google takes the other 30%.

It seems like Microsoft could do a fair amount of damage here by trying to drive the share the advertising engine takes in this deal to near zero. To do that, it just needs to launch its own AdSense-like product and be willing to set its take to its breakeven point.
A few months later, I wrote "Kill Google, Vol. 3", where I said:
If I want to beat Google? I would throw everything I have got at an AdSense killer.

AdSense is now about half of Google's revenue and their future growth. Microsoft should strangle Google's air supply, their revenue stream.

Microsoft should use its cash reserves to make being an advertising provider unprofitable for others, lowering ad broker revenue share to near 0%.

Microsoft should ruin AdSense, undermine it, destroy it. There should be no business in AdSense-like products for anyone.

If Microsoft wants to win, it should play to its strengths. It should not seek to change the game. It should seek to end the game.
Of course, since Microsoft just spent $6B to try to compete in the advertising market, killing the market may be less attractive than it was in the past. Too bad. It might have been a good way to save $6B.

Universal search and personalization

Gord Hotchkiss posts some good thoughts on how personalization will be part of Google's universal search:
Personalized search is the engine [that] is going to drive universal search.

When you look at the wording Google throws around about the on-the-fly ranking of content from all the sources for Universal Search, that's exactly the same wording they use for the personalization algorithm.

[Google's personalized search] operates on-the-fly, looks at the content in the Google index and re-ranks it according to the perceived intent of the user, based on search history, Web history and other signals. It's not a huge stretch to extend that same real-time categorization of content across all of Google's information silos.

As Google gains more confidence in disambiguating user intent, more specific types of search results, extending beyond Web results, will get included on the results page and presented to the user.
Gord makes a good point. When talking about universal search, Marissa Mayer motivated it by saying, "The best answer is still the best answer."

But, the best answer for me is not necessarily the best answer for you. Especially in cases where there is ambiguity in intent and multiple possible verticals that might be relevant, the deciding factor could be what I have done in the past, my search and web history.

If Google is not personalizing universal search already, it likely is only a matter of time before they start.

A $6B admission of failure

Of all the commentary on Microsoft's $6B acquisition of aQuantive, I agree most with Mini-Microsoft:
Six billion dollars. $6,000,000,000 USD. Holy crap.

This is a huge demonstration of fear, desperation, and dim-dog market tail-light chasing greed on [Microsoft's] part. Every acquisition represents our failure to use our 70,000+ employee base to solve a solution or create a new market.

Seeing how much aQuantive is aligned with us and our technology stack makes you on the surface say, "hmm, yes, excellent strategic acquisition." Then you see the price tag.
It is an expensive move that reeks of desperation. For four years, Microsoft was unable to build what they now have to buy. This acquisition is a costly admission of failure.

See also my Nov 2006 post, "Google dominates, MSN Search sinks", where I said, "It really is remarkable how badly Microsoft is doing against Google." Those same words come to mind now.

Saturday, May 12, 2007

Future of Search event at Berkeley

Matthew Hurst posts a summary of the Future of Search event that was hosted at UC Berkeley on May 4.

Looks like it was a great event. Speakers included Peter Norvig, Oren Etzioni, Marti Hearst, Andrei Broder, and Eric Brill. Unfortunately, neither video nor slides from the talks appear to be available.

Mike Love also attended and has a short writeup.

Thursday, May 10, 2007

Google News Personalization paper

I probably started drooling when I first noticed this paper by four Googlers, "Google News Personalization: Scalable Online Collaborative Filtering" (PDF), which is being presented at the WWW 2007 conference this weekend.

The paper does not disappoint. It is an awesome example of what can be done at Google scale with their data, traffic, and massive computational cluster.

The paper tested three methods of making news recommendations on the Google News front page. From the abstract:
We describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts.
MinHash and PLSI are both clustering methods: a user is matched to a cluster of similar users, and then the system looks at the aggregate behavior of users in that cluster to find recommendations. Covisitation is an item-based method that computes which articles people tend to look at if they looked at a given article (i.e., "Customers who visited X also visited...").
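For readers unfamiliar with MinHash, the trick is that users with similar click histories tend to hash to the same cluster key, so you never have to compare every pair of users. A bare-bones sketch of the idea (the paper's version uses many more hash functions and runs over MapReduce; this is only to show the core):

```python
import hashlib

def minhash(item_set, seed):
    """The minimum hash value over a set; two sets agree on this value
    with probability equal to their Jaccard similarity."""
    def h(item):
        return int(hashlib.md5(f"{seed}:{item}".encode()).hexdigest(), 16)
    return min(h(item) for item in item_set)

def cluster_key(click_history, num_hashes=3):
    """Concatenate several MinHash values into a cluster id. Users who clicked
    many of the same stories are likely to land in the same cluster, so
    recommendations can come from their cluster-mates' clicks."""
    return tuple(minhash(click_history, seed) for seed in range(num_hashes))

alice = {"story1", "story2", "story3", "story4"}
bob = {"story2", "story3", "story4", "story5"}
# Similar histories share a cluster key with probability related to their
# Jaccard similarity; dissimilar histories almost never do.
print(cluster_key(alice), cluster_key(bob))
```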

The paper does a nice job motivating the use of recommendations for news:
The Internet has no dearth of content. The challenge is in finding the right content for yourself: something that will answer your current information needs or something that you would love to read, listen or watch.

Search engines help solve the former problem; particularly if you are looking for something specific that can be formulated as a keyword query.

However, in many cases, a user may not even know what to look for ... Users ... end up ... looking around ... with the attitude: Show me something interesting.

In such cases, we would like to present recommendations to a user based on her interests as demonstrated by her past activity on the relevant site .... [and] the click history of the community.
The authors then explain that what makes this problem so difficult is doing it at scale in real-time over rapidly changing data.
Google News (http://news.google.com) is visited by several million unique visitors ... [and] the number of ... news stories ... is also of the order of several million.

[On] Google News, the underlying item-set undergoes churn (insertions and deletions) every few minutes and at any given time the stories of interest are the ones that appeared in last couple of hours. Therefore any model older than a few hours may no longer be of interest.

The Google News website strives to maintain a strict response time requirement for any page views ... [Within] a few hundred milliseconds ... the recommendation engine ... [must] generate recommendations.
The authors note that, while Amazon.com may be doing recommendations at a similar scale and speed, the item churn rate of news is much faster than that of Amazon's product catalog, making the problem more difficult and requiring different methods.

The Googlers did several evaluations of their system, but the key result is that two versions that combined all three of the approaches in different ways generated 38% more clickthroughs than just showing the most popular news articles. That seems surprisingly low compared to our results for product recommendations at Amazon.com -- we found recommendations generated a couple of orders of magnitude more sales than just showing top sellers -- but it is still a good lift.

On the reason for the lower lift, I finished the paper with a couple of questions about parts of their work.

First, it appears that the cluster membership is not updated in real-time. When discussing PLSI, the authors say they "update the counts associated with that story for all the clusters to which the user belongs" but that this is an approximation (since cluster membership cannot change in this step) and does not work for new users (who have no cluster memberships yet).

This is a typical problem with clustering approaches -- building the clusters usually is an expensive offline computation -- but it seems like a cause for concern if their goal is to offer "instant gratification" to users by changing the recommendations as soon as they click on new articles.

Second, it appears to me that the item-based covisitation technique will be biased toward popular items. In describing that algorithm, the authors say, "Given an item s, its near neighbors are effectively the set of items that have been covisited with it, weighted by the age discounted count of how often they were visited."

If that is right, this calculation would seem to suffer from what we used to call the "Harry Potter problem", so-called because everyone who buys any book, even books like Applied Cryptography, probably also has bought Harry Potter. Not compensating for that issue almost certainly would reduce the effectiveness of the recommendations, especially since the recommendations from the two clustering methods likely also would have a tendency toward popular items.
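A standard way to compensate is to score item pairs by how much more often they co-occur than chance would predict, so universally popular items stop dominating. A toy sketch of that lift-style correction (my own illustration, not the paper's algorithm, and not what we did at Amazon either):

```python
from collections import Counter
from itertools import combinations

def covisitation_scores(user_sessions):
    """Score item pairs by observed co-occurrence divided by the co-occurrence
    expected if the items were independent. Raw covisitation counts would
    recommend the most popular item to everyone -- the "Harry Potter problem"
    described above."""
    item_counts = Counter()
    pair_counts = Counter()
    for session in user_sessions:
        items = set(session)
        item_counts.update(items)
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] += 1

    n = len(user_sessions)
    scores = {}
    for (a, b), together in pair_counts.items():
        expected = item_counts[a] * item_counts[b] / n  # expected under independence
        scores[(a, b)] = together / expected  # > 1 means more often than chance
    return scores
```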

I have to say, having worked on this problem with Findory for the last four years, I feel like I know it well. Findory needs to generate recommendations for hundreds of thousands of users (not yet millions) in real-time. New articles enter and leave the system rapidly; newer stories tend to be the stories of interest. The recommendations must change immediately when someone reads something new.

This work at Google certainly is impressive. I would have loved to have access to the kind of resources -- the cluster, MapReduce, BigTable, the news crawl, the traffic and user behavior data, and hordes of smart people everywhere around -- that these Googlers had. Good fun, and an excellent example of the power of Google scale for trying to solve these kinds of problems.

Powerset CEO talk at UW CS

Barney Pell, the CEO of Powerset, gave a talk at UW CS on natural language search. Video of the talk now is available.

I attended the talk. I was hoping for details on Powerset's technology and a live demo, but, unfortunately, the talk was much higher level than that. It mostly covered motivation for natural language search and why the market timing was right. I have to say, it had the feel of an investor pitch.

The most compelling part of the talk for me was when Barney was talking about the value of NLP for extracting additional information from a small data set. For example, Barney compared the performance of Powerset's alpha product running over Wikipedia with Google limited to searching over Wikipedia on several questions (e.g. "Who did IBM acquire in 2003?" and "When did Katrina strike Biloxi?").

On the one hand, these examples might not be fair to Google, since Google gains its power from its massive index; Google is crippled by not being allowed to reach far and wide to answer questions. On the other hand, there are many applications where all that is available is a small data set (e.g. newspapers, health, product catalogs), and in those problems there is considerable value in maximizing your understanding of that data.

The least compelling part for me was the hyping of the technology Powerset licensed from Xerox PARC, especially when Barney appeared to suggest that this technology means NLP is largely a solved problem:
The fundamental problems we were really worried about -- you know, problems like how do you deal with ambiguity, how do you deal with open vocabulary, how can you be robust in the face of noise and erroneous things, how can you be applied to multiple languages and these kind of things, how can you be computationally efficient at all -- took a really long time and, while they are not just all completely done, the fundamental challenges that they had seen for all that time were basically resolved.
It would be nice if the fundamental challenges in NLP were basically resolved, but I do not believe that is the case.

I do agree with the motivation behind Powerset. Especially for verticals, better understanding of smaller data sets would be useful.

I also agree that bloating indexes with data summarizing NLP extractions is a promising approach, despite the 100x longer index build times and 10x increase in index sizes that Barney said may be required. Computers are more powerful and massive clusters are becoming cheaper to acquire. The computational power to do these tasks is at hand.

I am not sure I agree with Barney when he said a linguistics approach to NLP is more likely to bear fruit than a statistical approach. More thoughts on that in my previous post, "Better understanding through big data".

I also have to say I was confused at several points in Barney's talk about whether Powerset was seeking better question answering or trying to do something bigger. Some of his examples seemed like they would not only require understanding query intent and the information on a single web page, but also might require understanding, synthesizing, and combining noisy and possibly conflicting data from multiple sources. The latter is a much harder problem, but Barney seemed to be suggesting that Powerset was taking it on.

In the end, the talk did not address my concern that Powerset is overpromising in the press and is likely to underdeliver. What I would really like to do is play with a live Powerset demo, perhaps Powerset powering Wikipedia search or the search for a major newspaper, and see more details behind the technology. For now, I remain worried that the pitch is running far ahead of the product.

Update: Six months later, Powerset has a management shakeup, losing its COO and having its CEO, Barney Pell, step down to CTO due to a "slip in the company's delivery date of its product."

New recommendations in Yahoo Travel

Yahoo Travel launched a few nifty new personalization and recommendation features.

The site now tracks your search and viewing history and makes prominent recommendations for flights, trips, and hotels on the front page. In my case, it appeared to focus on one item in my history, viewing information about a city in Maui, and made several recommendations of other things in Hawaii.

The feature is a nice use of implicit personalization, making the Yahoo Travel site more useful and feel more friendly without requiring any work.

In my usage, the recommendations seemed a little too commercial, more like advertisements than helping me discover useful information, and a little too broad, showing recommendations for trips to New York and Los Angeles rather than focusing on my interests, but that might be a nit pick.

Greg Sterling at Search Engine Land posted a review of the new features that has some good details.

Bill Gates on IPTV

Todd Bishop at the Seattle PI reports on a speech by Bill Gates at Microsoft's Strategic Account Summit online advertising conference.

I thought these excerpts on the future of TV were particularly interesting:
TV [is changing] from being a simply broadcast medium to being a targeted medium .... Having every household in America watching a different video feed has become practical.

It's a dramatic change in TV. ... Broadcast infrastructure over these next five years will not be viewed as competitive. The end-user experience and the creativity, the new content that will emerge using the capabilities of this environment will be so much dramatically better that broadcast TV will not be competitive.

And in this environment, the ads will be targeted, not just targeted to the neighborhood level, but targeted to the viewer. ... We'll actually not just know the household that that viewing is taking place in, we'll actually know who the viewers of that show are ... It's a very rich environment.
See also my August 2005 post, "Personalization for TV", that discusses some of Mark Cuban's and Chris Anderson's thoughts on IPTV and the need for discovery and personalization.

See also my Dec 2004 post, "BitTorrent, Internet TV, and personalization", exploring a Wired article on IPTV by Clive Thompson.

See also my Jan 2006 post, "A Google personalized TV ad engine?", that looks at thoughts from Robert Cringely about how TV advertising could be less annoying if it was more targeted.

Future directions for web search

In his article, "Top 17 Search Innovations outside of Google", Nitin Karandikar has a good summary of the most promising directions for improving web search.

Included are brief discussions of natural language, visualizations, verticals, deep web, social search, question answering, query refinement, and personalization.

[Found via Don Dodge]

Dot Bomb 2.0?

Commenting on some of the sky high acquisition prices for Internet companies with little or no revenues, Don Dodge writes:
I grew up in Maine, and I am reminded of the negotiations between two farmers from Maine at the county fair.

One farmer was showing off his "blue ribbon" dog and proposing to sell it for $100,000. The other farmers were laughing hysterically at the idea of a $100K dog. Dogs don't produce income. How could a dog be worth $100K?

Then one farmer stepped up and offered to trade two of his $50,000 cats for the $100K dog. The dog owner quickly agreed and bragged to all his friends how he sold his dog for $100K.

Acquisitions that are done as stock swaps are obviously not the same as cash transactions. Public companies often use their stock as trading currency for acquisitions since it has no cash impact on their business.

However, stock transactions dilute the value of other shareholders, sometimes significantly. In a rising stock market no one really notices because the steady share price increase masks the dilution. When the stock market turns the problems are exposed...and magnified.

We have seen this before. It was the nuclear winter that lasted from 2000 to 2003. It is amazing how quickly we forget. As I always say "fear is temporary...greed is permanent".
Another round of absurd deals, hustling inflated stocks, and, eventually, a rush for the exits as everyone tries to avoid being the greater fool? Are we on our way to Dot Bomb 2.0?

MBAs and Google's erosion from within

Paul Kedrosky writes:
At the same time as a number of engineering friends of mine are leaving Google -- most common complaint: too big and bureaucratic -- MBAs have declared it to be their employer of choice.
See also some of Paul's past posts ([1] [2] [3]) that speculate that MBA hiring may be a negative indicator for future performance.

See also my previous post, "Google and those TPS reports", that includes thoughts from Googler Chris Sacca and a brief tidbit about my experience with the influx of MBAs at Amazon.com.

Wednesday, May 09, 2007

Esther Dyson on the future of search

Laurie Petersen reports on a keynote by Esther Dyson at the Search Insider Summit.

Some excerpts:
Dyson said, "I don't see the quality of search improving very much. Search is like telling a dog, 'Go Fetch,' I want something to 'Go Fetch and Reserve' [as in the right hotel room.]"

What's needed, she said, is switching from a "search and fetch" mentality to a "deliver, act and transact" perspective based on personalization.

The real winner, Dyson said, will be a custom-built tool that understands the nuance of an individual, his or her phrasing, and specific likes and dislikes. This tool will incorporate both domain knowledge and user knowledge.
However, what Esther is suggesting may go well beyond search. It would seem to enter into the realm of software agents.

To "deliver, act, and transact", not only would the computer need to understand intent, but also it may have to come up with complicated plans to satisfy the request. It would have to have a rich understanding of information acquired, combine information from multiple sources, interact with external actors, and deal with uncertainty in information, actions, and intent.

At this point, you are basically talking about building a software robot, a softbot. Building the brains behind a software servant that can deliver, act, and transact is going to be about as hard as building the brains behind a hardware robot servant. Dropping motor skills does not ease the task of building higher brain functions.

A hard problem indeed. And one we are a long, long way from solving.

Wednesday, May 02, 2007

Explicit vs. implicit data for news personalization

A paper at the upcoming WWW 2007 conference, "Open User Profiles for Adaptive News Systems: Help or Harm?" (PDF), concludes that allowing users to edit profiles used for news personalization can result in worse personalization.

From the paper:
Despite our expectations, our study didn't confirm that the ability to view and edit user profiles of interest in a personalized news system is beneficial to the user. On the contrary, it demonstrated that this ability has to be used with caution.

Our data demonstrated that all objective performance parameters are lower on average for the experimental system. It includes system recommendation performance as well as precision and recall of information collected in the user reports.

Moreover, we found a negative correlation between the system performance for an individual user and the amount of user model changes done by this user. While the performance data vary between users and topics, the general trend is clear – the more changes are done, the larger harm is done to the system recommendation performance.

The results of our study confirmed the controversial results of Waern's study: the ability to change established user profiles typically harms system and user performance.
The paper was not clear on exactly why editing the profiles made the personalization worse, but I would look to what Jason Fry at the Wall Street Journal wrote several months back:
When it comes to describing us as customers and consumers, recommendation engines may do the job better than we would.

In other words, we lie -- and never more effectively than when we're lying to ourselves ... I fancy myself a reader of contemporary literature and history books, but I mostly buy "Star Wars" novels and "Curious George" books for my kid.
As I wrote after seeing Jason's article, "Implicit data like purchases may be noisy, but it also can be more accurate. You may say you want to watch Academy Award winners, but you really want to watch South Park. You may say you want to read Hemingway, but you really want to read User Friendly."

See also an April 2004 post where I said, "When you rely on people to tell you what their interests are, they (1) usually won't bother, (2) if they do bother, they often provide partial information or even lie, and (3) even if they bother, tell the truth, and provide complete information, they usually fail to update their information over time."

Tuesday, May 01, 2007

Management and total nonsense

I finished reading "Hard Facts, Dangerous Half-Truths, and Total Nonsense" a few weeks ago.

It is a book by Stanford Business School Professors Jeffrey Pfeffer and Robert Sutton arguing for evidence-based management.

Evidence-based management advocates making decisions based on the latest and best knowledge of what actually works. Yes, you might think this would not be controversial, but, sadly, it is in the world of business.

Early on, the authors state their position:
We believe that managers are seduced by far too many half-truths: ideas that are partly right but also partly wrong and that damage careers and companies over and over again.

Yet managers routinely ignore or reject solid evidence that these truisms are flawed.
Why do managers succumb to these half-truths? In part, it is that they get bad advice.
The advice managers get from the vast and ever-expanding supply of business books, articles, gurus, and consultants is remarkably inconsistent.

Consider the following clashing recommendations, drawn directly from popular business books: Hire a charismatic CEO; hire a modest CEO. Embrace complexity theory; strive for simplicity. Become ... strategy-focused; ... strategic planning ... is of little value.

Consultants and others who sell ideas and techniques are always rewarded for getting work, only sometimes rewarded for doing good work, and hardly ever rewarded for whether their advice actually enhances performance ... If a client company's problems are only partially solved, that leads to more work for the consulting firm.

The senior executive of a human resources consulting firm, for example, told us that because pay-for-performance programs almost never work that well, you usually get asked back again and again to repair the programs your clients bought from you.
But, it is not all just bad advice.
When the late Peter Drucker was asked why managers fall for bad advice and fail to use sound evidence, he didn't mince words: "Thinking is very hard work. And management fashions are a wonderful substitute for thinking."
There is a lot to like in the book, but I found this tidbit on building a culture that promotes learning to be particularly good:
A series of studies by Columbia University's Carol Dweck shows .... that when people believe they are born with natural and unchangeable smarts ... [they] learn less over time. They don't bother to keep learning new things.

People who believe that intelligence is malleable keep getting smarter and more skilled ... and are willing to do new things.

These findings also mean that if you believe that only 10 percent or 20 percent of your people can ever be top performers, and use forced rankings to communicate such expectations in your company, then only those anointed few will probably achieve superior performance.

[Managers should] treat talent as something almost everyone can earn, not that just a few people own.

Having people who know the limits of their knowledge, who ask for help when they need it, and are tenacious about teaching and helping colleagues is probably more important for making constant improvements in an organization.
The authors are no fans of forced rank or pay-for-performance, and they spend a fair amount of time explaining why:
A renowned (but declining) high-technology firm [used] a forced-ranking system ... [where] managers were required to rank 20 percent of employees as A players, 70 percent as Bs, and 10 percent as Cs ... They gave the lion's share of rewards to As, modest rewards to Bs, and fired the Cs.

But in an anonymous poll, the firm's top 100 or so executives were asked which company practices made it difficult to turn knowledge into action. The stacking system was voted the worst culprit.

A survey of more than 200 human resource professionals ... reported that forced ranking resulted in lower productivity, inequity and skepticism, negative effects on employee engagement, reduced collaboration, damage to morale, and mistrust in leadership.

People are more likely to ... see themselves more positively than others see them [and] believe they are above average or not recognize their lack of competence .... People who receive a smaller reward than they expect routinely resent the organization.

A 2004 survey ... of 350 companies showed that "83 percent of organizations believe their pay-for-performance programs are only somewhat successful or not successful at accomplishing their goals."
Similarly, an experiment at HP with "13 different pay programs" in the 1990s found that the "local managers who enthusiastically initiated these pay-for-performance programs ran into difficulties in implementation and maintenance" and soon wanted to abandon them. The experiments "did find that pay motivated performance" but that "the costs weren't worth the benefits" in terms of the "lost trust", "damaged employee commitment", "shift of focus away from the work and toward pay", "infighting about pay", and overhead of managing the programs.

Though HP learned the right lesson from their experiment at the time, this story does not have a happy ending. Years later, Carly Fiorina "forced the system throughout HP, disregarding the evidence gathered by the company itself." The authors' judgment is scathing: "CEO whim, belief, ego, and ideology, rather than evidence, direct what too many companies do and how they do it."

And on this note, Pfeffer and Sutton write that the "belief that leaders ought to be in control is a dangerous half-truth" because "leaders make mistakes -- all people do" and, with "few or no checks or balances", there is no way to correct the inevitable errors.

It's a good book, more grounded than most of the business fluff around these days, and worth reading if you are a manager or play one during the day.

Also, if you haven't read it already, I would strongly recommend Pfeffer's earlier book, "The Human Equation", which I briefly discussed in an old April 2004 post. The older book is tighter and more focused on management and HR practices than the newer book; the newer book is more of an assault on and lamentation of the state of management.

See also my earlier posts, "The problem with forced rank" and "Microsoft drops forced rank, increases perks".

Google on Clickbot.A

Googlers Neil Daswani and Michael Stoppelman wrote a paper, "The Anatomy of Clickbot.A" (PDF), on a clever click fraud attack against Google using a botnet.

From the paper:
This paper presents a detailed case study of the Clickbot.A botnet. The botnet consisted of over 100,000 machines ... [and] was built to conduct a low-noise click fraud attack against syndicated search engines.

Some computers [in the botnet] were infected by downloading a known trojan horse. The trojan horse disguised itself as a game .... It does not slow down a machine or adversely affect a machine’s performance too much. As such, users have no incentive to disinfect their machines of such a bot.

Several tens of thousands of IP addresses of machines infected with Clickbot.A were obtained. An analysis of the IP addresses revealed that they were globally distributed ... The IP addresses also exhibited strong correlation with email spam blacklists, implying that infected machines may have also been participating in email spam botnets as well.

[Since] conducting a botnet-based click fraud attack directly against a top-tier search engine might generate noticeable anomalies in click patterns, Clickbot.A attempted to avoid detection by employing a low-noise attack against syndicated search engines.
This is a pretty impressive attempt. Since the attacker disguises itself as a large number of independent users, this type of click fraud would be a challenge to detect.

I have to wonder whether there could be other, more clever attackers using similar methods that are slipping by undetected. For example, using a larger botnet would make the fraud more difficult to detect. It does appear that several tens of thousands of IP addresses is not huge for a botnet; one was discovered back in 2005 that had 1.5M machines.

I also suspect an attacker could make the fraud pattern more challenging to find by mimicking normal search and ad click patterns most of the time, especially on otherwise idle machines, which would stand out if they did nothing but fraudulent clicks.
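As a purely hypothetical illustration of that last point -- this has nothing to do with Google's actual filters -- a defender might flag IP addresses whose traffic is almost nothing but ad clicks, which is exactly the signal an attacker who mimics normal browsing would avoid giving off:

```python
from collections import defaultdict

def suspicious_ips(events, min_events=50, max_click_rate=0.9):
    """Flag IP addresses whose traffic is almost entirely ad clicks.

    `events` is assumed to be an iterable of (ip, event_type) pairs, where
    event_type is something like "ad_click", "search", or "pageview". A real
    system would use many more signals; this only illustrates why an attacker
    who mixes in normal-looking traffic is harder to catch.
    """
    totals = defaultdict(int)
    clicks = defaultdict(int)
    for ip, event_type in events:
        totals[ip] += 1
        if event_type == "ad_click":
            clicks[ip] += 1
    return [ip for ip, total in totals.items()
            if total >= min_events and clicks[ip] / total > max_click_rate]
```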

As the paper says, this type of botnet-based click fraud is only likely to increase. Security researchers like Neil and Michael have their work cut out for them.

Slides on LiveJournal architecture

Brad Fitzpatrick from LiveJournal gave an April 2007 talk, "LiveJournal: Behind the Scenes", on scaling LiveJournal to their considerable traffic.

Slides (PDF) are available and have plenty of juicy details.

One thing I wondered after reviewing the slides -- and this is nothing but a crazy thought exercise on my part -- is whether they might be able to simplify the LiveJournal architecture (the arrows pointing everywhere shown on slide 4).

In particular, there seems to be a heavier emphasis on caching layers and read-only databases than I would expect. I tend to prefer aggressive partitioning, enough partitioning to get each database working set small enough to easily fit in memory.

I know little about LiveJournal's particular data characteristics, but I wonder if aggressive partitioning in the database layer might yield performance as high as a caching layer without the complexity of managing the cache consistency. Databases with data sets that fit in memory can be as fast as in-memory caching layers.

Likewise, I wonder if there would be benefit from dropping the read-only databases in favor of partitioned databases. With partitioned databases, the databases may be able to fit the data they do have entirely in memory; read-only replicas may still be hitting disk if the data is large.

Hitting disk is the ultimate performance killer. Developers often try to avoid database accesses because their database accesses hit disk and are dog slow. But, if you can make your database accesses not hit disk, they can be blazingly fast, so fast that separate caching layers might even become unnecessary.
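To make the partitioning idea concrete, here is a toy sketch of hash partitioning -- my own example, not anything from LiveJournal's slides, and the shard names are made up:

```python
import hashlib

DB_SHARDS = ["db00", "db01", "db02", "db03"]  # hypothetical shard names

def shard_for(user_id):
    """Map a user to one database shard by hashing the key. With enough
    shards, each database's working set is small enough to stay in memory,
    so reads are fast without a separate caching layer."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return DB_SHARDS[int(digest, 16) % len(DB_SHARDS)]

# All of a user's journal data lives on shard_for(user_id); a read touches
# only one database, and there is no cache consistency to manage.
print(shard_for(42), shard_for(12345))
```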

Again, wild speculation on my part. Everything I have seen indicates that Brad knows exactly what he is doing. Still, I cannot help myself from wondering, is there any way to get rid of all those layers?

[Slides found via Sergey Chernyshev]

Update: Another version of Brad's talk with some updates and changes.

Personalized search yields dramatically better search results

Greg Sterling at Search Engine Land, in his article "iGoogle, Personalized Search and You", quotes Google VP Marissa Mayer as saying:
[Personalization is] one of the biggest relevance advances in the past few years.

Personalization doesn't affect all results, but when it does it makes results dramatically better.
See also excerpts from a few recent articles by Search Engine Land columnist Gord Hotchkiss in my earlier post, "Personalization, Google, and discovery".