According to Eric K. Clemons, a professor of operations and systems management at the Wharton School of the University of Pennsylvania, online ranking systems suffer from a number of inherent biases. The first is deceptively obvious: people who rate purchases have already made the purchase. Therefore, they are disposed to like the product. “I happen to love Larry Niven novels,” Clemons says. “So whenever Larry Niven has a novel out, I buy it. Other fans do, too, and so the initial reviews are very high—five stars.” The high ratings draw people who would never have considered a science-fiction novel. And if they hate it, their spite could lead to an overcorrection, with a spate of one-star ratings.
Such negativity exposes another, more pernicious bias: people tend not to review things they find merely satisfactory. They evangelize what they love and trash things they hate. These feelings lead to a lot of one- and five-star reviews of the same product.
A controlled offline survey of some of these supposedly polarizing products revealed that individuals’ true opinions fit a bell-shaped curve—ratings cluster around three or four, with fewer scores of two and almost no ones and fives. Self-selected online voting creates an artificial judgment gap; as in modern politics, only the loudest voices at the furthest ends of the spectrum seem to get heard.
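The mechanism is easy to simulate. The minimal sketch below uses made-up numbers, not anything from Clemons's survey: true opinions follow a bell curve centered between three and four stars, but a hypothetical self-selection rule makes strong feelings far likelier to be posted, so the published ratings hollow out in the middle and pile up at the extremes.

```python
import random

random.seed(0)

# True opinions: a bell curve centered near three stars, clamped to the
# 1-5 scale (hypothetical numbers for illustration).
true_opinions = [min(5, max(1, round(random.gauss(3.0, 1.0)))) for _ in range(10_000)]

# Hypothetical self-selection rule: the stronger the feeling, the likelier
# the holder of that opinion is to write a review at all.
POST_PROBABILITY = {1: 0.80, 2: 0.15, 3: 0.05, 4: 0.15, 5: 0.80}

posted = [r for r in true_opinions if random.random() < POST_PROBABILITY[r]]

for star in range(1, 6):
    print(f"{star} stars: held by {true_opinions.count(star):5d}, posted {posted.count(star):4d}")
```

Under these assumed probabilities, a population whose opinions cluster around three stars produces a published record dominated by ones and fives.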
Super-reviewers, the prolific regulars who rate nearly everything they consume, have one unassailable advantage, though: they are rarely shills. The deliberate manipulation of review sites by people directly involved with a product—the author of the book, say—is one of the oldest and most difficult problems for online-rating communities to solve.
Some sites attempt to remove suspect posts using automated filters that search for extremely positive or negative language, especially when the review comes from someone with a short résumé. But the filters do their work out of sight, and that lack of transparency can breed mistrust, or worse.
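As a rough illustration, here is what such a filter might look like in spirit. The word list, threshold, and function name are invented for the example; no claim is made about how Yelp or any other site actually screens reviews.

```python
# Invented word list and threshold; real sites guard their actual criteria.
EXTREME_WORDS = {"best", "worst", "amazing", "terrible", "perfect", "scam", "awful"}

def looks_suspect(review_text: str, reviewer_post_count: int) -> bool:
    """Flag reviews that pair extreme language with a short posting résumé."""
    words = set(review_text.lower().replace(",", " ").replace("!", " ").split())
    uses_extreme_language = bool(words & EXTREME_WORDS)
    has_short_resume = reviewer_post_count < 3
    return uses_extreme_language and has_short_resume

print(looks_suspect("The best pizza in town, perfect every time!", 1))       # True
print(looks_suspect("Solid pizza, friendly staff, a bit slow at lunch.", 1))  # False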
Consider the case of the local-business review site Yelp, which filters out suspect reviews. Its CEO and co-founder Jeremy Stoppelman defends the practice by pointing to classified advertisements placed by business owners offering payment for positive reviews. Yet some businesses suspect more sinister forces at work. Earlier this year a coalition of local business owners sued Yelp, accusing the company of running what amounted to a digital extortion racket. The lawsuit claims that sales representatives from Yelp would call businesses and make a simple offer: advertise with us, and we’ll make negative reviews disappear.
The company vigorously denies the allegations, maintaining that any removals are automated and coincidental. Still, Yelp has refused to divulge how its filters operate, lest unscrupulous users employ that information to game the system. That secrecy has fed the perception that the company itself might be tilting the playing field.
The system is not beyond repair, however. Clemons points to RateBeer.com, which has attracted some 3,000 members who have rated at least 100 beers each; all but the most obscure beers have been evaluated hundreds or thousands of times. The voluminous data set is virtually manipulation-proof, and the site’s passionate users tend to post on all beers they try—not just ones they love or hate.
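The arithmetic behind that manipulation resistance is straightforward. The sketch below uses assumed figures, not RateBeer's, to show how little a single planted five-star review moves an average once the honest ratings number in the hundreds.

```python
# Assumed figures for illustration: one shill's five-star review added to a
# pool of honest ratings averaging 3.5 stars.
def shifted_average(honest_mean: float, honest_count: int, shill_score: float = 5.0) -> float:
    """Average after one shill review joins honest_count honest ones."""
    return (honest_mean * honest_count + shill_score) / (honest_count + 1)

for n in (10, 100, 1_000):
    print(f"{n:5d} honest ratings averaging 3.50 -> {shifted_average(3.5, n):.3f}")
```

With ten honest ratings the shill drags the average up by more than a tenth of a star; with a thousand, the needle barely moves.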
Of course, reviewing 1,000 beers is easier (and cheaper) than rating the same number of restaurants or hotel rooms. Until other sites amass data of comparable volume and quality, an old truism may be the consumer's best advice: buyer beware.