

The psychology of rating: It's hard, but better, to be honest

By Amanda Roosa

Last updated August 24, 2021

In 2014, the Chinese government unveiled plans for a social credit system, aiming to have it running by 2020. The intent behind a national reputation system is to reward or punish citizens based on their economic and social behavior. Penalties include flight bans, exclusion from private schools and prestigious jobs, restricted internet access, exclusion from hotels, and more.

The rating system brings to mind the Black Mirror episode, “Nosedive,” in which users rate online and in-person interactions on a five-star scale that then affects virtually everything in a person’s life. But the Chinese government’s rationale, as reported by Wired in June 2019, was framed around building trust: “…as trust-keeping is insufficiently rewarded, the costs of breaking trust tend to be low.” In theory, a person builds credit by being trustworthy and is rewarded, and loses credit—and perks or access to things—when they are not. China has both private and government social credit programs in place, and while some of the repercussions may seem extreme, the overall concept is not. Rating systems are a familiar and common practice in the U.S. and around the world, from marketplaces like Uber and Airbnb to review platforms and review-driven sites like Yelp, Amazon, YouTube, and even Netflix. While ratings and reviews may build consumer trust, there’s more at play in a rating than just a good or bad experience.

In theory, a person builds credit by being trustworthy and is rewarded, and loses credit—and perks or access to things—when they are not.

The moral dilemma

Take Uber as an example: each time you complete a ride, you’re asked to rate your driver on a scale of one to five stars—and, in turn, the driver rates you as a passenger. If a driver’s rating dips below 4.6, they can lose their job, and if a rider’s rating is low, a driver can refuse to pick them up. This encourages drivers and riders to be on their best behavior—to trust, as it were, that the ride will be safe and the passenger well-behaved. However, people often have a difficult time rating others accurately. According to NPR’s social science correspondent Shankar Vedantam, “People don’t rate people harshly because they don’t want to harm the other person. You might be unhappy as a passenger but you don’t want to ruin a person’s livelihood.”

Yet, let’s say you have a dangerous experience with a bad Uber driver. It would be easy to rate the driver higher than deserved to avoid the social guilt of possibly ruining a person’s livelihood, but doing so might also put future riders at risk. NPR’s Vedantam reminds us that while people who give accurate ratings deal with the immediate discomfort of giving a low rating to a bad driver, they ultimately provide better information to the next rider. Unfortunately, according to Vedantam, most people pass the bad driver along or decide not to rate at all.

While people who give accurate ratings deal with the immediate discomfort of giving a low rating to a bad driver, they ultimately provide better information to the next rider.

Ratings: all or nothing

Design, copy, colors, and the timing of a rating prompt all influence how we interact with rating systems, and with so many platforms taking different approaches to user experience and gamification, rating can feel like a burden to users. To add to the confusion, your version of a five-star rating might mean something different from mine. Himanshu Khanna, CEO of Sparklin, makes the point that “Personality, mood, environment, urgency of the requirement, and eventual gratification” all weigh in on how a user rates something. Giving accurate, consistent ratings across multiple, varying platforms (especially when it comes to rating people or their behavior) takes time and can be difficult—and let’s face it, most of us are lazy when it comes to rating and giving reviews.

But when users do take the time to rate, as YouTube discovered, they tend to go all or nothing, reacting in extremes. YouTube found that videos were rated either five stars or one, and the rest of the community didn’t bother to react or rate—which may be why YouTube switched from a five-star rating system to a thumbs-up, thumbs-down system. While this binary system makes rating videos easier for viewers, it still doesn’t guarantee accurate ratings.

[Read also: The trust economy and why it’s okay to get a bad rating]

So, what might encourage users to give accurate ratings? Explanations. Time Magazine reported on a study by Sarah Moore, a marketing professor in Canada, in which participants who explained their positive consumer experiences ended up rating them lower than people in the control and non-explanation groups, while participants who explained their bad experiences rated them higher than those who didn’t explain.

“Explaining why a chocolate cupcake tasted so divine makes us love the cupcake a little less,” Moore noted in a statement, “while explaining why a movie was so horrible makes us hate the movie a little less.” Understanding positive or negative events can lead us to feel less intense emotions and can result in more middling, honest reviews.

“Explaining why a chocolate cupcake tasted so divine makes us love the cupcake a little less, while explaining why a movie was so horrible makes us hate the movie a little less.”
Sarah Moore

This seems to work for Yelp, one of the more reliable crowd-sourced review forums, which requires users to give an explanation along with each starred rating. Then again, Yelp users are usually the kind of people who don’t mind taking the time to leave a review, because that’s what the platform is made for.

In ratings we trust

Despite the myriad rating systems out there—and how bad we are at accurately reflecting our experience—ratings and reviews are influential. According to Forbes, we take ratings seriously. “Eighty-five percent of people trust online reviews as much as a personal recommendation and every one-star increase can lead to a 5 to 9 percent increase in revenue. Online reviews are also an important ranking signal in Google’s algorithm.”

Rating systems aren’t likely to go away, and it seems we’ll continue to put our trust in them. But every so often, we should remind ourselves to take ratings and reviews with a grain of salt and, at the same time, make an effort to give more accurate, honest ratings and reviews of our own. For businesses, creating a rating environment that fosters better, more reliable results is also a way to collect valuable customer feedback. Rating people may not become easier on our conscience, but it helps to think that honest, thoughtful reviews might improve the rating systems in place around us. Whether or not we follow China’s lead, there’s still a clear need to be able to trust one another.
