LetsRun.com Shoe Grades

What is a LetsRun.com Shoe Grade?

LetsRun.com Shoe Grades, or LRC Grades, are shoe ratings that are adjusted to account for:
  1. The number of runners who reviewed a specific shoe.
  2. The characteristics of the particular runners who reviewed a shoe.

Why make adjustments? Why don't you just show average reviewer ratings?

Imagine two shoes. The first has 5 reviews and an average overall rating of 9.0. The second has 50 reviews and an average overall rating of 8.5. Which shoe is better? The first shoe has a higher average rating, but it only has 5 reviews. The second shoe has a lower average rating, but it has 10 times as many reviews.

LRC Grades are generated from statistical modeling techniques designed to account for sample-size-related uncertainty. These scores help remove the guesswork for buyers trying to figure out how two shoes with different numbers of reviews compare to each other.
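
LetsRun does not disclose the exact LRC model, so the following is only a minimal sketch of one common way to handle sample-size uncertainty: a "shrinkage" (weighted) average that pulls each shoe's rating toward a site-wide mean, with shoes that have fewer reviews pulled harder. It uses the two shoes from the example above; the prior mean of 8.0 and prior strength of 20 are made-up illustration values, not LRC parameters.

```python
# Sketch of a shrinkage adjustment for sample size (NOT the actual LRC method).
# A shoe's average is blended with a site-wide prior mean; the fewer reviews a
# shoe has, the more weight the prior gets.

def shrunk_rating(avg_rating, n_reviews, prior_mean=8.0, prior_strength=20):
    """Weighted blend of a shoe's own average and a site-wide prior mean.

    prior_mean and prior_strength are illustrative placeholders, not LRC values.
    """
    return (n_reviews * avg_rating + prior_strength * prior_mean) / (n_reviews + prior_strength)

# The two shoes from the example above:
shoe_a = shrunk_rating(9.0, 5)    # 5 reviews, raw average 9.0
shoe_b = shrunk_rating(8.5, 50)   # 50 reviews, raw average 8.5

print(f"Shoe A adjusted: {shoe_a:.2f}")  # ~8.20 -- pulled strongly toward the prior
print(f"Shoe B adjusted: {shoe_b:.2f}")  # ~8.36 -- moves less; more data behind it
```

Under these illustrative settings, the shoe with 50 reviews ends up ranked higher than the shoe with 5 reviews, even though its raw average is lower, because there is more evidence behind its rating.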

You said LRC Grades account for the types of runners who review a shoe. Why?

We have a data set consisting of thousands of shoe reviews from some of the fastest and most educated runners in the world. These data include not just shoe ratings, but also information about who reviewed each shoe. We have analyzed these data to figure out if the ratings reviewers give their shoes are related to characteristics such as how many miles per week they run, how fast they train, and whether they report themselves to be injury prone.

Overall, shoe ratings do not vary drastically based upon who reviews them. However, there are some differences. For example, runners who train at a high level and who are unlikely to suffer injuries rate their shoes slightly higher, on average. Conversely, runners who are unlucky enough to be injury prone tend to rate their shoes somewhat lower.

Consequently, if by chance a shoe with relatively few reviews happened to be reviewed by a handful of injury-prone runners, it might be rated lower than if it had been reviewed by a handful of high-level runners who rarely suffer injuries.

When shoes have a large number of reviews, these adjustments do not matter very much. But they can be helpful when trying to compare shoes with fewer reviews.
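
Again, the actual LRC model is not public, but the sketch below shows one generic way a reviewer-mix adjustment like this can work: estimate how much traits such as being injury prone shift ratings on average, remove that shift from each review, then average. All numbers, trait names, and effect sizes here are made up for illustration.

```python
import numpy as np

# Illustrative reviews of a single shoe. Trait columns: [is_injury_prone, high_mileage]
traits = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 0],
    [1, 1],
], dtype=float)
ratings = np.array([7.5, 7.0, 9.0, 8.5, 8.0])

# In practice these effects would be estimated from the full review data set
# across all shoes; here they are made-up values for illustration only.
effects = np.array([-0.4, +0.2])  # injury prone lowers ratings, high mileage raises them

# Subtract each reviewer's estimated "trait effect" before averaging, so the
# shoe is not penalized just because its few reviewers skew injury prone.
adjusted = ratings - traits @ effects
print(f"Raw average:      {ratings.mean():.2f}")   # 8.00
print(f"Adjusted average: {adjusted.mean():.2f}")  # 8.16
```

With many reviews, the reviewer mix tends to even out and the adjusted and raw averages land close together; with only a handful of reviews, a skewed mix can move the raw average noticeably, which is where an adjustment like this earns its keep.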

Do you adjust these scores based upon things like reviewers' foot strike or pronation?

We do not make adjustments to shoe grades based upon runner characteristics that guide buyers' shoe purchasing decisions.

For example, runners who overpronate are often encouraged to buy stability shoes designed to correct for overpronation. Making adjustments to LRC Grades based upon reviewers' pronation would lead to a confusing situation where the LRC Grades would convey how a shoe designed for overpronators would be rated by an "average" runner (i.e., one who is less likely to overpronate).

I don't buy it. Can I just see the average ratings?

Of course. Every shoe page also presents "raw scores," including the mean and median of each sub-rating for a specific shoe.
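
For clarity, the raw scores are exactly what they sound like: plain, unadjusted means and medians computed over each sub-rating. A minimal sketch is below; the sub-rating names are hypothetical placeholders, not necessarily the ones shown on a shoe page.

```python
import statistics

# Hypothetical sub-ratings for one shoe (names are placeholders).
reviews = [
    {"cushioning": 9, "durability": 7, "fit": 8},
    {"cushioning": 8, "durability": 8, "fit": 9},
    {"cushioning": 10, "durability": 6, "fit": 8},
]

for sub_rating in reviews[0]:
    scores = [r[sub_rating] for r in reviews]
    print(f"{sub_rating}: mean = {statistics.mean(scores):.2f}, "
          f"median = {statistics.median(scores)}")
```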

What type of statistical methods do you use to make these adjustments?

We crunched the data using cutting-edge machine learning and artificial intelligence methods, relying upon several data centers of computational power. Just kidding. We have to keep some things a secret, but we can tell you that we use techniques common to researchers who evaluate and rank entities like people, organizations, and products.