Eye Test vs. Analytics

It’s the conversation that has captured the hearts and minds of basketball fans across the globe, one that becomes increasingly prevalent as the sport advances, and one that doesn’t look to be going away any time soon; yet, it is the most pointless debate in the history of the sport.

Ever since Dean Oliver spurred the analytics revolution with his 2004 book, Basketball on Paper: Rules and Tools for Performance Analysis, the NBA has seen constant growth in the number of advanced statistics and one-number metrics. Since then, a (false) belief has evolved: when it comes to evaluating basketball players, someone is either an “eye-test guy” or an “analytics guy.” The evaluator either watches a five-minute highlight reel and lets his or her biases take over or skims a stat sheet without considering any context.

There are extremists in the sport, and the two previous examples define small subsets of the basketball population; however, these cases of dogmatism are far less common than expected. Very rarely will we come across someone who adheres to only one approach, and not just because even the “eye-test guys” will cite points per game at times. Basketball illustrates how difficult self-evaluation is: people assume they belong to one methodological camp when, in practice, they borrow from both.

I’ve held hundreds of conversations across various platforms over the years, especially ones that compare players. When I reference a one-number metric to open a debate, I’m usually told some variation of “watch the game.” When I begin with an on-court observation (for example, Hassan Whiteside’s more error-prone defense), I’m told to look at how good his impact metrics are. More often than not, these responses come from people who use a mixture of both the eye test and stats. So what’s the deal here?

The “Eye-Test Guys”

For the purpose of this exercise, an “eye-test guy” is someone who relies solely on his or her impressions from watching basketball. This definition does extend to the box score, which virtually all basketball fans cite, but the emphasis here is on intuition and personal value systems.

Similar to analytics, there’s a “smart” way to eye-test games and there’s a… “risky” way to eye-test games. The more comprehensive approach emphasizes the aspects of the game that aren’t, or can’t be, tracked. It would clearly be unwise to sit through thirty-six minutes of eighty-two games to track a star player’s points per game when the figure could be found on Basketball-Reference with the click of a button (unless you’re a scorekeeper, of course). The second hindrance, a natural tendency of eye-testing, is to “ball-watch,” meaning the viewer only follows the path of the basketball. This isn’t necessarily a “bad” thing to do, but it limits the observations the viewer can make given that so many court actions occur off the ball. Ball-watching is a large reason behind the heavy emphasis on offense in a typical evaluation.

What makes someone an “eye-test guy”? Namely, why would someone push back against the so-called advancements in basketball? A large factor is the time at which the person entered the world of basketball evaluation. If they grew up in the 1970s with points per game and field-goal percentage as the game’s leading measurements, then the concepts of Plus/Minus and RAPM will likely seem like remote hokum. Conversely, if someone enters the field in the late 2010s, which hosted the boom of impact metrics, then they are more likely to accept these figures as important and necessary measurements.

This concept of “analytical acceptance” is the driver behind “eye test versus analytics” conversations. It also explains why the analytics-oriented crowd (apart from the extremists) is usually easier to converse with and more open-minded than the traditional crowd. I’ll be fairly critical of both sides in these paragraphs, but the receptiveness to new ideas and the smaller risk of belief persistence among the analytics community do give it an edge over the traditional community in the modern era. (Although this is really just one person’s opinion.) This is by no means to say analytics are more important than the eye test; rather, those who support the former are a better fit for the contemporary state of basketball evaluation.

The “Analytics Guys”

Because analytics have dominated the public eye for only a short time, the proportion of “analytics guys” is far smaller than that of “eye-test guys.” In contrast to the latter, “analytics guys” don’t feel the need to directly observe the court actions of basketball games to form an opinion, instead opting for stats and metrics that estimate a player’s value to his team. This style is a “low-risk, low-reward” approach of sorts. Impact metrics do reflect a player’s value to his team more than most realize, but the largely overlooked detail is that these measurements represent some players very well and others quite poorly.
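To ground what an impact metric actually computes, here is a minimal sketch of the ridge-regression idea behind RAPM, the metric mentioned earlier. The lineup “stints,” player labels, and score margins below are invented purely for illustration; real implementations use thousands of stints and hundreds of players.

```python
import numpy as np

# Each row is one lineup "stint": +1 if a player was on the court for
# the home side, -1 for the away side, 0 if on the bench.
# Columns correspond to toy players A, B, C, D (made-up data).
X = np.array([
    [ 1,  1, -1, -1],   # A and B vs. C and D
    [ 1, -1,  1, -1],   # A and C vs. B and D
    [-1,  1,  1, -1],   # B and C vs. A and D
], dtype=float)

# Target: score margin per 100 possessions during each stint.
y = np.array([6.0, -2.0, 3.0])

# The ridge penalty shrinks noisy estimates toward zero, which is the
# "regularized" in Regularized Adjusted Plus-Minus.
lam = 100.0

# Closed-form ridge solution: beta = (X'X + lam*I)^(-1) X'y
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for player, rating in zip("ABCD", beta):
    print(f"Player {player}: {rating:+.3f} points per 100 possessions")
```

Even in this toy form, the design choice is visible: every player on the court shares credit or blame for the margin, and the penalty term keeps small samples from producing wild ratings. It also hints at why some players are captured better than others, since the regression only sees who was on the floor, not what each player did.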

“Analytics guys” are rare, not only because of the more recent interest in advanced statistics, but because this mindset takes a very specific introduction to evaluating the sport. To build a relentless dependence on advanced statistics, someone must have been acquainted with these measurements before learning game-watching techniques. If analytics are presented as the unbiased, holy-grail statistics made to surpass all human error, a receptive mind can develop a strong attachment to them. As mentioned earlier, impact metrics have varying levels of correctness: some measures are very good, while others are distorted by the confounding variables that dilute them.

It’s more difficult to pass judgment on the “analytics guys” than on the “eye-test guys” due to their rarity and short-lived reign. The largest positive of the mindset is the fairly accurate picture of value it references; the largest negative is a total lack of context behind the numbers. It would be impossible to distinguish the good scores from the poor ones with numbers alone.

The Answer?

Analytics and the eye test are “one and the same.” They’re both observations of what happens on the basketball court. Very little separates the two, with the main difference being how the information is displayed: eye-test findings will likely take the form of detailed notes, while analytics findings are distilled into one or two numbers. The “eye-test versus analytics” debate doesn’t compare apples to oranges as previously thought; it’s more like comparing red apples to green apples.

Although a lot of people struggle to see a middle ground between the eye test and analytics, one clearly exists: the eye test is best for observing certain actions while analytics are better for others. If the goal is to judge off-ball movement, there aren’t any figures that will effectively capture it, which makes the eye test the more appropriate route. If the goal is to estimate a player’s most likely influence on the game score, analytics is the superior option. I’ve had several eye-test enthusiasts claim an acquired ability to determine impact with visual methods alone, which simply isn’t true. Human minds aren’t capable of absorbing and interpreting tens of thousands of events; claiming otherwise overestimates our cognitive processing. That’s why analytics were created: to ease some of the burden on our brains.
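To make that burden concrete, here is a minimal sketch of the bookkeeping a raw plus-minus figure automates. The player names and events are made up, and a real season’s play-by-play log contains tens of thousands of such entries per team.

```python
from collections import defaultdict

# Hypothetical, heavily simplified play-by-play log:
# (players on the court, points margin produced while they were out there).
events = [
    ({"Adams", "Baker", "Cole"}, +2),
    ({"Adams", "Baker", "Dunn"}, -3),
    ({"Baker", "Cole", "Dunn"}, +1),
    # ...a real log continues for tens of thousands of rows...
]

plus_minus = defaultdict(int)
for lineup, margin in events:
    for player in lineup:
        plus_minus[player] += margin  # everyone on court shares the margin

for player, total in sorted(plus_minus.items()):
    print(f"{player}: {total:+d}")
```

A computer tallies this trivially; a viewer relying on memory alone cannot, which is exactly the division of labor described above.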

The bottom line is that the eye test and analytics are each integral parts of basketball evaluation. Adhering to one method or the other is a recipe for disaster. Each provides important pieces toward an informed opinion, with one being better suited for some aspects and its counterpart better for others. In essence, they’re different interpretations of the same information, which makes the “eye-test versus analytics” debate the most fruitless conversation in basketball.


