5 NBA Thoughts [#2]

(© The Ringer)

Inspired by the likes of Sports Illustrated, SB Nation, and Thinking Basketball, among countless others, I'm continuing my "5 Thoughts" series here. Watching and studying the NBA means there's a constant jumble of basketball thoughts in my head. However, I rarely touch on them in detailed articles or videos, so that's where this series comes in: a rough, sporadic overview of five thoughts on the NBA!

1. Bradley Beal and the OPOY

During a live chat discussion on Halftime Hoops, a surging topic was whether the league should implement an Offensive Player of the Year (OPOY) Award to counterbalance its defense-only counterpart. One contribution to the topic was that an OPOY Award would too closely resemble the scoring title leaderboard: players like Bradley Beal and Stephen Curry would inevitably win because of their scoring, so why bother creating a new award to recognize a pool of players we're already aware of?

Voter tendencies could support this claim; in other words, voting patterns suggest scoring is the largest factor in offensive evaluation, so in a practical sense, the opinion makes sense. To me, the more pressing question is what that says about voter criteria and how it influences the public eye in general. There's always been a misstep in how offense is treated: teams win games by scoring more points, so the highest-volume scorers must be the best offensive players. The weak link here is the exclusion of other skills; namely, shot creation, passing, screening, cutting, floor-spacing, and all the crucial qualities that go into building a high-level offensive team.

Bradley Beal isn't in the conversation as a top-5 offensive player in the league to me, and while that's mainly due to a lack of eliteness in the multiple areas that would bolster his overall value on offense, his scoring doesn't pop out as much to me as it does to others. Efficiency is generally treated as a luxury piece, with the idea that volume scorers with "good enough" efficiency (within a reasonable range of league average) shouldn't be penalized. However, an argument could be made to flip the script entirely: efficiency may be just as important as volume, if not more so.
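To make that concrete, here's a quick sketch (hypothetical numbers, and an assumed league-average true shooting mark) of one way to weigh volume and efficiency together: how many points a scorer adds over what league-average efficiency would produce on the same shot volume.

```python
# Illustrative sketch (hypothetical numbers, assumed league-average true shooting):
# how many points does a scorer add over what a league-average-efficiency player
# would produce on the same shot volume?

LEAGUE_AVG_TS = 0.57  # assumed league-average true shooting percentage

def pts_added_vs_average(true_shooting_attempts: float, ts_pct: float) -> float:
    # TS% = PTS / (2 * TSA)  =>  PTS = 2 * TSA * TS%
    return 2 * true_shooting_attempts * (ts_pct - LEAGUE_AVG_TS)

# A ~30 PPG scorer at exactly average efficiency vs. a ~25 PPG scorer at elite efficiency.
print(round(pts_added_vs_average(26.3, 0.57), 1))  # 0.0 points added per game
print(round(pts_added_vs_average(19.2, 0.65), 1))  # ~3.1 points added per game
```

By this (admittedly simplified) lens, the lower-volume, higher-efficiency scorer is doing more for his team's offense per game than the pure volume scorer.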

Beal is currently fourth in the league in scoring rate, clocking in at 30.3 points per 75 possessions. (That also brings up the issue with the scoring title being assigned to the points-per-game leader: it erroneously assigns credit to high-minute players on fast-paced teams.) Thus, there are too many hindrances to placing a heavy amount of stock in how many points a player scores each game. System design, quality of teammates, the plethora of other skills that amplify a team's offense: all evidence points toward an OPOY Award being an entirely separate entity from the scoring title.
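For reference, here's a minimal sketch of how a pace-and-minutes adjustment like points per 75 possessions works; the inputs are hypothetical, and on-floor possessions are approximated from team pace and minutes played rather than counted directly.

```python
# Minimal sketch of a pace-and-minutes adjustment (hypothetical inputs): the same
# per-game scoring average translates to different per-75-possession rates
# depending on how fast the player's team plays.

def points_per_75(points_per_game: float, team_pace: float,
                  minutes_per_game: float, game_minutes: float = 48.0) -> float:
    """team_pace is team possessions per 48 minutes; on-floor possessions are
    approximated by the player's share of game minutes."""
    possessions_on_floor = team_pace * (minutes_per_game / game_minutes)
    return points_per_game / possessions_on_floor * 75

# 31 PPG in 36 minutes on a fast team (104 pace) vs. a slow team (96 pace).
print(round(points_per_75(31, 104, 36), 1))  # ~29.8
print(round(points_per_75(31, 96, 36), 1))   # ~32.3
```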

2. Making teammates better is a myth

Raise your hand if you've ever seen someone argue on behalf of one player over another because "he makes his teammates better." (If you aren't raising your hand right now, I'd be very surprised.) High-quality facilitators are often treated in this fashion, and the cause for belief isn't way off track: when elite shot creators and passers are setting up teammates for open field goals that would've otherwise been contested, the teammates' shooting percentages will increase. Thus was born the illusion that a player can make his teammates better.

The problem with this idea is that it treats a player's "goodness" as dependent on who's on the floor with him. If an elite spot-up shooter is paired with an all-time gravity threat like Stephen Curry, the frequency with which his shots are open will likely increase. Conversely, if he were playing alongside someone antithetical to that style, like Elfrid Payton, his shooting efficiency would probably go down. The big question is whether the increase in scoring efficacy should be attributed to the shooter or the playmaker. To me, the answer is fairly simple given how complex most of these problems can be.

The teammate's shooting skill, in a vacuum, is unaffected by who he plays with. Namely, regardless of lineup combinations and synergies, the true skill of his shooting remains constant. A three-point specialist doesn't go from a 40% to a 45% shooter overnight because he starts playing with better teammates. Therefore, a "better" interpretation of these shooting improvements is that the playmaker is improving the team's shot selection (per-possession efficiency), not the teammate's shooting skill. This case applies to every other skill that creates a similar illusion.
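A toy example (with assumed, made-up shooting splits) shows the idea: the shooter's skill on each shot type stays fixed, and only the mix of open versus contested attempts, the part the playmaker controls, changes the per-possession output.

```python
# Toy example with assumed (hypothetical) shooting splits: the shooter's skill on
# each shot type is fixed, but the mix of open vs. contested attempts created by
# the playmaker changes the per-possession output.

OPEN_3_PCT = 0.42       # assumed make rate on open threes
CONTESTED_3_PCT = 0.33  # assumed make rate on contested threes

def expected_points_per_attempt(open_share: float) -> float:
    """Expected points per three-point attempt given the share of open looks."""
    fg_pct = open_share * OPEN_3_PCT + (1 - open_share) * CONTESTED_3_PCT
    return 3 * fg_pct

print(round(expected_points_per_attempt(0.30), 2))  # ~1.07 next to a weaker creator
print(round(expected_points_per_attempt(0.60), 2))  # ~1.15 next to an elite creator
```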

3. Rim protection vs. perimeter defense

After spending a good amount of time in the aforementioned Halftime live chat rooms, it's become increasingly clear that a large chunk of NBA fanatics focus solely on man defense when evaluating this side of the ball. This directly parallels the strong focus on isolation scoring when the general public evaluates offensive skill. Just as we discussed in the first thought, these approaches can be extremely limiting, leaving out a lot of the information and skills that go into building strong team defenses. This creates an ongoing conflict in weighing the value of rim protection against general "perimeter" defense, so which one is more valuable?

A recurring theme in basketball analysis's anti-education culture is the tendency to try and solve complex problems with the simplest solutions possible. (This led to the original reliance on the box-and-record MVP approach.) So when defensive skills outside of isolation defense and shot-blocking are brought up, they might be met with some reluctance. The opposite approach will be used here: the most important piece of the premise is that rim protection isn't limited to shot-blocking ability, just as perimeter defense isn't limited to "locking up" an opponent's best perimeter player. The cause-and-effect relationships these skills have on the totality of defensive possessions can be much stronger than meets the eye.

Strong rim protectors can drastically alter the opposition's game-planning: forcing an increased focus on a perimeter attack deters a large number of field-goal attempts at the rim, thus taking away the sport's (generally) most efficient shots. Similarly, perimeter defense doesn't just refer to a player's one-on-one defense. How do they guard the pick-and-roll? How do they maneuver around screens (and perhaps more importantly, ball screens)? Do they make strong rotations and anticipate ball movement well, or is there some hesitation? (These are just a few examples. As stated earlier, the sport is much too complex to be solved through an ensemble of steals and blocks.)

The simple answer is that it depends. A lot of people will say rim protection is generally more valuable, and in a broader sense, I would agree. The (generally) most efficient shots are taken in the paint, so contesting, deflecting, or deterring those shots would seem to have the greater positive effect on a team's overall defensive performance. However, as the "truer" answer suggests, most of that value is dependent on the context of the game. During the Playoffs, we have seen a league-wide trend of non-versatile centers losing a degree of impact against certain postseason opponents. (I've discussed this a lot in recent weeks.) Versus great-shooting teams with a strong midrange arsenal, perimeter defenders will likely see an increase in value: defending some of the league's best floor-raising offense (open midrange shots from elite midrange shooters) will put a cap on the opposition's offensive ceiling.

A total discussion wouldn't even come close to ending here, but this brief overview, paired with other in-depth analysis, points pretty clearly toward rim protection often being more valuable than perimeter defense. This explains why players like Rudy Gobert and Myles Turner will generally be viewed as stronger positive-impact defenders than ones like Ben Simmons and Marcus Smart. However, with the continuing rise in outside shooting efficacy and offensive development, defensive versatility spanning multiple areas of the court may become more valuable than either.

4. Building an RAPM prior

Regularized Adjusted Plus/Minus (RAPM) is an all-in-one metric that estimates the effect of a player's presence on his team's point differential. I've written about the calculations behind RAPM before, but a topic I've never covered in depth is how "priors" are constructed. For those unfamiliar with these processes, RAPM uses a ridge regression (a form of Tikhonov regularization) that reduces the variability among possible APM values, regressing scores toward average to minimize the higher variance of "regular" Adjusted Plus/Minus. A "prior" is used in place of that average (net-zero) target to pair outside information with the pure APM calculations.
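As a rough sketch of that regression step, here's what the regularized solve looks like on toy stint data (not a real design matrix), using scikit-learn's Ridge as a stand-in:

```python
# Rough sketch of the ridge-regression step behind RAPM, using toy stint data
# (not a real design matrix). Rows are stints; columns are players: +1 if the
# player is on the floor for the "home" side, -1 for the opposing side, 0 if off.

import numpy as np
from sklearn.linear_model import Ridge

X = np.array([
    [ 1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1, -1],
    [-1,  1,  0],
], dtype=float)

y = np.array([8.0, 4.0, -2.0, -6.0])   # point differential per 100 possessions per stint
w = np.array([20, 35, 15, 25])         # possessions per stint, used as sample weights

# alpha controls the shrinkage: larger values pull every estimate harder toward
# zero (league average), which is what tames the variance of pure APM.
model = Ridge(alpha=1.0, fit_intercept=False)
model.fit(X, y, sample_weight=w)
print(model.coef_)  # regularized per-100 impact estimates for the three players
```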

Priors are often referred to as "Bayesian priors" because they align with the philosophy in Bayesian statistics that encourages pairing the approximations of unknown parameters with outside information known to hold value in the holistic solutions of these problems. The basketball equivalent would usually see APM regress toward the box score and past results, among other sources. Current examples include Real Plus/Minus, which regresses toward the box score; BBall-Index's LEBRON, which regresses toward PIPM's box component; and Estimated Plus/Minus, which regresses toward an RAPM estimator that uses the box score and tracking data (which the NBA started to collect league-wide in the 2013-14 season).
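One common way to implement such a prior, sketched below under the assumption that the prior is simply a per-player vector (say, a box-score-based estimate), is to fit the ridge regression on whatever the prior fails to explain and then add the prior back, so the shrinkage target becomes the prior rather than zero:

```python
# Sketch of folding a prior into the same regression (assuming the prior is a
# per-player vector, e.g., a box-score-based estimate): fit the ridge on what the
# prior fails to explain, then add the prior back, so estimates are shrunk toward
# the prior instead of toward zero.

import numpy as np
from sklearn.linear_model import Ridge

def rapm_with_prior(X, y, prior, alpha=1.0, sample_weight=None):
    residual = y - X @ prior                        # scoring margin unexplained by the prior
    model = Ridge(alpha=alpha, fit_intercept=False)
    model.fit(X, residual, sample_weight=sample_weight)
    return prior + model.coef_                      # shrinkage target is now the prior

# Usage with the toy stint data above and a hypothetical box prior:
# estimates = rapm_with_prior(X, y, prior=np.array([3.0, 1.0, -2.0]), sample_weight=w)
```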

The composition of these priors has stood out to me, and during my research for an upcoming video podcast on ranking impact metrics, it became apparent that the use of on-off ratings was often a significant hindrance to a metric's quality. It's long been clear that the extreme variability of on-off data can limit a metric's descriptive power in smaller samples because it doesn't adjust for hot streaks, lineup combinations, and minute staggering. These confounding variables can make an MVP-level player appear to be nothing more than a rotational piece, and vice versa. Because there's simply too much that on-off data doesn't account for, even as the skill curves begin to smooth, its usage among modern impact metrics is probably on a downward trajectory.

Conversely, the more promising form of statistics seems to be tracking data. Although most of these stats weren't tracked prior to the 2013-14 season, there's a good amount of non-box data from resources like PBPStats, which could help in creating a prior-informed RAPM model that extends back to the beginning of the twenty-first century. These types of stats are estimated to have the least dependence on team circumstances among the "big three" that also includes box scores and on-off ratings, which gives tracking data an inherent advantage. During his retrodiction testing of several high-quality NBA impact metrics, Taylor Snarr (creator of EPM) found evidence to suggest this pattern, and although it hasn't quite hit the mainstream yet, I expect the next generation of advanced stats to be powered by tracking data.

5. The statistical evaluation of defense

Compared to the offensive side of the game, defense has always been a bit of a black box. It's much harder to evaluate and even harder to quantify. (When the DPOY Award was instituted in the early 1980s, Magic Johnson would receive a good share of votes.) This is an even larger problem given the limitations of the current box score, which only extends to defensive rebounds, steals, blocks, and personal fouls. The majority of attentive NBA watchers know the box score doesn't capture a lot of defensive information, so what statistical methods can be used to evaluate defense?

One of the up-and-coming statistics in recent years has been opponent field-goal percentage: the proportion of defended field-goal attempts that were made. These marks are seen as especially useful by a community that mostly considers isolation defense, but as a measure of the quality of contests alone, these stats are often misused. Some will say a defender played exceptionally well against LeBron James because, in the two's interactions, James made two of seven shots. ("Therefore, Andre Iguodala is the Finals MVP," if you catch my drift.) Not only are these measures not a definitive representation of how well a defender contested a player's shots (location and team positioning are big factors that affect these stats), but small samples are perennially mistreated.

Anyone with some familiarity with statistical analysis knows the variability of a sampling distribution goes down as the sample size increases. (The common benchmark is n = 30 to satisfy a "Normal" condition for some types of statistical tests.) Thus, when Kevin Durant shoots 21% on 12 attempts in a game against Alex Caruso, it doesn't mean Alex Caruso is Kevin Durant's kryptonite. The variability of opponent field-goal percentage, the lack of context surrounding the conditions of a field-goal attempt, and the infrequency with which these matchups occur create a statistic that's easy to abuse. If there should be one major takeaway from this post, it would be to KNOW AND UNDERSTAND YOUR DATA.
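To put a number on that noise, here's a quick sketch with a hypothetical shooter (an assumed 50% make probability on every attempt) showing how often a "lockdown"-looking line appears purely by chance:

```python
# Quick sketch of small-sample noise in defended FG% (hypothetical shooter with an
# assumed 50% make probability on every attempt): how often does he make 3 or
# fewer of 12 defended shots purely by chance, with zero defensive effect?

from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(round(binom_cdf(3, 12, 0.5), 3))  # ~0.073: roughly one game in fourteen
```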

Unfortunately, a lot of lower-variance and more descriptive defensive stats (defended FG% doesn't confirm a cause-and-effect relationship) aren't readily available to the public, but it's worth noting some of the advancements in the subject. More common statistics might be deflections, charges drawn, or total offensive fouls drawn (offensive fouls drawn paired with steals give total forced turnovers). However, the most exemplary case of defensive measuring I've seen comes from BBall-Index, whose data package includes a multitude of high-quality defensive statistics: box-out rates, rebounding "success rate," loose-ball recovery rate, rim deterrence, and matchup data, among countless others. Seriously, if you're looking to take your defensive statistical analysis up a notch, it's the real deal.

Impact metrics will always provide their two cents with their one-number estimates, and for the most part, they'll be pretty good ballpark values. But intensive film study (the building block of evaluating defense) may need to be paired with descriptive, intelligible defensive statistics to enhance the quality of analysis. This doesn't make defense any easier to evaluate than offense, but relative to the alternative (steals + blocks), it's a whole lot better.

How important are skills like volume scoring and isolation defense? Are the right players credited with individual statistical increases? And how are the futures of RAPM and defensive measurement shaping up? Leave your thoughts in the comments.


