Author: chromeder


  • The NBA’s Top 10 Offensive Players of 2021


(© The Ringer)

    Nearing the end of the 2021 Playoffs, with a whole new season of information on the league’s top players, we cycle around to yet another series of rankings. Rather than evaluating a player based on his overall impact, today’s edition starts with ballparking the value a player adds on offense alone. Because the rules and practices of the NBA are currently slanted toward offense, the best offensive players have a significantly greater impact on the scoreboard than the league’s best defensive players. And because offensive skills and tactics continue to develop and grow, ranking these players becomes an even more complex task. So what are we looking for here?

    Criteria

Unlike some lists, this will not rattle off the league’s top volume scorers. While teams win by scoring more points than the opposition, there is a multitude of ways a player can influence his team’s scoring other than taking the final shot. Keep that in mind if these rankings appear to be less fond of players like the Greek Freak, Bradley Beal, or Joel Embiid. While the conversation of exactly how valuable certain offensive skills are is a much larger one than today’s, some themes will pop out during the list.

Contrary to popular opinion, volume playmaking will be viewed through a slightly rosier lens, and that’s because a shot created has a greater expected value than a shot taken. Because an offensive possession is all about generating the most efficient shot, these mega-shot creators who can also score themselves will be ranked higher than the flashier, self-generated scoring types of stars that historically receive a lot of praise. I’ve written on the topic of volume versus efficiency before, and while there’s more to a player’s scoring value than these two measuring sticks, they’re valued as roughly (key word: roughly) equal.
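
    As a quick worked illustration of that expected-value gap (the percentages here are hypothetical, chosen only to show the mechanics): a contested, self-created two converting at 45% is worth less per attempt than a created catch-and-shoot three falling at 38%.

    $$\mathbb{E}[\text{self-created two}] = 2 \times 0.45 = 0.90 \text{ points} \qquad \mathbb{E}[\text{created three}] = 3 \times 0.38 = 1.14 \text{ points}$$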

Because offense is so often reduced to volume scoring, this list may appear to excessively praise great passers and off-ball players, but that is because both contribute toward high-value offensive possessions. Passing exploits the mishaps in a defense, while off-ball cutting and offensive rebounding pressure the rim and generate a ton of second-chance opportunities. The overarching point is that all offensive skills are in play here, and they’ll be weighed based on evidence of how valuable they are and, even more importantly than how they affect bad teams, how much they affect good teams.

    Honorable Mentions

Before diving into the list, let’s go over some honorable mentions, and why the list caps off at ten players. The first two players out, and the ones I saw as making the strongest arguments to slide in at the back end of the top ten, are Giannis Antetokounmpo and Karl-Anthony Towns. While they may raise the floor for teams as well as a few players on this list, they were lacking in major offensive categories that made the final cut just a bit easier. Players that were also in contention for the top fifteen include but are not limited to: Devin Booker, Bradley Beal, Joel Embiid, Paul George, Donovan Mitchell, and Zion Williamson.

    The last main sticking point for this list is that it is extremely fluid. Meaning, players who are close to one another are, for the most part, interchangeable to a reasonable degree. This list is also meant to act as more of a starting point than an ending one. A “part one” of sorts that can be added onto alongside a new influx of information and the benefit of hindsight. So, without further ado, I present my estimation of basketball’s ten-best offensive players today.

    The List

    10. Trae Young

During the past two seasons, Trae Young has evolved into one of the league’s premier shot creators, posting some of the highest estimates on record. He led the league with an estimated 18.5 shots created for teammates in the regular season, which held at a steady 16.4 through 15 Playoff games. Young’s passing was the catalyst to unlocking the value from his shot-creating. An astounding 85% of his assists in the regular season led to layups or three-pointers. While he doesn’t have the catch-and-shoot or off-ball proficiency to optimize his fit alongside other perimeter talents, Young’s floor-raising style has proven him to be one of the NBA’s most dangerous offensive weapons.

    9. Kyrie Irving

Despite the off-court drama, Kyrie Irving continues to string together some of the more underrated offensive campaigns in recent history, and there are a ton of positive indicators for him: He was +4.6 in offensive Estimated Plus/Minus, +4.1 in offensive LEBRON, and +3 in offensive Real Plus/Minus. Unlike his predecessor on this list, Irving is a great catch-and-shoot scorer, converting on 43% of these attempts in the regular season. Paired with elite finishing that takes advantage of the spacing created by the star talent around him, Irving can maintain on championship-level offenses a lot of the impact he’d otherwise use to raise poorer teams. That was the differentiator between him and a floor-raising stud like Trae Young.

    8. Damian Lillard

Lillard is one of the most special on-ball talents in the league right now. As the clear-cut primary ball-handler for Portland, he runs a lot of high pick-and-roll that unlocks: 1) his elite floor spacing and shooting gravity that pulls defenses far beyond the three-point line, and 2) open driving lanes that allow Lillard to pressure the rim. Because he pairs two extremely effective methods of scoring with the byproduct of high-level shot creation, Damian Lillard is very close to players multiple spots ahead of him on this list. His offensive impact metrics may even suggest a higher level than where he’s represented here, such as his +6.7 in the offensive component of EPM and +8.3 O-RAPTOR. Despite an elite Playoff series on paper (32 points per 75 possessions on +8% True Shooting), there are still lingering questions about how effectively defenses can scheme against Lillard in the Playoffs; so if forced to choose, he’s a notch under…

    7. Kawhi Leonard

I’m splitting hairs between Leonard and Lillard, but the main sticking point with Leonard is that his scoring is more resilient in the postseason. His 48% shooting from the midrange in the regular season (57% in the Playoffs) gives him the classic three-level scoring repertoire that Lillard (38% in the regular season and 15% in the Playoffs) doesn’t quite have. As the lead ball-handler with the Clippers, Leonard’s playmaking has shined as bright as ever. He doesn’t have the high-leverage assist power of the game’s very best, but his respectable passing keeps defenses at bay, letting him punish drop coverages and unlock his incredible scoring arsenal.

    6. Kevin Durant

We’ve had the benefit of viewing Kevin Durant through the lenses of various roster constructions. The only problem: those stints have been spaced far apart from one another. There are continual questions about whether he could handle the load as a primary ball-handler against elite defenses in the Playoffs, but he provides a ton of value as a secondary star as well. Durant’s all-time-level shooting and isolation scoring allow his game to fit well on most types of teams. And because he adds smaller amounts of value through passing and gravity as a roller, Durant still remains an All-NBA player due to offense alone. My confidence level is lower with Durant, mostly because I wish there were more of him for us to see, but my “most likely” spot for him ends up being sixth.

    5. Luka Doncic

A second consecutive player who’s very hard to rank, Luka Doncic. There’s a good argument that Doncic is currently shouldering the largest offensive load in NBA history, and he’s handling it extremely well. His creation estimates were second only to Trae Young in the regular season, and he placed first in the quick scoring proficiency model I whipped up some time ago. Because he creates so much offense through his individual actions, Doncic might be the best floor raiser in the league today. But because he can muck up possessions by holding the ball late in the shot clock, and due to a lesser-developed off-ball game, my only concern is how well he could maintain that value if he were playing alongside another ball-dominant guard. This is the lowest I could see Doncic based on how incredible he’s been. (If it isn’t already obvious, these rankings are really hard.)

    4. LeBron James

Once again, another player with lingering question marks. Last season, James made a great argument as the best healthy offensive player in the league with how well his motor ramped up for the Playoffs, allowing him to punish teams at the rim. His passing remains near its peak, but we saw his regular-season Passer Rating dip from historical heights to 8.3, suggesting he’s lost a bit of his passing value from a statistical perspective. There are also concerns about health and aging, so it’s difficult to fully assess healthy LeBron’s offensive prowess. But because I think he still fits on a good number of teams, I’ll slot him in at fourth, though this is a bit of an optimistic outlook. Lower rankings are perfectly justified.

    3. James Harden

Harden isn’t a traditionally scalable player, but he’s shown time and time again that he can provide oodles of impact on good teams. The only question with this is how much of his teammates’ roles are being sacrificed to incorporate Harden and his perennially league-leading time of possession. However, it has become clear his high-level shot creation will remain effective alongside other perimeter stars. Harden’s scoring took minor tolls from both a volume and efficiency standpoint, but he still averaged a steady 25 points per 75 on +4% relative True Shooting. I’ve gone back and forth between him and his teammate Durant for the past few months, but I went with Harden because Durant’s raw performances are far more likely to be the result of benefiting from optimal roster construction.

    2. Nikola Jokic

The razor-sharp battle for the top spot is ever-so-slightly lost by Jokic in my eyes. (I’ll explain more later.) He carried over his all-time passing capabilities from the 2019 and 2020 seasons, but managed to perfect the craft even more. Jokic’s half-court passing and creation reached career highs, enhancing the Nuggets’ offense through a layups-and-threes shot selection and doling out countless assists to the paint and the corners. The full court was his tapestry, and Jokic painted it with his passes, hitting leaking teammates as if it were target practice. When he wasn’t fighting for open position in the middle of the floor or screening for teammates, which added to his off-ball value, Jokic’s increased scoring kick took his offense toward historical levels. With a cleaner form and positive signals that accompany his shooting spikes, Jokic’s three-level scoring and league-leading passing create a combination that led to one of the greatest offensive seasons in history and a deserving MVP.

    1. Stephen Curry

    Narrowly edging out the MVP for the top spot on this list is Steph Curry, who manages to rack up more and more MVP-caliber seasons in Golden State. The argument that 2021 was his peak season is valid in that this very well might have been the best season of Curry’s career in a vacuum. While he loses some three-point dominance as the outside shot continues to evolve, Curry’s insane gravity unclogs the middle of the floor unlike any player ever. The classic images of teams sending traps on the perimeter early into the shot clock are great representations of the difficulties surrounding defenses scheming around Curry. Statistics like Box Creation underrate players who aren’t outliers as floor spacers on paper (Shaquille O’Neal), and while Curry doesn’t pass out of traps exceptionally well, his floor-spacing might be the most effective catalyst for a championship-level offense.

    Content Update

To end the list, I’ll give a quick update on the content drought as of late. It’s been 40 days since my last NBA article, and while there hasn’t been the same writing frequency recently, there are more types of content in the works. I’ll likely have some video content rolling out in the near future related to some cognitive phenomena in evaluating players and individual breakdowns of current and historical player seasons. Until then, I hope today’s article was a solid exchange of information on the league’s top offensive players.


  • How Matt Chapman’s Two-Way Play Makes Him an All-Star


Matt Chapman burst onto the scene in 2018 as an All-Star-level player, showing strong abilities as both a hitter and a fielder and finishing seventh or better in AL MVP voting in two consecutive years. As his batting began to stabilize, Chapman remained the anchor of one of the MLB’s best defensive teams and consistently provided value to Oakland that would rank him among the very best third basemen in the game. But as of late, Chapman’s performances have seemed more streaky and less effective. His overall statistical profiles on both sides of the ball have been declining since the 2019 season, so is this truly an indicator of a declining talent, or is Matt Chapman still on track to become an All-League baseball player?

    Hitting

Chapman’s value as a hitter has been a bit puzzling over the last four seasons. During his first full season in 2018, his batting profile spiked at very desirable heights. His 2017 hard-hit rate (EV > 94 MPH) of 43.6% jumped to 53.8%, and he consistently fared well in hitting line drives while limiting ground balls and fly balls. Weighted Runs Created (wRC) estimates a player’s value as a batter using box score statistics in conjunction with run expectancy values for various events; its standardized counterpart (wRC+) pegged Chapman at 139, up from 110 the previous season. In 2019, that value dropped to 126. A season later, in a mere 37 games, Chapman was still above league average at 117. However, in 60 games played in 2021, he’s fallen to 88.
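
    For reference, FanGraphs publishes the standardized formula roughly as follows, where PF is the park factor and the league baseline (lgwRC/PA) excludes pitchers:

    $$\text{wRC+} = \frac{\dfrac{\text{wRAA}}{\text{PA}} + \dfrac{\text{lgR}}{\text{PA}} + \left(\dfrac{\text{lgR}}{\text{PA}} - \text{PF} \times \dfrac{\text{lgR}}{\text{PA}}\right)}{\dfrac{\text{lgwRC}}{\text{PA}}} \times 100$$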

    Baseball can be a streaky sport, with even season-long runs of statistics not being totally indicative of a player’s “true” value, so has Chapman’s offensive skill dropped off in recent years, or is there an element of luck at play?

The immediate eyesore in Chapman’s statistical profile is his batting average, which has dropped at least 17 points in each season since 2018, and this statistic seems to have some type of relationship with his batting average on balls in play (BABIP). Chapman’s .278 season saw 33.8% of batted balls result in hits (a .338 BABIP), a mark which immediately fell to .270 the following season without any major change in his zone swing portfolio.
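
    For readers unfamiliar with the statistic, BABIP strips home runs and strikeouts out of the batting-average denominator:

    $$\text{BABIP} = \frac{H - HR}{AB - K - HR + SF}$$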

    We actually see his tendency to chase low balls decline in the following season, suggesting he almost certainly received the benefit of chance with his batting average in 2018. That means, while we see a significant decline in the raw numbers, his skill level is very likely more stable than these numbers indicate. However, there is one prominent aspect of Chapman’s pitch selection that can explain his shortcomings in the batter’s box as of late: massive changes in run value in certain swinging areas.

There has basically been no change in how Chapman attacks various qualities of pitches, with the pattern of an ever-so-slight increase in overall swing rate evolving over the past four seasons. However, as stated earlier, some of these areas of Chapman’s swings appear to be bleeding value. The approximated run value in certain listed zones has been fluctuating:

    A degree of instability across zones is clear, but the exact degrees vary across zones. Chapman’s estimated run value from pitches in the shadow zone, which envelops the heart of the strike zone while extending slightly beyond it, has been relatively stable over the past four seasons. However, Chapman appears to have lost a considerable amount of impact as a hitter in each of the other zones. The largest yearly drop-off in a zone was between the 2019 and 2020 seasons, during which his value in the chase zone severely declined, but it is worth noting a lot of flattened trends in the last two seasons are in some form byproducts of shorter or incomplete seasons.

An important skill of Chapman’s to recognize is how he provides most of his value as a hitter; and interestingly, it’s not necessarily his ability to hit the ball, but his ability to take pitches. As expected, in zones outside the heart of the strike zone, Chapman usually loses runs for his team when he doesn’t take the pitch. (It’s worth noting Chapman added an estimated 20 runs through swings in the heart of the strike zone, followed by three seasons of 9, 4, and -8 runs, which serve as indicators of the potential “luck” factor in Chapman’s batting output.) However, Chapman’s selectivity with pitches outside the strike zone actually adds a considerable amount of value to his team’s offense.

The aforementioned -8 run value from the pitches Chapman faced in the heart of the strike zone is a compelling figure alongside the previous three seasons, and likely signals some type of change in either Chapman’s hitting tendencies or opposing defenses’ reactions to such hits. While his proportion of batted balls hasn’t changed much, other aspects of his batting portfolio have. Matt Chapman has always been a great hard hitter, with 53% or more of his batted balls leaving the bat at an exit velocity of 95 MPH or higher from 2018 to 2020. But in 2021, that percentage has fallen below the 2017 to 2021 league average of 39%, settling at 36.2%.
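
    Hard-hit rate is simple to compute from batted-ball data; here’s a minimal sketch using a hypothetical table whose `launch_speed` column mirrors Statcast’s naming:

    ```python
    import pandas as pd

    # Hypothetical batted-ball sample; launch_speed is exit velocity in MPH.
    batted = pd.DataFrame({"launch_speed": [101.3, 88.0, 96.7, 79.4, 95.0, 102.9]})

    # Share of batted balls leaving the bat at 95+ MPH (the standard hard-hit cutoff).
    hard_hit_rate = (batted["launch_speed"] >= 95).mean()
    print(f"Hard-hit rate: {hard_hit_rate:.1%}")  # 66.7% on this toy sample
    ```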

As his other hitting tendencies are functionally similar, there seems to be a sneaky good chance a large proportion of Chapman’s hits in the heart of the strike zone are inadvertently resulting in pop-ups. Since these pitches aren’t positioned at angles that provide an inherently greater or lesser chance of the resulting hit being a pop-up, the increased fly ball percentage has a promising likelihood of being a product of poorer luck. Thus, I’m a bit higher on Matt Chapman’s slugging ability than the numbers suggest, and perhaps if all the weird recent circumstances hadn’t come into play, his batting profile would have regressed to values that are more indicative of his true value as a hitter.

    Fielding

    During the past four years, Chapman has been awarded two Gold Gloves as an AL third baseman and two Platinum Gloves, seemingly on the track to become one of baseball’s all-time great fielders at the position. His renowned defensive game is centered around Chapman’s all-world fielding range.

    Chapman is known to strictly play the roles of a left-side infielder, spending heavy innings at third base while seldom moving to shortstop. The above chart shows Chapman’s starting positions from the 2018 to 2021 seasons (through June 6, 2021), displaying a healthy diet of movement and adjustments that would be expected from an everyday third baseman; but the more interesting aspect of his fielding is where he finishes.

    Chapman’s estimated 41 outs above average were surprisingly dispersed across the geometry of the field. He stays within the distinct vicinity of the third-base territory but extends his reach out past the foul lines and a considerable way into the outfield along with his bunt protection and coverage versus shorter batted balls. As explored in his film study, Chapman pairs elite abilities as a fielder with absurd levels of versatility in how he protects such large areas of the field for the Athletics.

    His ability to protect the area of the field extending out to the foul line is the focal point of his fielding, and it makes ground ball hits extremely difficult to convert on for opposing teams. Chapman’s fielding on film exhibits some of the most impressive glove precision in the game today, eliminating the need to provide a lot of value through a quicker recovery time.

Unlike many third basemen, Chapman shows consistent effort in the catch-to-throw transition, effectively blocking the ground ball and squaring his body for the throw to first. His aforementioned recovery time splits into two parts. The first, and shorter, part involves his play on hits that extend his range: the time between his interception of the ball and when he’s back on his feet is among the quickest in the league, alleviating the stress for the period that follows. Chapman is shown to take slightly more time to position himself for the throw than other infielders, but it mitigates some of the wildness that comes with an arm as strong as his.

However, Chapman displays a strong tendency to augment his fielding motion to adapt to a given play. Here, he blocks the ground ball a significant portion to his right and splits the time between recovery and throw roughly equally. This defensive versatility allows Chapman to function at an elite level in many types of run situations. Paired with his incredible arm speed and maintenance, Chapman demonstrates a clear All-League skill in his fielding.

    Chapman adds to his bag of tricks as a ground-ball fielder with the spin move. The fluidity of his hips paired with his glove placement makes him one of the best defensive playmakers in baseball. Here, he turns a relatively difficult ball in play into a safe out through his full rotation upon fielding the ball, clearly shortening the time necessary to make the throw and furthering the idea that Chapman protects a larger area of the field than arguably any other player at his position.

    A strength of Chapman’s that separates him from other third basemen is how well he protects foul territory. He makes similar plays on line drives or looping hangers that would clearly appear to be in unattainable descent, yet the value Chapman adds from converting on these plays saves a considerable number of runs over the course of a season. This play demonstrates his ability to travel into the seating area, but as long as the batted ball is within the horizontal range of his third base positioning, he’s a consistent candidate to make the catch.

There are only two prominent weaknesses in Chapman’s game, both of which are rarely at play. Because he has the tendency to shield the ball with mainly his glove rather than blocking it with his entire body, the angle at which his glove is positioned proves crucial. This means a downturned glove will allow more hard grounders to sneak by, and his torso won’t be in position to halt their movement.

    Chapman has one of the strongest arms among position players in the MLB. During his time with the U.S. national college team, he threw pitches that were tracked at a velocity up to 98 miles per hour and was perennially recognized as having one of the strongest arms in the Minor Leagues before his rookie year. With that, however, comes the greater risk of an overthrow. Chapman appears to actively combat this, meaning more of his throwing errors actually require first basemen to scoop his throw rather than leap for it. Although a testament to Chapman’s defensive mind and awareness, there isn’t an immediate remedy for his short-arm throws.

    Impact Evaluation

    Even everyday watchers of the Athletics during Chapman’s first few seasons were taken aback at how quickly he was able to ascend to elite territory among the MLB’s best players. All-in-one estimates of aggregate value, mostly Wins Above Replacement models, thought very fondly of Chapman during the 2018 to 2019 stretch, which was still likely the peak of his actual abilities as both a batter and a fielder.

Chapman peaked at elite heights, which suggested he had even surpassed the typical All-Star electee and was entering All-League territory, which would put him among the top 16 or so players in the MLB. Because there are still a number of question marks with his offense (is his pop-up spike a function of skill loss or bad luck?) and how much slack his fielding can pick up, I’m not quite ready to name him to my All-MLB teams. But there’s simply too much evidence that suggests he’s a bona fide All-Star, and when he’s healthy, perhaps even more.


  • The 10 Best NBA Impact Metrics


It’s been a long, winding journey to get here, one that started with plans for a video. (That wouldn’t have been enough time to discuss all ten metrics.) Then I recorded a podcast that ended up being just under an hour and a half long. I’m hoping to fall somewhere in the middle here, to provide as much information as possible in a digestible amount of time. Ladies and gentlemen, I present to you (finally…), the 10 best NBA impact metrics.

    Criteria

Ranking impact metrics proved to be no easy task. To limit errors that would come from an arbitrary approach, I chose to run with a very strict set of criteria:

    • Descriptivity

To qualify for the list, the impact metric had to (at the very least) be a measure of the past, or what has already happened. There will be some metrics in the rankings that enlist predictive techniques as well, but as long as past data is used to also measure the past, a metric checks this box. It’s also worth noting most metrics that aren’t strictly predictive don’t employ those techniques for traditionally “predictive” purposes: model overfitting is a consistent problem among pure Plus/Minus metrics, meaning scores can change severely from team to team even if the quality of the player stays the same, and combatting these phenomena actually creates a clearer image of the past.

    • Type of Measurements

Because almost every “good” modern metric is based in some way on Regularized Adjusted Plus/Minus (RAPM), which employs philosophically fair principles, I figured it would be “fairest” to evaluate metrics based on their adherence to the original “ideology” of RAPM: to measure a player’s impact on his team in a makeshift vacuum that places him alongside average teammates while facing similarly average opponents. Because this approach would, in theory, cancel out a lot of the noise that stems from extreme team circumstances to measure the player independent of his teammates, impact metrics are judged on how they align with these ideas. (Impact metrics are distinct measures of value to only one team, but some will be able to move the needle more in overcoming problems like these.)

    • No Sniff Tests

    A lot of NBA critics or fans who aren’t mathematically inclined will often skim leaderboards for a metric to see how it aligns with their personal lists or player rankings. Because this approach places too much stock in prior information, and a lot of the critics may not actually evaluate players well, the sniff test is not really going to help anyone judge a metric. For this list, all specific player placements are set aside to only view the metric from a lens that focuses on how they perform in the aforementioned criteria.

    • Availability

This doesn’t concern how the metric itself is judged, but the last qualification for a metric to appear on this list is its current availability. A metric I reviewed for this list called “Individual Player Value” (IPV) may have held a spot on the list, but there were virtually no opportunities to view the metric’s results from recent seasons. Thus, all metrics on this list were available (not necessarily free to the public but in the public knowledge) through the beginning of the 2021 season. If it isn’t clear, I really wanted to include IPV here.

    • Modern Era Metrics

Not all metrics on this list can extend as far back in the past as others. Most will be available from the 2014 season (when the NBA first started recording widespread tracking data) onward, while some can be calculated as far back as turnovers can be estimated (basically as far back as the institution of the shot clock). Because this really takes a “modern era” approach to evaluating these metrics, only a metric’s performance and value in the 2014 season and beyond will be in consideration during these rankings. So, for example, PIPM’s shaky nature in seasons predating Plus/Minus data is out of the equation here.

    • Disclaimer

People use impact metrics improperly in debates all the time, but the most specific case I want to show can be explained by the following example. Let’s say, hypothetically, LeBron James finished as +7 in LEBRON in 2021 and +8 in BPM. If someone instigates a conversation with the BPM score, the interlocutor may provide the +7 LEBRON as a “better” or “more meaningful” representation of James. This is not a good way to go about comparing scores in impact metrics. Different metrics sway toward various playstyles, team constructions, etc. Even though LEBRON is a “better” metric (this shouldn’t really be a spoiler), it won’t measure every player better than, say, BPM.

    List Structure

    If only this were as simple as only needing one list… Because different metrics treat different sample sizes differently, and the time period during which a metric is taken affects its accuracy relative to other metrics, I’ll split this list into two. The first, which will include the condensed metric profiles, assesses the metrics’ performances across three (full) seasons or more. Three years is the general threshold for stability, a point at which scores aren’t significantly fluctuating. The second list will evaluate metrics in a sample taken within a single season. Since players will often be analyzed using single-season impact metrics, this distinction will hopefully separate some of the metrics’ strengths and weaknesses in various environments.


    10. Luck-adjusted RAPM

    Developer: Ryan Davis

Based on the original works of Regularized Adjusted Plus/Minus (RAPM), Ryan Davis added a prior to his calculations as a “luck adjustment.” It’s not a traditional prior that would, for example, use counting or on-off statistics to bring in outside information we know to hold value. Rather, the adjustment normalizes a player’s three-point shooting to his career average based on the location of the shot. Free-throw shooting is also normalized to career average. I’m particularly low on the function of the prior because, to me, it would make more sense to adjust teammate and opponent performance instead (as is done in luck-adjusted on-off calculations).

    My largest concern is that long-term samples of LA-RAPM will struggle to capture improvements over time. And if someone were to map out a player’s career, it would probably be too smooth, and not in a good way. Because the career averages are used, it might be a bit better in evaluating career-long samples as a whole, but it’s not going to measure individual seasons or even long samples much better than RAPM with no prior. The ideology and the processes behind the metric are impressive, but their applications seem a bit wonky to me.
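
    As a rough sketch of the flavor of adjustment described above (not Davis’s exact procedure; his version buckets attempts by shot location, which is collapsed into a single career percentage here):

    ```python
    def luck_adjust_margin(margin: float, fg3m: int, fg3a: int, career_pct: float) -> float:
        """Swap a player's actual three-point makes for his career-expected makes.

        The stint's point margin is debited (or credited) by the difference
        between what he actually hit and what his career average 'should'
        have produced on the same attempts.
        """
        expected_makes = fg3a * career_pct
        return margin - 3 * (fg3m - expected_makes)

    # A +9.0 stint in which a career 36% shooter went a hot 4-of-6 from deep
    # is walked back to roughly +3.48 once the shooting luck is normalized away.
    print(luck_adjust_margin(9.0, fg3m=4, fg3a=6, career_pct=0.36))
    ```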

    9. NPI RAPM

    Developer: Jeremias Engelmann

The predecessor to every single other metric on this list, non-prior-informed RAPM was Jerry Engelmann’s improvement on Daniel Rosenbaum’s Adjusted Plus/Minus (APM), an estimate of the correlation between a player’s presence and the shift in his team’s Net Rating. Although a promising metric, APM was never truly built to be used in practice because of its inherent noisiness and high-variance solutions to the underlying linear system. Engelmann employed ridge regression, an equal-treatment form of Tikhonov regularization that perturbs the traditional OLS solution with a lambda penalty (nowadays found through cross-validation), suppressing the variance of the APM coefficients and drawing all scores toward average, or net-zero.
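
    A minimal sketch of that computation on synthetic stint data (player columns coded +1/−1 for the two sides of the floor, per-100 margins as the response, possession counts as weights; every number here is invented purely to show the mechanics):

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(0)
    n_stints, n_players = 20_000, 300

    # X[i, p] = +1 if player p was on the home side during stint i, -1 if on
    # the away side, 0 if off the floor; y[i] = home margin per 100 possessions.
    X = rng.choice([-1.0, 0.0, 1.0], size=(n_stints, n_players), p=[0.015, 0.97, 0.015])
    y = rng.normal(0.0, 12.0, n_stints)
    w = rng.integers(1, 30, n_stints)  # possessions in each stint, used as weights

    # Ridge regression shrinks all coefficients toward zero (the average player),
    # with the lambda penalty chosen by cross-validation as described above.
    model = RidgeCV(alphas=[500.0, 1000.0, 2000.0, 4000.0])
    model.fit(X, y, sample_weight=w)
    rapm = model.coef_  # each player's estimated impact per 100 possessions
    ```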

A lot of great analytical minds will say long-term RAPM is the cream of the crop of impact metrics. However, as was the case with APM, it’s still unavoidably noisy in practice. And since players are treated entirely as dummy variables in the RAPM calculations, devoid of any causal variables, my confidence in the accuracy of the metric is lower than with others. RAPM is built to provide strong correlations between the players and their teams, but because the lack of any outside information creates a greater level of uncertainty regarding RAPM’s accuracy, I’m going to rank it near the back end of this list. However, I have it higher than Davis’s luck-adjusted version for the aforementioned reasons relating to career mapping.

    8. Basketball-Reference Box Plus/Minus

    Developer: Daniel Myers

The signature metric of Basketball-Reference and probably the most popular Plus/Minus metric on the market, Daniel Myers’s BPM 2.0 is arguably the most impressive statistical model on this list. There are some philosophical qualms I have with the metric, which I’ll discuss later. BPM continues Myers’s trademark of dividing credit for the team’s success across the players on the team. However, this time, he updated the metric to include a player’s offensive role and position on his team to add context to the environment in which a player accrued his box score stats. This means assists are worth more for centers, blocks are worth more for point guards, etc.

    BPM incorporates a force-fit, meaning the weighted sum of BPM scores for a team’s players will equal the team’s seasonal Net Rating (adjusted for strength of schedule). However, a team’s NRtg/A uses a “trailing adjustment,” which adds a small boost to good teams and a downgrade for poor teams based on how teams often perform slightly better when they are trailing in the game. The aforementioned gripes are mainly based on how BPM treats offensive roles. The metric will sometimes treat increments in offensive load as actual improvements, something we know isn’t always true. There are also some questions I have on the fairness of measuring offensive roles relative to the other players on the team.
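
    The force-fit itself is mechanically simple; here’s a hedged sketch of one way such a constraint can be imposed (BPM’s actual implementation differs in details that aren’t reproduced here):

    ```python
    import numpy as np

    def force_fit(raw_scores: np.ndarray, minutes: np.ndarray, team_rating: float) -> np.ndarray:
        """Shift raw per-player scores so their minutes-weighted sum hits the team rating.

        Five players share the floor at all times, so the minutes-weighted sum of
        player values is multiplied by 5 before comparing to the team's Net Rating.
        """
        share = minutes / minutes.sum()                  # each player's slice of team minutes
        implied = 5 * (share * raw_scores).sum()         # team rating implied by raw scores
        return raw_scores + (team_rating - implied) / 5  # constant shift closes the gap
    ```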

    7. Backpicks Box Plus/Minus

    Developer: Ben Taylor

    I’ve gone back-and-forth between the two major Box Plus/Minus models for some time now, but after learning of some new calculation details from the creator of the metric himself, I’m much more comfortable in leaning toward Ben Taylor’s model. He doesn’t reveal too much information (in fact, not very much of anything at all), even to the subscribers of his website, but he was willing to share a few extra details: BPM uses two Taylor-made (double entendre?) stats: Box Creation and Passer Rating, estimates of the number of shots a player creates for teammates every 100 possessions and a player’s passing ability on a scale of one to ten. This is a very significant upgrade over assists in the metric’s playmaking components and certainly moves the needle in overcoming team-specific phenomena that don’t represent players fairly.

Backpicks BPM also trains its data on more suitable responses, something I didn’t mention in the Basketball-Reference profile. Myers’s model uses five-year RAPM runs, notably decreasing the metric’s ability to measure stronger players. Conversely, Taylor’s two-to-three-year runs include stronger levels of play in the training data, meaning All-NBA-, MVP-level, and beyond-caliber players are better represented. Teammate data is also toyed with differently. Rather than measuring a player strictly within the confines of his roster, the Backpicks model makes a clear attempt to neutralize an environment. To put this into perspective, Myers’s model thought of Russell Westbrook as a +7.8 player in 2016 (with Durant) and a +11.1 player in 2017. Taylor’s model saw Westbrook as a +7 player in both 2016 and 2017.

    6. Player Impact Plus/Minus

    Developer: Jacob Goldstein

As was the case with me when I was first learning about impact metrics, the introductory stages of basketball data will often lead people to believe PIPM is arguably the best one-number metric in basketball. It combines the box score with on-off data and has very strong correlative powers relative to its training data. But when I started to look under the hood a bit more, it was clear there were more issues than immediately met the eye. Compared to other box metrics, PIPM’s box component is comparatively weak: it uses offense to measure defense, and vice versa, and it doesn’t account for any positional factors. Its response variable is also probably the most problematic of any metric on this list. More on that shortly.

Recently, I’ve been leaning away from on-off ratings. They’re inherently noisy, perhaps even more so than RAPM, easily influenced by lineup combinations and minute staggering, which can make an MVP-level player look like a rotational piece, and vice versa. The luck adjustment does cancel out some noisiness, but I’m not sure it’s enough to overcome the overall deficiencies of on-off data. PIPM is also based on one fifteen-year sample of RAPM, meaning the high R^2 values are significantly inflated. Again, this means very good players won’t be well-represented by PIPM. This excerpt may have sounded more critical than anything, but the more I explore PIPM, the more I’m met with confounders that weaken my view of it. Perhaps the box-only metrics are slightly better, but I’ll give PIPM the benefit of the doubt in the long term.

    5. Augmented Plus/Minus

    Developer: Ben Taylor

Augmented Plus/Minus (AuPM) similarly functions as a box score / on-off hybrid. It incorporates the Backpicks BPM directly into the equation, also roping in raw Plus/Minus data such as On-Court Plus/Minus and net on-off (On-Off Plus/Minus). It includes a teammate interaction term that measures the player’s Plus/Minus portfolio relative to other high-minute teammates, and the 2.0 version added blocks and defensive rebounds per 48 minutes. There’s no direct explanation as to why those two variables were included; perhaps they improved the regression results a bit, despite likely introducing a new form of bias.

Pertaining to the AuPM vs. PIPM debate, it should be abundantly clear that AuPM has the stronger box component. And while PIPM bests its opponent in the on-off department, the inclusion of shorter RAPM stints in the regression for AuPM means more players will be measured more accurately. So, despite arguably weaker explanatory variables, the treatment of the variables leans heavily in favor of Augmented Plus/Minus.

    4. RAPTOR

    Developers: Jay Boice, Neil Paine, and Nate Silver

The Robust Algorithm using Player Tracking and On-Off Ratings (RAPTOR) metric is the highest-ranked hybrid metric, meaning every metric higher uses RAPM calculations in its series of calculations, not just as a response for the regression. RAPTOR uses a complex series of box score and tracking data paired with regressed on-off ratings that consider the performances of the teammates alongside the player and then the teammates of those teammates. The regression amounts to an approximation of one six-year sample of NPI RAPM. My high thoughts of it may seem inconsistent with the contents of this list. However, one major theme has made itself clear throughout this research: tracking data is the future of impact metrics.

Despite a “weaker” response variable, RAPTOR performs excellently alongside other major one-number metrics. During Taylor Snarr’s retrodiction testing of some of these metrics, which involved estimating a team’s schedule-adjusted point differential (SRS) with its players’ scores in the metrics from previous seasons (all rookies were assigned -2), RAPTOR was outperformed by only two metrics, both prior-informed RAPM models. This is a good sign RAPTOR is assigning credit to the right players while also taking advantage of the most descriptive types of data in the modern NBA.
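
    To make the testing procedure concrete, here’s a rough sketch of how such a retrodiction score might be computed (the data structures and the 5x minutes weighting are my assumptions; Snarr’s actual pipeline is more involved):

    ```python
    import numpy as np

    def retrodiction_mae(prior_scores: dict, rosters: dict, actual_srs: dict,
                         rookie_value: float = -2.0) -> float:
        """Mean absolute error between projected and actual team SRS.

        prior_scores: player -> impact score from the *previous* season.
        rosters: team -> {player: share of this season's team minutes}.
        actual_srs: team -> this season's schedule-adjusted point differential.
        Rookies (absent from prior_scores) are assigned a flat -2, per Snarr.
        """
        errors = []
        for team, srs in actual_srs.items():
            projected = 5 * sum(share * prior_scores.get(player, rookie_value)
                                for player, share in rosters[team].items())
            errors.append(abs(projected - srs))
        return float(np.mean(errors))
    ```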

    3. Real Plus/Minus

    Developers: Jeremias Engelmann and Steve Ilardi

Real Plus/Minus (RPM) is ESPN‘s signature one-number metric, touted for its combination of descriptive and predictive power. According to co-creator Steve Ilardi, RPM uses the standard series of RAPM calculations while adding a box prior. This likely means that, instead of regressing all the coefficients toward zero as explained in the NPI RAPM segment, Engelmann and Ilardi built a long-term RAPM approximation, which then acts as the data points the player scores are regressed toward. Value changes and visual instability aside, RPM is among the premier group of metrics in their ability to divvy the right amounts of credit to players, having finished with the second-lowest SRS error rate in Snarr’s retrodiction testing.

    2. LEBRON

    Developers: Krishna Narsu and “Tim” (pseudonym?)

    BBall Index‘s shiny new product, as new as the latest NBA offseason, the Luck-adjusted Player Estimate using a Box prior Regularized On-Off (LEBRON) metric makes a tangible case as the best metric on this list. Similar to RPM, it combines the raw power of RAPM with the explanatory power of the box score. LEBRON’s box prior was based on the findings from PIPM, but upgrades the strength of the model through the incorporation of offensive roles, treating different stats differently based on how certain playstyles will or won’t accrue the stats. Three-point and free-throw shooting is also luck-adjusted in a similar fashion to PIPM’s on-off ratings to cancel out noise.

    Part of what makes LEBRON so valuable in a single-season context is its padding techniques, which involve altering a player’s box score profile based on his offensive role. For example, if Andre Drummond shot 45% from three during his first ten games, LEBRON’s box prior will take his offensive role and regress his efficiency downward based on how he is “expected” to stabilize. This makes LEBRON especially useful in evaluating players during shorter stints, and while these adjustments aren’t perfect, the metric’s treatment of box score stats probably uses the best methods of any metric on this list.
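
    Padding of this general flavor is usually a weighted blend of the observed rate and a prior, where the prior acts like a batch of “phantom attempts.” A hedged sketch follows (the exact form and weights of LEBRON’s padding aren’t public, so these numbers are illustrative):

    ```python
    def padded_pct(makes: int, attempts: int, prior_pct: float, prior_weight: float = 250.0) -> float:
        """Blend an observed shooting rate toward a role-based prior.

        prior_weight behaves like phantom attempts: tiny real samples stay near
        the prior, while large samples converge to the observed percentage.
        """
        return (makes + prior_pct * prior_weight) / (attempts + prior_weight)

    # A hypothetical hot streak: 9-of-20 (45%) from three against a 30% role
    # prior is regressed to ~31%, rather than being taken at face value.
    print(round(padded_pct(9, 20, prior_pct=0.30), 3))
    ```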

    1. Estimated Plus/Minus

    Developer: Taylor Snarr

I don’t want to say Estimated Plus/Minus (EPM) runs away with the top spot on this list, because it doesn’t, but it’s clear to me that it’s the best widespread impact metric on the market today. Roughly as young as LEBRON, EPM is the product of data scientist Taylor Snarr. As noted by RPM co-creator Ilardi, EPM is similar to RPM in that it uses RAPM calculations regressed toward the box score, but it also includes tracking data. The tracking data, as has been shown with RAPTOR, makes all the difference here. During his retrodiction testing, Snarr constructed linear regression models to estimate the effect of lineup continuity on each metric’s performance.

To the envy of every other metric, EPM’s reliance on lineup continuity was estimated to be roughly half that of the runner-up metric, RPM. It may not sound like a crucial piece of information, but given EPM’s model strength and some of these metrics’ largest problems, EPM performs fantastically. It’s also worth mentioning EPM is predictive as well, having posted the lowest SRS error in the retrodiction testing throughout the examined seasons. I allowed these details to simmer in my head for some time in case I was having some type of knee-jerk reaction to new information, but the points still stand tall and clear: EPM is currently unmatched.


    For the single-year list, I only made two significant changes. Because shooting is much noisier in smaller samples, such as a season or less, I’ll give the edge to Davis’s LA-RAPM over NPI RAPM. Additionally, PIPM’s response issues drop it down a peg for me in the one-year context. I did consider putting LEBRON over EPM due to its padding (EPM doesn’t employ any stabilization methods to my knowledge), but the tracking data leading to greater team independence is too large an advantage for me to overlook.

    Single-Season Rankings

    1. Estimated Plus/Minus
    2. LEBRON
    3. Real Plus/Minus
    4. RAPTOR
    5. Augmented Plus/Minus
    6. Backpicks BPM
    7. Basketball-Reference BPM
    8. Player Impact Plus/Minus
    9. Luck-adjusted RAPM
    10. NPI RAPM

There was also some noise surrounding DARKO’s Daily Plus/Minus ranking on this list. I did evaluate the metric for this list despite its breaking of the criteria, simply to include how it stacks up against the other metrics in model strength. Based on the statistical model, I would rank it fifth on this list, bumping AuPM down a spot and slotting DPM right behind RAPTOR.

To my surprise, some people saw DPM as the premier impact metric on the market today. Some questioning led back to DPM developer Kostya Medvedovsky’s game-level retrodiction testing during the tracking era, which saw DPM lead all metrics. However, DPM is specifically designed to act as a predictive metric, giving it an unjustified advantage in these types of settings. Based on how I “expect” it would perform in descriptive duties given the construction of its model, I don’t really see an argument for it cracking the inner circle (the Final Four).


    Thanks for reading everyone! Leave your thoughts in the comments on these metrics. Which is your favorite? And which one do you believe is the best?


  • The Best Regular-Season Teams of 2021 Per Oliver’s Four Factors


(© The Ringer)

About a year ago, I trained a model on Dean Oliver’s four factors (AKA the outcomes of a possession) against offensive and defensive ratings from 1974 to 2019 to estimate the per-100 performance of a team. The results were extremely promising (R^2 of 0.99 with no heteroscedasticity), so it’s fair to say most teams will be well-represented by this method. I’ve decided to return to this system at the closure of the 2021 regular season to look at how the model viewed teams this season!

    The Method

As stated earlier, Oliver’s four factors are extremely predictive of how well a team performs in a given season. (This is expected because the outcome of a possession greatly influences how many points are scored and allowed.) The 2021 results are out-of-sample; the training data covered almost every season since the initial recording of the four-factor statistics, with a smaller sample validating the results. The model itself passed the standard criteria for use as an OLS regression model, with the final product being split into offensive and defensive components.
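
    For the shape of the fit, here’s a minimal sketch of the offensive component on synthetic data (the real model was trained on actual 1974–2019 team seasons; the coefficients below are invented purely to generate a toy response):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    # Columns: eFG%, TOV%, ORB%, FT/FGA -- Oliver's four offensive factors.
    X = rng.normal([0.52, 0.13, 0.23, 0.20], [0.02, 0.01, 0.02, 0.02], size=(1300, 4))

    # Toy offensive ratings driven mostly by efficiency and turnovers.
    y = 110 + 400 * (X[:, 0] - 0.52) - 250 * (X[:, 1] - 0.13) + rng.normal(0, 1, 1300)

    model = LinearRegression().fit(X, y)  # plain OLS, as described above
    print(model.coef_)                    # per-factor weights
    print(model.score(X, y))              # in-sample R^2
    ```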

    The Results

    The 10 Best Teams:

    1. Utah Jazz (+7.5 Net)
    2. Philadelphia 76ers (+5.8 Net)
    3. Milwaukee Bucks (+5.5 Net)
    4. LA Clippers (+5.2 Net)
    5. Brooklyn Nets (+5.1 Net)
    6. Phoenix Suns (+5.0 Net)
    7. Denver Nuggets (+3.6 Net)
    8. Los Angeles Lakers (+2.9 Net)
    9. Dallas Mavericks (+2.4 Net)
    10. Atlanta Hawks (+2.3 Net)

    Unsurprisingly, the Utah Jazz remain the best team in this model, but with a significant error between their actual Net Rating and predicted Net Rating, straying 1.8 points away from the real mark of +9.3 Net. The Sixers sneak up from fifth to second here, with a +0.2 point overestimation ranking them as the Eastern Conference’s regular-season champion.

    The 10 Best Offenses:

    1. Brooklyn Nets (117.9 ORtg)
    2. Milwaukee Bucks (116.6 ORtg)
    3. Portland Trail Blazers (116.5 ORtg)
    4. Utah Jazz (116.2 ORtg)
    5. LA Clippers (116.1 ORtg)
    6. Phoenix Suns (116.1 ORtg)
    7. Denver Nuggets (115.7 ORtg)
    8. Dallas Mavericks (115.1 ORtg)
    9. Atlanta Hawks (114.8 ORtg)
    10. New Orleans Pelicans (113.6 ORtg)

    This mostly resembles the actual offensive rating leaderboard, with another Eastern Conference stand-out popping up to #2 and teams like the Pelicans narrowly stepping into the top-10 over the Philadelphia 76ers and their 113.1 predicted offensive rating.

    The 10 Best Defenses:

    1. Los Angeles Lakers (107.2 DRtg)
    2. Philadelphia 76ers (107.3 DRtg)
    3. New York Knicks (108.3 DRtg)
    4. Utah Jazz (108.7 DRtg)
    5. Golden State Warriors (109.9 DRtg)
    6. Miami Heat (110.2 DRtg)
    7. Memphis Grizzlies (110.4 DRtg)
    8. LA Clippers (110.9 DRtg)
    9. Milwaukee Bucks (111.1 DRtg)
    10. Phoenix Suns (111.1 DRtg)

There are no real surprises in the defensive component, which posts the same ten teams in a slightly different order. As with its offensive half, the model found floor efficiency and turnover rates were far more predictive of a team’s success than rebounding rate or FT/FGA, so the results may be a tad biased toward teams who contest well and induce a lot of turnovers.

    Discussion

Because this used an ordinary least squares approach, each of the four factors was valued on how it correlated to teams’ actual Net Ratings. Using the coefficients of each factor, we can estimate each statistic’s relative importance (one way to compute shares like these is sketched after the lists below):

    to Offensive Rating

    • Efficiency (68%)
    • Turnovers (28%)
    • Rebounding (0%)
    • FT/FGA (1%)

    to Defensive Rating

    • Efficiency (74%)
    • Turnovers (19%)
    • Rebounding (0%)
    • FT/FGA (1%)
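
    The article’s exact attribution method isn’t spelled out above; one standard way to convert a fitted OLS model into shares like these is to scale each coefficient by its factor’s spread and normalize. The coefficients and standard deviations below are hypothetical:

    ```python
    import numpy as np

    def importance_shares(coefs: np.ndarray, stds: np.ndarray) -> np.ndarray:
        """Rough relative importance: |coefficient| x factor spread, normalized to 100%."""
        raw = np.abs(coefs) * stds
        return 100 * raw / raw.sum()

    # Hypothetical values, ordered eFG%, TOV%, ORB%, FT/FGA; only the mechanics matter.
    print(importance_shares(np.array([400.0, -250.0, 5.0, 20.0]),
                            np.array([0.02, 0.01, 0.02, 0.02])))
    # -> roughly [72.7, 22.7, 0.9, 3.6] percent
    ```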

Efficiency from the floor is the “most important” statistic, and this passes our intuitive checks, as field goals are the most frequent offensive action to end a possession in the sport. Turnovers happen about 10-15 times a game, so their presence is made clear despite a lesser occurrence. Rebounding is interesting. Because offensive rebounding is significantly less common than defensive rebounding, its importance won’t be especially high, but this model doesn’t seem to think much of it at all. Because events like rebounding rates and free-throw percentages are more clustered and arguably have a lesser impact on the eventual outcome of a possession, they’re far less important relative to the other outcomes.

    I posted an interactive leaderboard for teams here, which covers each team and goes into more detail on error rates and which teams were over or under-represented.


  • 5 NBA Thoughts [#2]


(© The Ringer)

Inspired by the likes of Sports Illustrated, SB Nation, and Thinking Basketball among countless others, I’m continuing a “5 Thoughts” series here. Watching and studying the NBA means there’s a constant jumble of basketball thoughts in my head. However, I rarely touch on them in detailed articles or videos, so that’s where this series comes in: a rough, sporadic overview of five thoughts on the NBA!

    1. Bradley Beal and the OPOY

During a live chat discussion on Halftime Hoops, a surging topic was whether the league should implement an Offensive Player of the Year (OPOY) Award to counterbalance the defense-only counterpart. One contribution to the topic was that the OPOY Award would too closely resemble the scoring title leaderboard: players like Bradley Beal and Stephen Curry will inevitably win because of their scoring, so why bother creating a new award to recognize a pool we’re already aware of?

The voter tendencies could support this claim; in other words, the patterns of the voters suggest scoring is the largest factor in offensive evaluation, so in a practical sense, the opinion makes sense. To me, the more pressing question here is what that says about voter criteria and how it influences the public eye in general. There’s always been a misstep in how offense is treated: teams win games by scoring more points, so the highest-volume scorers are presumed to be the best offensive players. The poor connection here is the exclusion of other skills; namely, shot creation, passing, screening, cutting, floor-spacing, and all the crucial qualities that go into building a high-level offensive team.

Bradley Beal isn’t in the conversation as a top-5 offensive player in the league to me, and while that’s mainly due to a lack of eliteness in multiple areas that would bolster his overall value on offense, his scoring doesn’t pop out as much to me as it does to others. Efficiency is generally treated as a luxury piece, the idea being that volume scorers with “good enough” efficiency (within a reasonable range of league average) shouldn’t be penalized. However, an argument could be made to flip the script entirely: efficiency may be equally, if not more, important than volume.

    Beal is currently fourth in the league in scoring rate, clocking in at 30.3 points per 75 possessions. (That also brings up the issue with the scoring title being assigned to the points-per-game leader: it erroneously assigns credit to high-minute players on fast-paced teams.) Thus, there are too many hindrances in placing a heavy amount of stock in how many points a player scores each game. System design, quality of teammates, the plethora of other skills that amplify a team’s offense: all evidence points toward an OPOY Award being an entirely separate entity from the scoring title.
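
    Per-75 scoring is a simple normalization that strips out both minutes and team pace; a minimal sketch (the box-score line below is hypothetical):

    ```python
    def pts_per_75(points_pg: float, minutes_pg: float, team_pace: float) -> float:
        """Points per 75 possessions played, from per-game averages.

        team_pace is possessions per 48 minutes, so minutes * pace / 48
        approximates the team possessions a player is on the floor for.
        """
        possessions_played = minutes_pg * team_pace / 48
        return 75 * points_pg / possessions_played

    # A hypothetical 31.3 PPG scorer playing 36 minutes on a team with a pace of 103:
    print(round(pts_per_75(31.3, 36.0, 103.0), 1))  # ~30.4 per 75
    ```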

    2. Making teammates better is a myth

Raise your hand if you’ve ever seen someone argue on behalf of one player over another because “he makes his teammates better.” (If you aren’t raising your hand right now, I’d be very surprised.) High-quality facilitators are often treated in this fashion, and the cause for belief isn’t way off track: when elite shot creators and passers are setting up teammates for open field goals that would’ve otherwise been contested, the teammates’ shooting percentages will increase. Thus, the illusion that a player can make his teammates better was born.

The problem with this idea is that it treats a player’s “goodness” as dependent on who’s on the floor with him. If an elite spot-up shooter is paired with an all-time gravitational force like Stephen Curry, the openness frequency of his shots will likely increase. Conversely, if he were playing alongside someone antithetical to this style like Elfrid Payton, his shooting efficiency would probably go down. The big question here is whether the increase in scoring efficacy should be attributed to the shooter or the playmaker. To me, the answer is fairly simple given how complex most of these problems can be.

The teammate’s shooting skill, in a vacuum, is unaffected by his teammates. Namely, regardless of lineup combinations and synergies, the true skill of his shooting remains constant. A three-point specialist doesn’t go from a 40% to a 45% shooter overnight because he’s going to start playing with better teammates. Therefore, a “better” interpretation of these shooting improvements is that the playmaker is bettering the team’s shot selection (per-possession efficiency), but he’s not bettering the teammate’s shooting skill. This case applies to every other skill that creates a similar illusion.

    3. Rim protection vs. perimeter defense

    After spending a good amount of time in the aforementioned Halftime live chat rooms, it’s become increasingly clear that a large chunk of NBA fanatics may solely focus on man defense when evaluating this side of the ball. This directly parallels the strong focus on isolation scoring when the general public evaluates offensive skill. Just as we discussed in the first thought, these types of approaches can be extremely limiting and leave out a lot of information and skills that go toward building strong team defenses. This creates an ongoing conflict between weighing the value of rim protection and general “perimeter” defense, so which one is better?

A recurring theme in basketball analysis’s anti-education culture is the tendency to try and solve complex problems with the simplest solutions possible. (This led to the original reliance on the box-and-record MVP approach.) So when defensive skills outside of isolation defense and shot-blocking are brought up, they might be met with some reluctance. The opposite approach is used here: the most important piece of the premise is that rim protection isn’t just limited to shot-blocking ability, just as perimeter defense isn’t limited to “locking up” an opponent’s best perimeter player. The cause-and-effect relationships these skills have on the totality of defensive possessions can be much stronger than meets the eye.

Strong rim protectors in the paint can drastically alter game-planning against the opposition: they can force an increased focus on a perimeter attack and deter a large number of field-goal attempts at the rim, thus taking away the sport’s (generally) most efficient shots. Similarly, perimeter defense doesn’t just refer to a player’s one-on-one defense. How do they guard the pick-and-roll? How do they maneuver around screens (and perhaps more importantly, ball screens)? Do they make strong rotations and anticipate ball movement well, or is there some hesitation? (These are just a few examples. As stated earlier, the sport is much too complex to be solved through an ensemble of steals and blocks.)

    The simple answer is that it depends. A lot of people will say rim protection is generally more valuable, and in a broader sense, I would agree. The (generally) most efficient shots are taken in the paint, so contesting, deflecting, or deterring these shots would certainly seem to have a greater positive effect on the team’s overall defensive performance. However, as the “truer” answer suggests, most of that value is dependent on the context of the game. During the Playoffs, we have seen a league-wide trend of non-versatile centers losing a certain degree of impact against certain postseason opponents. (I’ve discussed this a lot in recent weeks.) Versus great-shooting teams with a strong midrange arsenal, perimeter defenders will likely see an increase in value: defending some of the league’s best floor-raising offense (open midrange shots from elite midrange shooters) puts a cap on the opposition’s offensive ceiling.

    A total discussion wouldn’t even come close to ending here, but with this brief overview paired with other in-depth analysis, there are pretty clear indicators that rim protection will often be more valuable than perimeter defense. This explains why players like Rudy Gobert and Myles Turner will generally be viewed as stronger positive-impact defenders than ones like Ben Simmons and Marcus Smart. However, with the continuing rise in outside shooting efficacy and offensive development, defensive versatility spanning multiple areas of the court may become more valuable than either.

    4. Building an RAPM prior

    Regularized Adjusted Plus/Minus is an all-in-one metric that estimates the effect of a player’s presence on his team’s point differential. I’ve written about the calculations behind RAPM before, but a topic I’ve never covered in-depth is how “priors” are constructed. For those unfamiliar with these processes, RAPM uses a ridge regression (a form of Tikhonov regularization) that reduces the variability among possible APM values, regressing scores toward average to minimize the higher variance in “regular” Adjusted Plus/Minus. A “prior” is used in place of an average (net-zero) to pair with pure APM calculations.
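
    As a rough illustration of where the prior enters the math (a minimal sketch, not any metric’s production code; the stint matrix and lambda value are placeholders), ridge regression with a prior p solves (XᵀX + λI)b = Xᵀy + λp, so p = 0 reproduces plain RAPM’s shrinkage toward average, while an informed p shrinks scores toward the prior instead:

    ```python
    import numpy as np

    def rapm(X, y, prior=None, lam=2000.0):
        """Minimal prior-informed RAPM sketch.

        X     : (stints, players) design matrix; +1 if a player is on the
                floor for the offense, -1 for the defense, 0 otherwise.
        y     : point margin per 100 possessions for each stint.
        prior : per-player prior scores (e.g., a box-score estimate);
                None reproduces plain RAPM, which shrinks toward zero.
        lam   : ridge penalty; larger values mean heavier shrinkage.
        """
        n = X.shape[1]
        p = np.zeros(n) if prior is None else np.asarray(prior)
        # Normal equations of ||Xb - y||^2 + lam * ||b - p||^2
        return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y + lam * p)
    ```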

    Priors are often referred to as “Bayesian priors” because they align with the philosophy in Bayesian statistics that encourages pairing the approximations of unknown parameters with outside information known to hold value in the holistic solutions of these problems. The basketball equivalent would usually see APM regress toward the box score and past results, among other sources. Current examples include Real Plus/Minus, which regresses toward the box score; BBall-Index‘s LEBRON, which regresses toward PIPM’s box component; and Estimated Plus/Minus, which regresses toward an RAPM estimator that uses the box score and tracking data (which the NBA started to track league-wide in the 2013-14 season).

    The composition of these priors has stood out to me, and during my research for an upcoming video podcast on ranking impact metrics, it became apparent that the use of on-off ratings was often a significant hindrance to a metric’s quality. It’s long been clear the extreme variability of on-off data can limit a metric’s descriptive power in smaller samples because it doesn’t adjust for hot streaks, lineup combinations, and minute staggering. These confounding variables can make an MVP-level player appear to be nothing more than a rotational piece, and vice versa. Because there’s simply too much that on-off data doesn’t account for, even as larger samples smooth the curves, its usage among modern impact metrics is probably on a downward trajectory.

    Conversely, the more promising form of data seems to be tracking data. Although most of these stats weren’t tracked prior to the 2014 season, there’s a good amount of non-box data from resources like PBPStats, which could help in creating a prior-informed RAPM model that extends back to the beginning of the twenty-first century. These types of stats are estimated to have the least dependence on team circumstances among the “big three” that includes box scores and on-off ratings, which gives tracking data an inherent advantage. During his retrodiction testing of several high-quality NBA impact metrics, Taylor Snarr (creator of EPM) found evidence of this pattern, and although it hasn’t quite hit the mainstream yet, I expect the next generation of advanced stats will be powered by tracking data.

    5. The statistical evaluation of defense

    Compared to the offensive side of the game, defense has always been a bit of a black box. It’s much harder to evaluate and even harder to quantify. (When the DPOY Award was instituted in the early 1980s, Magic Johnson would receive a good share of votes.) This is an even larger problem given the limitations of the current box score, which only extends to defensive rebounds, steals, blocks, and personal fouls. The majority of attentive NBA watchers know the box score doesn’t track a lot of defensive information, so what statistical methods can be used to evaluate defense?

    One of the up-and-coming statistics in recent years has been opponent field-goal percentage: the proportion of defended field-goal attempts that were made. These marks are seen as especially useful by a community that mostly considers isolation defense, but as a measure of the quality of contests alone, these stats are often misused. Some will say a defender played exceptionally well against LeBron James because, in the two’s interactions, James made two of seven shots. (“Therefore, Andre Iguodala is the Finals MVP,” if you catch my drift.) Not only are these measures not a definitive representation of how well a defender contested a player’s shots (location and team positioning are big factors that affect these stats), but small samples are perennially mistreated.

    Anyone with some familiarity with statistical analysis knows the variability of a sampling distribution very often goes down as the sample size increases. (The common benchmark is n = 30 to satisfy a “Normal” condition for some types of stat tests.) Thus, when Kevin Durant shoots 21% on 12 attempts in a game versus Alex Caruso, it doesn’t mean Alex Caruso is therefore Kevin Durant’s kryptonite. The variability of opponent field-goal percentage, the lack of context surrounding the conditions of a field-goal attempt, and the infrequent instances in which they occur create a statistic that’s easy to abuse. If there should be one major takeaway from this post, it would be to KNOW AND UNDERSTAND YOUR DATA.
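
    A quick simulation shows just how wild single-game defended samples are (a sketch assuming a true 50% shooter and 12 defended attempts per game, both invented figures):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # A true 50% shooter taking 12 defended attempts per game:
    game_pct = rng.binomial(n=12, p=0.50, size=100_000) / 12

    print(round(game_pct.std(), 3))            # ~0.144: one-game FG% swings wildly
    print(round((game_pct <= 0.25).mean(), 3)) # ~0.073: he looks "locked up" in ~7% of games
    ```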

    Unfortunately, a lot of lower-variance and more descriptive defensive stats (defended FG% doesn’t confirm a cause-and-effect relationship) aren’t readily available to the public, but it’s worth noting some of the advancements in the subject. More common statistics include deflections, charges drawn, and total offensive fouls drawn (the latter two, paired with steals, create total forced turnovers). However, the most exemplary case of defensive measuring I’ve seen is through BBall-Index, whose data package comes with a multitude of high-quality defensive statistics, including box-out rates, rebounding “success rate,” loose-ball recovery rate, rim deterrence, and matchup data, among countless others. Seriously, if you’re looking to take your defensive statistical analysis up a notch, it’s the real deal.

    Impact metrics will always provide their two cents with their one-number estimates, and for the most part, they’ll be pretty good ballparking values. But intensive film studies (the building blocks for evaluating defense) may need to be paired with descriptive, intelligible defensive statistics to enhance the quality of analysis. This doesn’t make defense any easier to evaluate than offense, but relative to the alternative (steals + blocks), it’s a whole lot better.

    How important are skills like volume scoring and isolation defense? Are the right players credited with individual statistical increases? And how are the futures of RAPM and defensive measurement shaping up right now? Leave your thoughts in the comments.


  • 5 NBA Thoughts [#1]

    5 NBA Thoughts [#1]

    (via The Ringer)

    Inspired by the likes of Sports Illustrated, SB Nation, and Thinking Basketball, among countless others, I’m introducing here a “5 Thoughts” series. Watching and studying the NBA means there’s a constant jumble of basketball thoughts in my head. However, I rarely touch on them in detailed articles or videos, so that’s where this series comes in: a rough, sporadic overview of five thoughts on the NBA!

    1. Curry’s offense vs. Jokic’s offense

    At this point, it’s safe to say the league’s two best offensive players are Steph Curry and Nikola Jokic. But between the two, who’s better? The Playoffs will inevitably shed more light on this (and our knowledge of their past postseason performances is a positive indicator for Jokic), but with Jokic’s ascension, we may have enough to decipher a good amount of their skillsets.

    Curry functions in a high-movement offense that largely relies on finding him an open shot. That’s done primarily through Curry’s dynamic movement off the ball, darting through screens and coming up on pin-downs, eventually culminating in an open jumper behind a ball screen. This style of offensive play may be considered a bit high-maintenance, but the potential it has to unclog other areas on the court (namely, the paint) is astronomical. Curry’s gravity was the main catalyst for a lot of Kevin Durant’s scoring bursts during the Playoffs, and that makes Curry one of the most adaptable offensive stars in the history of the game.

    Jokic is a passing savant with an off-ball repertoire of his own. His elbow game is perfectly designed for another style of high-activity offense. Rather than a long series of screening actions, Denver uses an elaborate design of cutting to unclog either the paint or the corners, AKA prime real estate for Jokic’s passes and assists. The Nuggets’ offensive attack is so deadly in the half-court because of how distracting their cutters are. Defenses either collapse into the paint (leaving the man in the corner open) or maintain space (leaving the paint open). Pair that with Jokic’s midrange shooting, and you have one of the most electrifying team offenses in the game today.

    So which offensive game is better? Mentally projecting the Playoffs, I expect Jokic will be harder to game-plan against with his ability to pass out of traps inside the three-point line. (And those outlet passes are so phenomenal that half-court offense could feel like a contingency plan at times.) I don’t see Curry as poor in this regard, but his height definitely caps his ceiling in these types of possessions. I think they both fit alongside other star talents really, really well, though I see Curry as the more “scalable” player considering Jokic’s defensive shortcomings and the increased difficulty of building a championship-level defense around a neutral-impact center.

    It’ll be extremely difficult to choose between Curry’s historical scoring and gravity and Jokic’s enigmatic passing and well-roundedness. I couldn’t sell myself on either one of them at this stage, but given our previous knowledge of their Playoff adaptations, my have-to-make-a-pick choice is Jokic.

    2. Is Harden being exposed in Brooklyn?

    I recently got around to eyeing Harden’s stint with the Brooklyn Nets, and one overarching thought put some concerns in my mind. During his days with the Rockets, he mastered the spread pick-and-roll. He could create a ton of offense out of these spots with his stepback jumper, strength and finishing, and high-level passing and kick-outs, and this served as the primary source of his impact on offense. Now, if the last few seasons have taught us anything about Harden, a major sticking point has been that he provides nothing without the ball in his hands, and that may be a problem in Brooklyn.

    Because Harden absorbed a decent amount of Durant’s and Kyrie Irving’s touches upon his arrival (refer to this graphic I made for a previous article on the Nets), his value in composite metrics and the box score seems fairly comparable to his level of play in Houston.

    Some downward rate of change in situational value for all three offensive stars is expected, but I wonder if there’s more cause for worry in the Playoffs. This is clearly a Harden-led offense, but his main offensive sets seem to have lost their bite. Relative to his spread pick-and-roll possessions in Houston that I’ve watched this season, the Brooklyn counterparts seem like a notable downgrade. His handle has loosened, more point-of-attack defenders are poking the ball loose and fizzling out a lot of the action, and the aim and speed on his passes seem to have lost their touch. This may be a sampling issue, but the degree to which these plays have declined seems significant.

    As I let this thought simmer in my head for a while, I began to wonder if this was a game flow problem. With the Rockets, Harden could essentially call for these isolation possessions at will. But with the Nets, he has to concede a lot of those opportunities to Durant and Irving, which led me to believe the halt in constant ball-pounding left Harden in a bit of a funk. Does Harden absolutely need the ball in his hands throughout the game to exhibit his world-class value? Perhaps so, and perhaps not. Either way, this would indicate a really low ceiling on Harden’s fit alongside other perimeter isolationists.

    3. Are the Knicks legit?

    The New York Knicks have made one of the most sudden and unexpected team improvements in recent history. A season removed from a woeful -6.72 SRS and a seventh consecutive Playoff miss, they’ve made the astronomical leap to +1.90 through their 67th game of the season, very likely snapping their Playoff drought. (They did it for you, little Cody.)

    The 2021 season has been filled with unprecedented levels of confoundment (a trend that’s carried over from the 2020 bubble), and it’s made it a lot harder to separate lucky stints from tangible improvement. To me, that’s the biggest question with the Knicks. Are they a mere product of luck combined with a mild talent boost or one of the most improved teams in recent history?

    At this stage, it’s clear New York is thriving because of its team defense. At the time of this writing, their -1.5 relative Offensive Rating sits twentieth while their -3.7 relative Defensive Rating ranks as the fourth-best mark in the NBA. Because scoring and allowing points determine the winners of games, the causal variables behind these ratings are crucial to simplifying the aforementioned “luck versus improvement” problem. The largest components of this will involve opponent free-throw shooting (as that’s something a team defense has no control over) and three-point shooting: the gold standard for noisy stats. To put this into perspective, take this quote from a Nylon Calculus study four years ago:

    “Let me begin this with some research background: open 3-point percentage is virtually all noise, and open threes consist of a huge portion of the 3-pointers teams allow. There’s no pattern to individual 3-point percentage defense either — it’s noise too. Ken Pomeroy found that defenses had little control over opponent 3-point percentage in the NCAA as well.”

    From an intuitive perspective, this makes sense, especially on open three-point attempts. At that point, with no defender to dilute the quality of the shot, whether or not the shot is made is determined by the staggering variance in the bodily mechanics of the shot, which is influenced by factors almost entirely outside of the defense’s scope. Thus, these types of statistics will be used to ballpark the difference between shooting luck and systematic upgrades in the Knicks’ hardware.

    After looking at New York’s current statistical profile, I figured a lot of the data could be reasonably interpreted as noise. Last season, the Knicks shot 33.7% from three, ranking 27th in the whole league. That number has made the seemingly unparalleled jump to 39%, the fifth-best three-point percentage in the league. At the player level, there are a few data points that stood out in how drastic three-point percentage improvements were:

    • Julius Randle: 27.7% in 2020 | 41.7% in 2021
    • RJ Barrett: 32.0% in 2020 | 39.9% in 2021
    • Derrick Rose: 30.6% in 2020 | 38.3% in 2021
    • Kevin Knox: 32.7% in 2020 | 39.3% in 2021

    The Julius Randle uptick is the most prominent, not only because it was the largest change among moderate-volume shooters, but because he’s taken an extra 1.4 three-point attempts every 75 possessions compared to last season. Not only that, but BBall-Index‘s perimeter shooting data suggests Randle is taking pretty difficult shots (the 7th percentile in openness rating and the 2nd percentile in three-point shot quality). This is certainly good news for Randle and his MIP case, but it makes our questions about the Knicks even more difficult to answer.

    An even larger sticking point for me was how efficient their opponents have been from three-point range. At the moment, this stands at 33.7%, the best opponent three-point percentage in the league. I like to use a ballparking tool in these types of scenarios, replacing the number of points the team limited in reality with a league-average result. So, for example, the Knicks’ opponents have shot 36.9 three-point attempts per 100 possessions at 33.9%, which means opponents generated 37.5 points every 100 possessions from their perimeter shooting. A league-average percentage would have generated 40.6 points from these attempts in the allotted period.
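
    For reference, the ballparking tool is nothing more than swapping the actual percentage for a league-average one. A sketch using the figures above (the ~36.7% league-average mark is inferred from the quoted 40.6 points, not a sourced number):

    ```python
    # Ballparking shooting luck: replace actual opponent 3P% with league average.
    opp_3pa_per_100 = 36.9
    opp_3p_pct = 0.339   # opponents' actual percentage, per the text
    lg_3p_pct = 0.367    # assumed league average, inferred from the 40.6 figure

    actual_pts = 3 * opp_3pa_per_100 * opp_3p_pct   # ~37.5 points per 100
    neutral_pts = 3 * opp_3pa_per_100 * lg_3p_pct   # ~40.6 points per 100
    print(round(neutral_pts - actual_pts, 1))       # ~3.1 points of defensive "luck"
    ```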

    Namely, if the Knicks’ opponents were average three-point shooters, their new defensive rating would suggest they allow 111.7 points per 100, which would not be too far off from last season and would make the Knicks a net-negative team. If we use the same technique for their offense to estimate the effects of shooting luck (especially in this fan-free, short-schedule environment that’s seen a major spike in offensive efficacy), their new offensive rating would fall to 108.7. Using this stricter method, the Knicks would be a -3 team. I don’t think they’re quite that poor. There are positive signals for offensive shooting improvement and the defensive mind of Tom Thibodeau has really helped this team. But if I had to give my best guess, the Knicks are roughly an average team in 2021.

    4. The potential of Box Plus/Minus

    Box Plus/Minus (BPM) is perhaps the most popular plus-minus metric on the market aside from Plus/Minus itself. You don’t need to know anything about the metric other than its name to infer it’s calculated with the box score alone. Interestingly enough, a public that rarely extends its thinking outside the box score will often criticize BPM for only using the box score. And that’s what I’ll discuss today: the “potential” of Box Plus/Minus.

    I’ve recently propagated the idea that the truest value of impact metrics comes from their ability to help us understand not necessarily who impacts the game most, but how much varying qualities of players can impact the game. BPM serves that purpose as well as any other one-number metric, but with an extra kick. Since BPM relies on counting stats alone, some of which quantify levels of skill, we can break down a player’s statistical profile to see which tendencies make up the bulk of his impact. More recent studies include Ben Taylor’s creations of ScoreVal and PlayVal, which estimate the plus-minus impact of scoring and playmaking as components of the Backpicks Box Plus/Minus model. Hybrid and APM metrics, which blend plus-minus data directly into the equation, can’t provide this type of analysis, giving BPM a grounded place among the major impact metrics.

    This is even more useful in career forecasting. While other impact metrics could use some type of role archetype extrapolated from their datasets, or age, to project careers, BPM allows for a much more granular skills-based approach. For example, the Backpicks model uses shot creation, passing ability, teammate spacing, and all other kinds of estimates based on the box score to better the descriptive and predictive powers of the box-only metric. So while some may see Box Plus/Minus as a dying art form, you could argue it’s actually ascending to its peak right now.

    5. A major analytics misconception

    For every analytical mind, there are five contrarians to oppose them. For the most part (emphasis on “the most part”), these opponents don’t really know what they’re fighting against other than some vague advancement of NBA statistics. Due to this lopsided ignorance, a lot of misconceptions arise as to how an analytical supporter interprets and uses advanced stats.

    My experience conversing with the anti-analytics crowd has led me to believe they think an analytical person treats advanced stats as gospel, and that film study is some type of basketball sin that defies the analytics movement. The truth is really the exact opposite. Anyone who claims they can passively watch a game and then understand everything about it is overestimating themselves. It’s a simple function of cognition; we can neither track nor understand every single thing that’s happening on the court with our eyes alone. That’s where analytics comes in. (Because, honestly, if there weren’t a purpose for them, no one would have bothered to create advanced stats!)

    And that’s what I would like the major takeaway here to be. As someone who identifies as an analytics supporter, I can advocate on behalf of “us” and say the public eye is mostly wrong. Advanced stats aren’t treated as gospel. They’re estimates with varying error rates. Sometimes, advanced stats don’t capture players well at all! That’s why the fluidity between analytics and film is the actual driving force behind the analytics movement. As with any other field, change causes pushback, and not everyone will be willing to evolve. But that’s the whole point of analytics, of advanced stats. They’re advancements.

    Who’s the better offensive player between Steph Curry and Nikola Jokic? Is Harden not serving the purpose in Brooklyn we’d all thought he would? Are the Knicks what their record says they are? Is Box Plus/Minus a dying breed? And how do analytical minds interpret advanced stats? Leave your thoughts in the comments.


  • Modeling NBA Scoring Proficiency

    Modeling NBA Scoring Proficiency

    (via The Ringer)

    The concept that diminishing returns on efficiency accompany an increase in scoring attempts has long existed, yet very few public models are available to showcase it. Recently, I tinkered with data from Basketball-Reference to estimate the effects of the context of a player’s shot selection on his shooting efficiency, creating a few new statistics to help move the needle in quantifying these alternate approaches to “scoring proficiency.”

    The Goal

    With this project, I had one overarching goal in mind: to estimate the number of points a player scored “over expectation” based on the distances of his field-goal attempts, whether or not his shots were assisted (i.e. whether or not he is “creating” his own shots), and how often he shoots. This would hopefully take out some of the noise in pure rate stats like Effective Field-Goal Percentage and True Shooting to identify the most “proficient” scorers based on the context of their shot attempts.

    The Method

    The first step in calculating this “Points Over Expectation” statistic is looking at how far away a player’s field-goal attempts were from the hoop. Using BBR as the data source, I split the court into seven different zones:

    • 0-3 feet from basket.
    • 3-10 feet from basket.
    • 10-16 feet from basket.
    • 16 feet from basket to 3P line.
    • Above-the-break 3P.
    • Corner 3P.
    • Heaves.

    The first building block to measuring scoring proficiency is comparing a given player’s efficiency and volume in these zones to league-average expectations and estimating a “Points Above Average” of sorts. For example, Luka Doncic has taken 215 attempts (through 4/21) within three feet of the hoop and made them at a 70.7% rate, slightly over 3% better than league average; so based on the volume of his attempts, he “added” an estimated 14 points from this range. The process is repeated for all seven ranges, looking at how often and how efficiently a player shoots from different zones on the court and comparing the results to the expected output of an “average” player.
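
    In code, the per-zone building block looks something like the sketch below (the 67.4% league-average rim percentage is a hypothetical backed out of the ~3% gap and the 14-point figure above):

    ```python
    def zone_paa(fga, fg_pct, lg_fg_pct, shot_value=2):
        """Points added over a league-average shooter on identical volume."""
        return fga * (fg_pct - lg_fg_pct) * shot_value

    # Doncic within three feet, per the example above:
    print(round(zone_paa(215, 0.707, 0.674), 1))  # ~14.2 points "added" at the rim
    # Repeat per zone (shot_value=3 for the three-point zones) and sum.
    ```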

    To add some additional context to a player’s shot selection and produce more accurate results, there are two regressions incorporated here:

    • Efficiency based on how frequently a player’s field goals are assisted.
    • Efficiency based on how often a player shoots from the field.

    The first regression looks at league-wide trends that estimate how efficiently a player will score based on how much help he receives from his teammates. The results showed a significant positive trend between the two statistics. Namely, the more a player’s field goals are assisted, the more efficiently he’s expected to score. The “PAA” results are adjusted to this context accordingly.

    The second regression is incorporated next. This repeats the process used for “shooting help,” but instead looks at location-adjusted efficiency compared to shooting volume, measured in total field-goal attempts. The results showed a distinct negative relationship between efficiency and volume; the more a player shoots, the less efficient he’s expected to become. The results from the previous regression are then fitted to these data points.
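
    The exact functional forms of these fits aren’t published here, so as a hedged sketch, a simple linear fit can stand in for either adjustment:

    ```python
    import numpy as np

    def fit_adjustment(x, y):
        """Fit a league-wide linear trend and return a predictor.

        x: an adjustment variable (share of FGs assisted, or total FGA)
        y: location-adjusted scoring efficiency
        """
        slope, intercept = np.polyfit(x, y, deg=1)
        return lambda v: slope * v + intercept

    # expected = fit_adjustment(assisted_share, efficiency)(player_share)
    # A player's efficiency over this expectation is what feeds POE.
    ```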

    The Results

    I calculated the scores for every NBA player in the 2021 season through April 21st; the spreadsheet can be found here. First glancing at the scores for 2021, the player who immediately popped up was Luka Doncic, leading the NBA with 285.3 “Points Over Expectation.” He’s certainly not the best scorer in the league, so what’s going on here? The approach this model takes loves how often Doncic creates his own attempts; only 16% of his field goals were the products of assists. He also shoots the ball a lot, standing fifth among all players in field-goal attempts at the time.

    Because of how the model works, the results will be slanted towards certain playstyles that demand the following:

    • Players who receive little “help.”
    • Players who shoot a lot.

    This confounds results for two big reasons: 1) The regressions used to model “luck” aren’t perfect measurements; in other words, there will be some level of variance with how players are rewarded or not depending on the adjustment factors the model uses. 2) Not all shot profiles are created equally. This means different players would, over thousands and thousands of chances, see varying changes in their efficiency based on help and volume. The above regressions use a “best fit” to estimate this change, but this means there will sometimes be large errors or outliers.

    The major takeaway here is that these results are mere estimates of a player’s scoring proficiency, not definitive measures. Because a heap of evidence shows Luka Doncic isn’t the NBA’s best scorer, we can treat his data point as the product of error in the model’s calculations.

    Discussion

    Although the primary goal of this project was the “POE” statistic, there were some other neat results that could be produced from the data going into the calculations. To the right of the POE column in the spreadsheet is a column labeled “cFG%,” which stands for “creation-adjusted Effective Field-Goal %.” This simply converts POE to a rate stat (adjusted efficiency per shot) and expresses it on an eFG% scale, meaning the league-average cFG% will always be set to the league-average eFG%. This acts as a brief perspective on adjusted efficiency on a more familiar scale, but one that gives some more leniency to lower-volume scorers.

    To the right of that is “xFG%,” which stands for “expected Effective Field-Goal %.” Because a player’s shot profile was the driver behind the main metric, the locations could also be used to determine how efficient a player is “expected” to be based on where he shoots the ball. This counterpart doesn’t look at the proportion of shots that are assisted or shooting volume, instead being based purely on location.
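
    A sketch of the idea behind xFG% (the zone values below are placeholders, not the model’s actual league averages): weight each zone’s league-average eFG% by the player’s share of attempts there.

    ```python
    # Placeholder league-average eFG% by zone (illustrative values only).
    LG_EFG_BY_ZONE = {
        "0-3 ft": 0.66, "3-10 ft": 0.41, "10-16 ft": 0.42,
        "16 ft-3P": 0.41, "above-break 3": 0.53, "corner 3": 0.58,
    }

    def xfg(attempts_by_zone):
        """Expected eFG% from shot locations alone."""
        total = sum(attempts_by_zone.values())
        return sum(LG_EFG_BY_ZONE[z] * n for z, n in attempts_by_zone.items()) / total

    print(round(xfg({"0-3 ft": 215, "above-break 3": 300, "corner 3": 60}), 3))  # ~0.584
    ```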

    So does this stat really measure the best scoring in the league? Of course not. There are a few kinks that can’t be shaken out; but for the most part, I hope this acts as a comprehensive and accessible measure to look at the effect of a player’s shot profile on how efficiently he scores, and how this influences the landscape of current NBA scoring.


  • The NBA MVP Voter Criteria is Deeply Flawed (Opinion Piece)

    The NBA MVP Voter Criteria is Deeply Flawed (Opinion Piece)

    (via Business Insider)

    Recently, I’ve scoured the internet for a clear and qualified description of what constitutes the definition of the NBA’s MVP Award, and unfortunately, these attempts have been fruitless. Perhaps this was done intentionally so that the ideas behind “value” could extend beyond any one mass interpretation, but that still doesn’t stop people from searching for a universal criterion that can act as a “correct” interpretation of the “most valuable” player. However, in these efforts, there seems to be a widespread misunderstanding of what makes a player valuable. This post is admittedly and entirely an opinion piece, so while none of these ideas go without saying, I will say the current state of the MVP voting rationale is broken and deeply flawed, and this is why.

    Current Voter Tendencies

    As discussed by a multitude of blogs, podcasts, and videos before this, there are three main factors that go into how the voters will generally approach casting their MVP ballots:

    • Team success
    • Individual statistics
    • Narratives

    As is evident from talk shows like ESPN‘s “First Take,” it’s not uncommon to see the “best player on the best team” notion thrown around. Namely, some will say the MVP is the best player on the best team; and from what I can gather, it’s because the “best” player on the “best” team is supposedly impacting winning at the highest level. While I’m diametrically opposed to this idea, and I’ll explain why later, it’s an undeniable factor in media voting.

    Aside from knowing which teams are really good (which, in this case, takes a quick glance at the standings and nothing else), there also has to be some way to recognize the “best” players on those teams. This is where the box score comes in. While traditional box-score statistics seem to be the most telling indicators of MVP winners (the historical MVP results that fueled Basketball-Reference‘s MVP tracking model found that points, rebounds, and assists per game were three of the four signaling variables in predicting voting outcomes, alongside team record), I’ll define this branch more broadly due to sparser but present references to advanced statistics like PER, Win Shares, etc.

    Narratives are perhaps a less decisive, but still influential, part of the equation. While it’s difficult to tell exactly how much impact these have on voting, we do have the examples of the noise that surrounded Kobe Bryant in 2013 and LeBron James in 2020. Both of these players were approaching their twilight years and, due to their ages, were garnering much more praise in major media outlets. Among others is Russell Westbrook averaging a triple-double in the 2017 season. Although this is more of a statistics-driven case, there was a widespread significance to these numbers, as Westbrook would be breaking a record set by Oscar Robertson back in 1962, so there were still strong hints of story-telling in this instance.

    What does “value” mean?

    Because MVP is an acronym for “Most Valuable Player,” it makes sense to vote for the award as it’s defined and choose the “most valuable” player; but what does that really mean? This, of course, means value needs to be defined. Even in basketball nomenclature, “value” is a loosely defined concept, which often leads to lots of dissenting opinions. Most recently, I’ve seen these types of discussions in the comment sections of MVP ladders, including a delicately placed one of my own that I issued recently. Let’s look at some of the interpretations expressed in these forums:

    • “It’s flawed but the logic behind it is that if you can’t lift your team to a top seed, you’re not impacting winning at an MVP level.”

    (This quote came from someone with a seemingly dissenting opinion, hence the “it’s flawed” beginning.) The logic outlined here suggests that, for a player’s value to be validated, it has to materialize at the team level. But instead of that being a show of high lift, i.e. larger differences in win pace or scoring margin with and without a player, it has to be the player’s team’s win percentage. This has been a concept I’ve struggled with for a long time, and that’s because it parallels a phenomenon I discussed in an earlier criticism of the work of the Instagram video podcast, “Ball Don’t Stop.”

    Without going into the nitty-gritty of this event, the podcast’s host drew an unsound connection between scoring at the player level and scoring at the team level. Namely, there are many more ways than scoring through which a player can positively impact his team’s point differential. The same logic applies to the improper connection between winning at the player level and winning at the team level. (Winning “at the player level” would simply be represented by a hypothetical parameter of exactly how many wins a player contributes to his team.) All it takes is a damning example in recent history to disprove this: Anthony Davis in New Orleans. From the 2015 to 2019 seasons, he was arguably one of the ten-or-so best players in the league, yet his team only barely surmounted the “average” mark (in SRS) in two of those five years.

    But does that mean, because the Pelicans were a sub-.500 team, Davis’s ability to positively impact an NBA team is invalidated? By no means is this true. I’m not one to throw around impact metrics without attempting to make some adjustments for confoundment, but across those five seasons Davis clocked in at no less than 7.2 Win Shares per season, making him an extremely likely candidate for the title of a “valuable player.” Contrary to popular belief, there is a lot of historical analysis suggesting a player becomes more valuable to a team as it becomes worse. The premise is that the team becomes more reliant on the player and his skills are put to more use in that situation; with the increased role, the team’s win pace, barring confoundment from variance and/or team circumstance, would actually be impacted to a higher degree by any given player as the remaining roster’s quality weakens.

    • “I think it’s [team quality in MVP cases] a pretty large factor. You can’t be on a bad team and be the MVP, that shows a lack of leadership, even if ur team is dogsh*t. It’s the ‘Most Valuable’, u might be the most valuable person on your team, but when your team is meaningless, you’re not exactly valuable.”

    This argument implies an axiomatic truth that states a player, if he so bears the value of an MVP-caliber player, must be able to transform the worst team possible into a “good team” (as it’s stated an MVP can’t be on a “bad team”). Leadership attributes aside, let’s design a method to determine the probability of that claim. We know that the most extreme cases of player impact, based on records of with-or-without-you data (which measures the difference between a team’s schedule-adjusted margin of victory with and without a player from game to game) and APM data (estimates of a player’s impact on his team’s Net Rating, controlling for the quality of teammates and opponents), would say a GOAT-level player can add no more than +10 points to his team’s scoring margin each game; and even that measurement is quite generous.

    So if the Basketball Messiah set foot on the court, we would expect him to be worth about +10 points to his team per game. Because, as an individual player, he’s about to embark on the greatest peak season in league history, he “should” in this case be able to transform any team into a “good” team. The worst team in history per Basketball-Reference‘s SRS was the 1993 Mavericks, posting a woeful -14.68 SRS, surprisingly in a full 82-game season. So if this amazing player, we’ll call him Cornelius, is worth +10 points per game and his cohorts are worth (about) -14.7 points per game, would the new team be a -4.7 SRS team? Perhaps, but a significant factor we need to account for is trade-offs in roles and how these teammates will scale alongside Cornelius.

    Most superstars will play about 75 possessions per game in the modern era (roughly 36 minutes per game), but because Cornelius is so good, let’s say he’s a bit more of a heavy lifter and plays 40 minutes per game, or just over 83 possessions. Because he’s on the floor for 83% of his team’s possessions (league-average paces generally tend toward 100 possessions per 48 minutes) and there are five players on the court at a time, we can estimate Cornelius carves out roughly 2.44 points of influence from his teammates, which, in this case, would be -2.44 points per game. That means the additive efforts of his teammates now equate to a -12.24 SRS team. Therefore, with the addition of Cornelius’s +10 points per game, the new team is now a -2.24 SRS team. This would equate to a 35-win pace in an 82-game season.
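
    That arithmetic generalizes into a small helper (a sketch of this post’s simple model; the 2021 Thunder’s implied SRS of roughly -8.6 used below is reverse-engineered from the later Player B example, not a quoted figure):

    ```python
    def lift_team_srs(team_srs, player_lift, floor_share=0.83):
        """Add a star to a roster under this post's simple model.

        The star displaces one-fifth of the existing roster's influence
        for the share of possessions he plays, then adds his own lift.
        """
        displaced = -team_srs * floor_share / 5
        return team_srs + displaced + player_lift

    print(round(lift_team_srs(-14.68, 10), 2))  # -2.24: Cornelius + '93 Mavericks
    print(round(lift_team_srs(-8.59, 6), 2))    # -1.16: "Player B" + 2021 Thunder
    ```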

    But we don’t have to stop there; we can continue exploring the possibility that Cornelius does, in fact, make this historically-bad team a good team through a significance test. Namely, we’re trying to determine if the -2.24 SRS estimate is convincing evidence that Cornelius doesn’t turn the previous roster into a good team. Without going into the nitty-gritty of how this hypothesis test works, here’s a takeaway from the final result:

    • Assuming Cornelius would improve the ’93 Mavericks to average levels, the probability they would have a -2.24 SRS is 30.47%.
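
    For those curious about the mechanics, here’s a minimal sketch of the shape of such a test, assuming observed SRS is roughly normal around a team’s true quality; the standard deviation of 4.4 is an assumption chosen only to roughly reproduce the quoted figure:

    ```python
    from math import erf, sqrt

    def prob_srs_at_most(observed_srs, true_quality, sd=4.4):
        """One-sided P(observed SRS <= x | the team's true quality).

        sd is an assumed spread of observed SRS around true quality,
        chosen here only to roughly match the quoted probability.
        """
        z = (observed_srs - true_quality) / sd
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(round(prob_srs_at_most(-2.24, 0.0), 3))  # ~0.305 for the "average team" case
    ```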

    We can alter the parameters of these experiments to account for even more scenarios:

    • Assuming Cornelius would improve the ’93 Mavericks to the quality of an eighth-seed team, the probability they would have a -2.24 SRS is 31.81%.
    • Assuming Cornelius would improve the ’93 Mavericks to the quality of a championship contender, the probability they would have a -2.24 SRS is 4.92%.

    These probabilities are obviously quite low. To increase the leniency of these situations, let’s look at how some of today’s players might fare in the current MVP race by plopping one on the worst team in the NBA right now: the OKC Thunder. Based on the latest APM data, a reasonable upper fence for a stable form of impact from the game’s best player is +6 points per game. Using the method from earlier, this new player (we’ll call him “Player B”) would alleviate 1.43 points of the Thunder’s SRS deficit en route to a -1.16 SRS team. While the higher quality of the current Thunder compared to the ’93 Mavericks allows the lesser player to help them attain greater heights, no player in today’s game could lead the Thunder to a win pace greater than that of a 9-seed team.

    Using the probability method from earlier, let’s once again lay out some of the likelihoods for Player B:

    • Assuming Player B would improve the OKC Thunder to the quality of a Playoff team, the probability they would have a -1.16 SRS is 40.79%.
    • Assuming Player B would improve the OKC Thunder to the quality of an average team, the probability they would have a -1.16 SRS is 39.55%.
    • Assuming Player B would improve the OKC Thunder to the quality of a “good” team (one SRS standard deviation above average), the probability they would have a -1.16 SRS is 10.29%.
    • Assuming Player B would improve the OKC Thunder to the quality of a title contender, the probability they would have a -1.16 SRS is 7.97%.

    Now, let’s take the whole landscape of “good” and “bad” teams for some final statistics. For context, Player B lifted each “bad” team to an average of -0.25 SRS and no higher than a 0.37 SRS:

    • Assuming Player B improves a currently-existing “bad” team to the quality of a “good” team, the probability the team continues with a “bad” SRS is 14.53%.
    • Assuming Player B improves a currently-existing “bad” team to the quality of a championship contender, the probability the currently-existing “bad” teams continue with a “bad” SRS is 11.53%.

    As we can see, the odds are not in Player B’s favor, despite his being the very best player in the game. So what does this mean for this year’s MVP race? Aside from the likely candidates, there’s a heap of very valuable players on teams that aren’t particularly good, such as Luka Doncic, Nikola Vucevic, and Zion Williamson, and even two of the strongest MVP candidates right now, Stephen Curry and Damian Lillard. But do the qualities of these players’ teammates make them any better or worse, any more or less valuable, as players? All the evidence suggests not. Even the greatest imaginable player in league history wouldn’t improve the current OKC Thunder to a “good” team!

    The major takeaway from today’s study:

    • DO NOT JUDGE THE QUALITY OF A PLAYER’S MVP CASE BY HIS TEAM CIRCUMSTANCES. SERIOUSLY, DO NOT DO THIS.

    I’ve seen some rebuttals to reforming the NBA MVP voting criteria, both of which qualify as logical fallacies. The first is the Appeal to Authority: i.e. the voters think this way, therefore it’s the “right” way. But, of course, the fact that the voters may sway one way doesn’t confirm or deny any qualification of what makes a player valuable. The second is the Sunk Cost Fallacy: i.e. the voting has gone on this way for so long, so why change it? Namely, the continuation of this flawed criterion exists only to preserve some unsound form of tradition. But, as these arguments are heavily fallacious, why not spur change in your next conversation and steer the topic toward… the most valuable players?

    The heavy inclusion of team success via the three-pillar system as a focal point behind a player’s MVP case is deeply flawed, fallacious, and a massive embarrassment to the intellectual integrity of NBA basketball. To see such a complex topic dumbed down to such measly levels is atrocious to me, and I hope this post helped reinforce the understanding of why I feel this way.


  • How to Interpret NBA Impact Metrics [Video & Script]

    How to Interpret NBA Impact Metrics [Video & Script]

    NBA impact metrics, despite acting as some of the best tools available to the public today, are frequently misinterpreted. What are some ways to look at impact metric scores and use the data to our best advantage? Are there cases in which lower-scoring players are “better” than higher-scoring players? As a relatively avid user of these stats, I dive into the tips and tricks I’ve gained over the past few months on how to make the best use of these metrics.

    Video Script:

    Good afternoon everyone, and welcome back to another discussion on NBA impact metrics. So recently, I’ve been talking and writing about impact metrics more than usual, and that’s because there’s still some level of mystery surrounding them in the public eye. Anyone can hop onto Basketball-Reference and see that Russell Westbrook had a league-leading +11.1 Box Plus/Minus in 2017, but what does that actually mean? For the more dedicated NBA fan, these numbers might be seen every day through an endless series of acronyms: BPM, EPM, RPM, PIPM, RAPM, RAPTOR, or whichever metric happens to be perused that day. Because of this, I’ve decided to sit down today and discuss the interpretations of these impact metrics, not only to set some of the records straight on what they truly mean but as an aid to making more reliable conclusions based on these types of data.

    Before we begin, if you missed my discussion of how impact metrics are calculated, feel free to scroll down to the description box; I’ve linked that article as a precursor to this video as some optional context going into this.

    With that out of the way, let’s go over the definition of an impact metric that I laid out in that article, which goes as follows: “Impact metrics are all-in-one measurements that estimate a player’s value to his team. Within the context we’ll be diving into here, a player’s impact will be portrayed as his effect on his team’s point differential every 100 possessions he’s on the floor.” So when we refer back to Russell Westbrook’s Box Plus/Minus in 2017, we know that metric says he provided an estimated +11.1 points of extra impact every 100 possessions he played. Now, this understanding may suffice to recognize a score, but what we’re looking for here is a strong interpretation of these metrics because, unless the nuances of each metric remain present in the mind, some of the inferences drawn from these data points will lead to much poorer interpretations of what’s happening on the basketball court.

    Let’s begin with some universal rules that can be applied to nearly every widespread metric. They only capture a player’s value to his specific team in his specific role. And if that doesn’t sound like a problem at first, consider this example. Arguably the top two NBA point guards in 2016 and 2017 were Stephen Curry and Russell Westbrook, and they both had the best seasons of their careers within these two years. Curry had his mega-spike in the 2015-16 season while Westbrook followed that MVP case with one of his own in 2017, when he averaged a triple-double. Let’s look at their scores in Box Plus/Minus during these seasons. When Curry had his peak, he posted an Offensive Box Plus/Minus of +10.3 and an overall score of +11.9, which to this day stands as one of the highest marks since the merger. The following season, he had a very impressive yet significantly lower +6.7 Offensive Box Plus/Minus and a +6.9 total score. So was Steph really 5 points worse in 2017 than he was in 2016? Of course not. This phenomenon was created when the addition of Kevin Durant absorbed some of Curry’s role in the effort to help the two stars coexist on the court together. If the logistics behind that alone don’t make sense, just look at some of Curry’s box-score stats when he played without Durant in 2017 compared to his bigger role in 2016.

    2017 (w/o KD): 39.5 PTS/100, 9.9 AST/100, 7.3 FTA/100

    2016: 42.5 PTS/100, 9.4 AST/100, 7.2 FTA/100

    The same events happened in reverse with Russell Westbrook, coincidentally enough because of the same player. With Durant in OKC in 2016, Westbrook had an Offensive BPM of +6.4 and an overall score of +7.8. After Durant left for Golden State that offseason, Westbrook posted an offensive score of +8.7 and an overall score of +11.1. So we ask again, was he really +3.3 points better a mere season apart? And again, we answer as we did with Curry: no. Westbrook simply took on a larger role; in fact, a much larger role. His usage rate increased a whopping TEN percentage points between the 2016 and 2017 seasons. That’s an unprecedented amount of change in such a short amount of time! Let’s use the same technique we did for Curry and compare Westbrook’s 2016 box scores when Durant was off the floor against his historically great solo act the following season.

    2016 (w/o KD): 40.7 PTS/100, 15.3 AST/100, 13.0 FTA/100

    2017: 44.8 PTS/100, 14.7 AST/100, 14.7 FTA/100

    The takeaway here is that impact metrics are extremely sensitive to a player’s role and only estimate what they’re doing in their specific situations. This means players who are poorly coached or are being assigned to a role that doesn’t fit their playstyle will always be underrated by these metrics while players who are utilized perfectly and play a role tailored to their style will always be a bit overrated. This type of significant confoundment will be found more often in box-only metrics than play-by-play informed ones; but star players on more top-heavy rosters will also see inflation in their on-off ratings, even after adjusting for some forms of luck.

    The next thing I’d like to discuss is how to interpret impact metrics in large groups. I’ve seen claims that one metric on its own isn’t entirely telling, but that a healthy mix of a lot of metrics will be significantly more so. Despite the fact that almost all notable one-number metrics attempt to estimate the same base measurement, RAPM, I still struggle with this idea, and that’s because every one of these metrics is created differently. Box Plus/Minus only uses the box score; PIPM uses the box score as well as luck-adjusted on-off ratings; RAPTOR uses both of those in addition to tracking data. I wouldn’t go so far as to call this an apples-to-oranges comparison, perhaps more along the lines of red apples to green apples. Averaging out five or so metrics might get closer to a true value, but it doesn’t necessarily move the needle as effectively as viewing each metric individually and considering the nuances. But I also won’t say viewing them separately is entirely more useful, as these metrics still use a lot of the same information; one form of confoundment in one metric will likely be present to some degree in another.

    The last main topic I’ll talk about here is how to interpret impact metrics within their sample size. At the time of this writing, NBA teams have played an average of 52 games so far, yet there have been cases of 50-game samples of these metrics treated just the same as a 250-game sample. This is where I’ll introduce the variance among these metrics. I’m a part of a biweekly MVP ladder ranking over at Discuss The Game, the profile to which I’ll also link in the description box, and in the discussion room filled by the panelists, I saw a lot of talk early on in the season that compared impact metric scores of the candidates. I only found this interesting because, as the panelist with arguably the highest usage of impact metrics in an overall sense, here I was the panelist with the least reliance on these stats. So why was this shift so significant? It paints a picture of how variance is often treated among basketball stats. NBA analyst Ben Taylor discusses “sample-size insensitivity” in his book, Thinking Basketball, which states most fans will often not consider the possibilities that lie beyond the scope of an allotted time period. This means that almost every team that wins the NBA championship is crowned the best team of the season. But what if the same teams replayed the same series a second time? Perhaps a third time? Hypothetically, if we could simulate these environments thousands of times, we’d have a much better idea of how good players and teams were during certain seasons. Because, after all, a lot of confounding results that don’t align with observable traits could be nothing more than random variance.
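
    A tiny Monte Carlo makes the point. Assuming the better team wins any single game 55% of the time (an invented figure), play out the seven games thousands of times:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    p_better_team = 0.55  # assumed single-game win probability

    # Playing out all seven games is equivalent to a best-of-seven:
    wins = rng.binomial(n=7, p=p_better_team, size=100_000)
    print(round((wins >= 4).mean(), 3))  # ~0.61: the "worse" team still wins ~39%
    ```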

    So, with the bulk of this discussion concluded, let’s go over the biggest points in interpreting an impact metric score. When working with larger samples that span at least a season, perhaps the largest factor to consider is role sensitivity. Because these metrics only estimate how valuable a player is in his specific context, they aren’t estimates of how good a player would be across multiple environments. So in this sense, “value” versus “goodness” has some separation here. Look at these measures as ballparking values for how a team’s point differential shifts with versus without a player, subject to the inflations or deflations that come along with the circumstances of a player’s role and fit on that team. The next part of this relates back to assessing scores in groups. A simple averaging won’t suffice; each metric was made differently and should be treated as such. Instead, I prefer to use these different calculations of impact as a reference for which types of data prefer different types of players. So while almost all of Westbrook’s value can be detected by the box score, often with some overestimation, someone like Curry, who provides a lot of unseen gravity and rotational stress, won’t have a lot of his more valuable skills considered by these measurements. The last point, and arguably the most important, is to interpret an impact metric based on its duration. Similar to how an RAPM model’s scores should be interpreted relative to its lambda value, an impact metric score should be interpreted relative to its sample size. After ten or twenty games, metrics may be more “right” than they are “wrong,” but they aren’t definitive measures of a player’s situational value, and they’re even further confounded by the limitations of the data that go into these stats. This means one player can appear to be more valuable to his team in a short sample, when in fact a counterpart will prove to have done more in the long run.

    So the next time you’re on Basketball-Reference or FiveThirtyEight or wherever you get your stats, I hope this helps in understanding the values of these metrics and how to use them appropriately and in their safest contexts. Thanks for watching, everyone.


  • MLB GOAT: Evaluating a Baseball Player

    MLB GOAT: Evaluating a Baseball Player

    My last post, which covered an introductory example of adjusting century-old stats for inflation in the MLB, was the first step in a larger goal, one that will be brought to life with the processes I’ll outline today: ranking the greatest MLB players ever. Many times before we have seen attempts to do so, but rarely have I found a list that aligns with my universal sporting values. Thus, I have chosen to embark on a journey to replicate those results through a process I see as more philosophically fair: a ranking of the best players of all time with the driver being the value of their on-field impact. However, as I am a relative novice in the art of hardcore baseball analysis, I’ll be providing a clear, step-by-step account of my process to ensure the list is as accurate as possible.

    The Philosophy

    I’ve come to interpret one universal rule in player evaluation across most, if not all, team sports, and it relies on the purpose of the player. As I’ve stated in similar posts covering the NBA, a player is employed by a team for one purpose: to improve that team’s success. Throughout the course of the season, the team aims to win a championship. Therefore, the “greatest” MLB players give their teams the best odds of winning the World Series. However, I’m going to alter one word in that sentence: “their.” Because championship odds are not universal across all teams (better teams have greater odds), a World Series likelihood approach that considers “situational” value (a player’s value to his own team) will be heavily skewed toward players on better teams, and that would be an unfair deflation or inflation of a score that relies on a player’s teammates.

    The central detail of my evaluation style will be the ideology of assigning all players the same teammates: average teammates. Therefore, the question I’m trying to answer with a player evaluation is: what are the percent odds a player provides an average team to win the World Series? This approach satisfies the two conditions I outlined earlier: to measure a player’s impact in the way that appeases the purpose of his employment while leveling the field for players seen as “weaker” due to outside factors they couldn’t control. Thus, we have the framework to structure the evaluations.

    The Method

    To measure a player’s impact, I’ll use a preexisting technique I’ve adopted from other sports, in which I estimate a player’s per-game impact (in this case, represented through runs per game). For example, if an outfielder evaluates as a +0.25 runs-per-game player on offense and a 0 runs-per-game player on defense, he extends the aforementioned average team’s schedule-adjusted run differential (SRS) by +0.25 and thus raises the odds of winning a given game by the percent odds that come along with a +0.25 SRS boost. To gain an understanding of how the “impact landscape” works, I laid out every qualified season from 1871 to 2020 for both position players and pitchers to get a general idea of how “goodness” translates to impact. These were the results:

    Note: Offense and fielding use FanGraphs’s “Off” and “Def” composite metrics scaled to per-game measures, while pitching uses Runs Above Replacement per game rescaled to runs above average; these statistics are used to gauge certain levels of impact. I split the fielding distributions among positions to account for any inherent differences that result from play frequency, the value of a position’s skill set, and other factors.

    [Impact distribution plots: Offense (all positions); Fielding (pitchers, catchers, first basemen, second basemen, third basemen, shortstops, outfielders); Pitching (starters, relievers)]

    A large reason for the individual examination of each distribution is to gain a feel for what constitutes, say, an All-Star type of season, an All-MLB type of season, or an MVP-level season, and so on and so forth. The dispersions of the distributions are as listed below:

    | Standard Deviations | Position Players (Off) | Starting Pitchers (Pitch) | Relief Pitchers (Pitch) | Pitchers (Field) | Catchers (Field) | First Basemen (Field) | Second Basemen (Field) | Third Basemen (Field) | Shortstops (Field) | Outfielders (Field) |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | -4 | -0.554 | -1.683 | -0.582 | -0.305 | -0.262 | -0.255 | -0.256 | -0.258 | -0.258 | -0.286 |
    | -3 | -0.402 | -1.262 | -0.437 | -0.233 | -0.183 | -0.202 | -0.185 | -0.188 | -0.178 | -0.221 |
    | -2 | -0.250 | -0.841 | -0.291 | -0.162 | -0.104 | -0.149 | -0.115 | -0.118 | -0.097 | -0.157 |
    | -1 | -0.098 | -0.421 | -0.146 | -0.090 | -0.025 | -0.096 | -0.044 | -0.048 | -0.017 | -0.092 |
    | 0 | 0.054 | 0.000 | 0.000 | -0.018 | 0.053 | -0.043 | 0.026 | 0.022 | 0.064 | -0.028 |
    | 1 | 0.206 | 0.421 | 0.146 | 0.053 | 0.132 | 0.010 | 0.097 | 0.092 | 0.144 | 0.037 |
    | 2 | 0.358 | 0.841 | 0.291 | 0.125 | 0.211 | 0.063 | 0.168 | 0.162 | 0.225 | 0.102 |
    | 3 | 0.510 | 1.262 | 0.437 | 0.197 | 0.290 | 0.116 | 0.238 | 0.232 | 0.305 | 0.166 |
    | 4 | 0.662 | 1.683 | 0.582 | 0.269 | 0.368 | 0.169 | 0.309 | 0.302 | 0.385 | 0.231 |

    (All values are in runs per game.)

    These values are used to represent four loose “tiers” of impact: one standard deviation marks a “good” season, two a “great” season, three an “amazing” season, and four an “all-time” season, with the negative halves representing the opposites of those descriptions. Throughout my evaluations, I’ll refrain from handing out all-time seasons, as these stats were taken from one-year samples and are thus prone to variance. Therefore, an “all-time” season in this series will likely sit a tad underneath what the metrics would suggest.
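
    As a quick illustration of reading the table, a small interpolation helper (a hypothetical utility, using the position-player offense column above) can place a season on that standard-deviation scale:

```python
import numpy as np

# Position players' offensive column from the table above (runs per game).
sd_levels = np.arange(-4, 5)
off_values = np.array([-0.554, -0.402, -0.250, -0.098,
                       0.054, 0.206, 0.358, 0.510, 0.662])

def offense_tier(off_per_game):
    """Interpolate a per-game offensive value onto the SD scale."""
    return np.interp(off_per_game, off_values, sd_levels)

print(offense_tier(0.36))   # ~2.0 -> a "great" offensive season
print(offense_tier(0.51))   # ~3.0 -> an "amazing" offensive season
```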

    There are also some clear disparities between the fielding positions that will undoubtedly affect the level of impact each of them can provide. Most infield positions skew above average as fielders, with first basemen showing signs of being more easily replaced. Second and third basemen share almost the same distribution, while shortstops and catchers stand out as the “best” fielders on the diamond. I grouped all the outfielders into one curve, and they’re another “low-ceiling” impact position, similar to pitchers (for whom fielding isn’t even the primary duty). It’ll be important to keep these values in mind for evaluations, not necessarily to compare an average shortstop and an average first baseman, but, for instance, an all-time great fielding shortstop versus an all-time great fielding first baseman.
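
    As promised above, here’s a minimal sketch of the runs-to-wins conversion, assuming Bill James’s Pythagorean expectation and a rough league-average scoring rate as stand-ins (the exact conversion behind my calculator may differ):

```python
def pythagorean_wpct(runs_scored, runs_allowed, exponent=1.83):
    """Bill James's Pythagorean expectation for winning percentage."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

LEAGUE_RPG = 4.5   # rough league-average runs per game, an assumed constant

def wpct_with_lift(srs_lift, rpg=LEAGUE_RPG):
    # Treat the player's runs-per-game impact as extra runs scored by an
    # otherwise average (.500) team.
    return pythagorean_wpct(rpg + srs_lift, rpg)

print(wpct_with_lift(0.25))         # ~0.525
print(162 * wpct_with_lift(0.25))   # ~85 wins instead of 81
```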

    The Calculator

    Now that we have the practice laid out, it’s time to convert all those thoughts on a player to the numeric scale and actually do something with the number. The next step in the aforementioned preexisting technique is a “championship odds” calculator that uses a player’s impact on his team’s SRS (AKA the runs per game evaluation) and his health to gauge the “lift” he provided an average team that season. To create this function, I gathered the average SRS of the top-five seeds in the last twenty years and simulated a Postseason based on how likely a given team was to win each series, with those likelihoods calculated from regular-season data in the same span.
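
    A minimal sketch of what such a simulation can look like, assuming hypothetical seed SRS values, a four-team bracket, and best-of-seven series throughout (the real bracket, series lengths, and twenty-year seed averages differ):

```python
import numpy as np

rng = np.random.default_rng(7)

def srs_to_wpct(srs, rpg=4.5, exp=1.83):
    # Pythagorean conversion from an SRS-style run differential to a wpct.
    return (rpg + srs) ** exp / ((rpg + srs) ** exp + rpg ** exp)

def log5(p_a, p_b):
    """Head-to-head win probability from two teams' win percentages."""
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

def best_of_seven(p_game):
    """Simulate a best-of-seven series; True if the first team wins."""
    a = b = 0
    while a < 4 and b < 4:
        if rng.random() < p_game:
            a += 1
        else:
            b += 1
    return a == 4

def simulate_bracket(seed_srs, n_sims=20_000):
    """Estimate title odds for each seed in a 1v4 / 2v3 bracket plus a final."""
    p = [srs_to_wpct(s) for s in seed_srs]
    titles = np.zeros(len(seed_srs))
    for _ in range(n_sims):
        s1 = 0 if best_of_seven(log5(p[0], p[3])) else 3
        s2 = 1 if best_of_seven(log5(p[1], p[2])) else 2
        champ = s1 if best_of_seven(log5(p[s1], p[s2])) else s2
        titles[champ] += 1
    return titles / n_sims

print(simulate_bracket([1.1, 0.9, 0.7, 0.6]))  # placeholder seed SRS values
```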

    Because the fourth seed (the top Wild Card team) is usually better than the third seed (the “worst” division leader), and the former would often face the easier path to the World Series, a disparity emerged in the original World Series odds: a lower seed could have better championship odds. To fit a more philosophically fair curve, I had to take teams out of the equation and restructure the function accordingly, so that title odds tracked SRS itself rather than seeding conundrums; after all, we want to target the players with more lift, not the other way around. Eventually, this curve became so problematic that I chose the more pragmatic approach of taking and generalizing real-world results instead of simulating them, and found the ideal function with an R^2 of 0.977. (This method seemed to prove effective not only because of the strength of the fit, but because of the shape of the curve, which went from distinctly logarithmic, a confusing result, to distinctly exponential.)
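
    For the curve-fitting step itself, a sketch along these lines shows the shape of the exercise; the (SRS, title odds) pairs here are made up purely for illustration (the 0.977 fit above belongs to the actual data, not these points):

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up (SRS lift, title odds) pairs, for illustration only.
srs = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
odds = np.array([0.03, 0.05, 0.09, 0.15, 0.26, 0.45])

def exp_curve(x, a, b):
    """Exponential title-odds curve: odds = a * exp(b * x)."""
    return a * np.exp(b * x)

params, _ = curve_fit(exp_curve, srs, odds, p0=(0.03, 1.0))
pred = exp_curve(srs, *params)
r_squared = 1 - np.sum((odds - pred) ** 2) / np.sum((odds - odds.mean()) ** 2)
print(params, r_squared)
```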

    The last step is weighting a player’s championship equity by his health; if a player performed at an all-time level for 162 games but missed the entirety of the Postseason, he’s certainly not as valuable as he would’ve been fully healthy. Thus, the proportion of a player’s games played in the regular season determines the new SRS, while the percentage of Postseason games played represents the sustainability of that SRS into the second season. The health-weighted SRS is then plugged into the championship odds function to get Championship Probability Added!
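
    Putting the pieces together, here’s one plausible reading of the full pipeline in code; the stand-in odds curve and the exact way the two availability terms combine are my assumptions for the sketch, not a confirmed formula:

```python
import numpy as np

def title_odds(srs, a=0.03, b=1.1):
    # Stand-in exponential odds curve (placeholder constants; see the
    # curve-fit sketch above), not the actual fitted function.
    return a * np.exp(b * srs)

def championship_probability_added(impact_rpg, reg_avail, post_avail):
    """Hypothetical CPA helper.

    impact_rpg - evaluated impact in runs per game
    reg_avail  - share of regular-season games played (0-1), scales the lift
    post_avail - share of Postseason games played (0-1), scales how much of
                 that lift survives into the "second season"
    """
    health_weighted_srs = impact_rpg * reg_avail * post_avail
    return title_odds(health_weighted_srs) - title_odds(0.0)

# A +0.25 runs/game player who played 150 of 162 games and every playoff game:
print(championship_probability_added(0.25, 150 / 162, 1.0))
```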

    Significance

    With my new “World Series odds calculator,” I’ll evaluate the best players in MLB history and rank the greatest careers ever. I’ll aim to rank the top-20 players at minimum, with the larger goal of cranking out the top-40. With this project, I hope to shed light on these topics in a new manner while, hopefully, sparking discussion on a sport that deserves more coverage these days.