My Tentative Process of Ranking NBA Players

Ranking players, especially in the widespread, arbitrary sense in which most rankings occur, has essentially no practical value. That's because "player ranking" is often treated as a self-contained exercise that looks only at the resulting list and disregards the implications value systems have for team-building and for assigning market value. So while the "result" of player ranking (a list) doesn't matter for anything concerning the on-court product of a basketball game, it serves as an infamous source of entertainment. Player ranking allows people on the internet to gain or lose self-esteem vicariously through the quality of their basketball opinions, and that human instinct makes the performances all the more memorable.

The “Non-Ideal” Theory of Player Ranking

Basketball does not occur in a vacuum, nor should it: the game is a product of systems within systems. Those systems, however, impose boundaries of their own; if the ideal benchmark of a player's value is his "intersystemic" efficacy, that is, what he provides across a variety of systems, there is little room for an intelligible process. Ranking players in this fashion therefore involves, to some degree, playing god: transposing one set of circumstances onto another with limited bites of data. Our concern, then, is the "non-ideal" axioms we may invoke to give the process rigidity and to avoid incoherence. Perhaps there is meaning in working with such limited measuring sticks, since they encourage collaboration and the expansion of our worldviews. So, in this post, let us set up a version of a player-ranking process that emphasizes a player's intersystemic value.

The actions of players (the parts), in conjunction with decision-making from non-player members of an NBA franchise, positively affect the team (the system) by contributing to the underlying mechanism that wins basketball games: scoring as many points as possible on offense and saving as many points as possible on defense within the time and space constraints of a typical game. The intrinsic difficulty in untangling the process of possessions stems from the degree to which actions are intertwined and indistinguishable among the parts; to continue with the task, we need an observable, finite number of dimensions in which decisions and the ensuing actions occur. From this emerge models of possessions and the practical application of playbooks, which exist as sets of premeditated actions describing patterns in players' actions and their interactions with other players. (Major caution is advised here: remain aware of whether or not we are censoring certain information.)

To estimate the manner in which a player contributes to the process of possessions, by proxy of his impact on a finite number of models of possessions, we employ a bottom-up approach that evaluates the consistency and efficacy of his actions (in most cases, "skills") based on a variety of qualitative and quantitative data. These initial "player profiles," which are intrinsically bound by their intrasystemic natures, are then transposed onto intersystemic principles that similarly evaluate changes in consistency and efficacy. This is achieved through generalized pattern recognition of 1) how players of similar profiles tend to change across systems and 2) how varieties of teammates typically change based on their tendencies. The "end result," the estimate that sorts the rankings, is a proxy for a player's intersystemic value: an estimation of how he fuels the successes and failures of possible systems.

Knowledge Through Impartiality

Film study is the most important part of our process, the fundamental "visual" tool that is falsely contrasted with analytics or statistics, the "numerical" tools. The visual aspect takes precedence because of how little it constrains our interpretations of its data; statistics are represented on a far more rigid surface than observations from tape, which can extend past the crude data point into qualitative analysis. We can observe the minutiae of what constitutes, for example, a play type on tape. A "post up" is a generalization, a shorthand with which inferences can be made more quickly, but not necessarily more effectively. This is why the process requires diligence, a hyperactive form of analysis that weighs competing pitfalls and follows the route that will (hopefully) lead us to the "best" possible decision.

Pushing back against generalization is a broader theme in film study. When we search for one thing, the other things are filtered out as what we might call noise. But censoring information is not necessarily the most desirable course. Remember, we're looking to emulate the bottom-up approach of how parts interact within systems, so flowing with the process organically will broaden our worldview of what counts as a contribution and what doesn't. The resulting observations about interactions and synergies, selected to cover a wide range of possible circumstances, are condensed into the "tendencies" by which players impact systems.

Statistics aren't omitted from the process; they exemplify a trade-off between bias and variance (analogous to forms of regression modeling) that they share with film. Statistics are shorthands that account for a player's entire time on the court during a given season, playoff run, career, et cetera, but the tools are biased toward whatever measurements were decided upon. Film, meanwhile, has the potential to reduce bias depending on the viewer, but the length of seasons and the sample sizes needed to decrease the variance of observations would require an inhuman amount of time and energy to overcome. Not all statistics are "good," as has been shown many times. How many points a player scores per 75 possessions, or his relative True Shooting percentage, likely isn't that "important" in this process, especially as a self-contained object. The most "important" statistics here are "tracking" stats (non-traditional, non-box counting) and lineup stats, for their ability to shed light on tendencies that may be less prone to variation among systems and on synergies among parts (WOWY, assist networks).
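As a sketch of the kind of lineup signal mentioned above, here is a toy WOWY ("with or without you") calculation: comparing a team's point differential per 100 possessions with a player on versus off the court. The stints, player names, and numbers are invented purely for illustration.

```python
# Toy WOWY: for each lineup stint we record the set of players on the
# court, the possessions played, and the point differential. A player's
# WOWY signal is his team's per-100-possession differential with him on
# the floor minus the differential with him off it. All data is made up.
stints = [
    # ({players on court}, possessions, point differential)
    ({"A", "B", "C", "D", "E"}, 40, +6),
    ({"A", "B", "C", "D", "F"}, 30, +1),
    ({"B", "C", "D", "E", "F"}, 30, -4),
]

def wowy(player, stints):
    on_poss = on_diff = off_poss = off_diff = 0
    for lineup, poss, diff in stints:
        if player in lineup:
            on_poss, on_diff = on_poss + poss, on_diff + diff
        else:
            off_poss, off_diff = off_poss + poss, off_diff + diff
    on = 100 * on_diff / on_poss if on_poss else 0.0
    off = 100 * off_diff / off_poss if off_poss else 0.0
    return on, off, on - off

on, off, net = wowy("A", stints)  # on = +10.0 per 100 possessions
```

With real seasons the stint table is enormous and the off sample is confounded by who replaces the player, which is exactly the variance/bias tension the paragraph above describes.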

While on the topic of analytics, there must surely be some mention of "impact" (composite, one-number) metrics! Without them, we'd have virtually no idea of the degree to which a player impacts the game beyond an arbitrary, dissonant mental estimate. It is important, though, to remain continually mindful of their weak spots and of how certain modeling techniques may capture one player's intrasystemic value fairly well but not another's. These are ideally the concluding steps in the process: a crude benchmark offering strong, rigid methods with which we can connect a player's actions to his underlying "impact" on the successes and failures of systems.
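To make the idea behind regression-based impact metrics concrete, here is a minimal RAPM-style sketch: each stint becomes a row of a design matrix (+1 for players on the reference team, -1 for opponents), the target is the stint's point differential, and a ridge penalty keeps collinear lineups from producing wild coefficients. The players, stints, and penalty value are invented for illustration, and real metrics add many refinements (possession weighting, offense/defense splits, priors).

```python
# RAPM-style toy model: ridge regression on stint data, solved by plain
# gradient descent so the example needs no external libraries.
players = ["A", "B", "C", "D"]
stints = [
    # ({reference team}, {opponents}, differential per 100 possessions)
    ({"A", "B"}, {"C", "D"}, 10.0),
    ({"A", "C"}, {"B", "D"}, 2.0),
    ({"A", "D"}, {"B", "C"}, 4.0),
]

# Encode each stint as a signed indicator row over all players.
X = [[1.0 if p in on else -1.0 if p in off else 0.0 for p in players]
     for on, off, _ in stints]
y = [diff for _, _, diff in stints]

# Minimize ||Xw - y||^2 + lam * ||w||^2 by gradient descent.
lam, lr, w = 1.0, 0.05, [0.0] * len(players)
for _ in range(3000):
    residuals = [sum(xj * wj for xj, wj in zip(row, w)) - yi
                 for row, yi in zip(X, y)]
    for j in range(len(w)):
        grad = 2 * sum(r * row[j] for r, row in zip(residuals, X)) + 2 * lam * w[j]
        w[j] -= lr * grad

ratings = dict(zip(players, w))  # player A comes out on top here
```

The ridge penalty is the formal version of the caveat in the paragraph above: it trades some fidelity to the observed stints for stability, which is one reason a metric can describe one player's intrasystemic value well and another's poorly.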

The Interpretation of Player Rankings

The purpose of "ranking" players and devising lists is not to create a perfect representation of reality, nor to estimate within some strict interval the degree to which the process produces plausible results. Player rankings are not intended to reflect how one interprets the process of possessions (the higher-dimensional, purely intersystemic game of basketball); rather, they are the entertainment-based alternative by which one can estimate that reality with a finite number of parameters, all of them prone to human error, misinterpretation, and reduction. Ranking players is a social experiment, so let us treat it as such!
