Bracketology Trip: NCAA Mock Selection Committee: Part Three

    
February 17th, 2012

Shawn Siegel is taking part in the NCAA's "Mock Selection Committee" in Indianapolis. Each day, he's reporting on the day's events and answering questions on Twitter @collegehoopsnet. Follow the action as the bracket is finalized on Friday.

 

The NCAA first held the media "Mock Selection" in 2007 as part of an ongoing effort to improve transparency in what was once a very secretive NCAA Tournament selection process. Of course, with greater transparency comes greater scrutiny. And although the NCAA Selection Committee usually ends up with a pretty darn good field of 68, there is much about the process that deserves to be scrutinized.

 

What Does "Best" Really Mean?

To me, the most glaring problem with the selection process is the basic question: what are they actually selecting? The NCAA's official position is that they select "the 37 best at-large teams." But "best" can mean two very different things: a) the 37 teams that had the best results before Selection Sunday, or b) the 37 teams the committee thinks will perform best in the days after Selection Sunday. This is a huge distinction, although the committee chair and others from the NCAA didn't seem to understand or care about it.

Let me give you an example: Alabama is dealing with multiple player suspensions, and it was unknown whether those players would return for the remaining games. (In the hypothetical mock selection, the remaining games would only be NCAA Tournament games, since the season hypothetically ended today.) Some committee members suggested that the status of those players would hugely alter Alabama's seeding, or whether they even made the tournament. That suggests the seeding/selecting is based on how a team will fare going forward, yet the committee chair (the real one, not the mock one), Jeff Hathaway, said specifically that the committee does not "project" the future. Based on the totality of their regular season, Alabama might be considered the 35th-best team, but without those players, they might be considered the 100th as of tomorrow. It is unclear how this issue should be resolved because of the NCAA's ambiguity about what the whole purpose of the tournament is. On the one hand, the chairman says it's simply about the "totality of the season"; on the other hand, it's not.

 

(This leads to a larger question about the lack of real criteria in the selection process. Although the committee chair spoke of criteria, committee members can pretty much bring whatever thoughts about a team they like to the table, and thus the criteria are not really criteria, but simply vague preferences.)

 

The RPI Is One of Many Factors... But Not Really

The second glaring problem is how the NCAA presents its statistics to the committee members. The NCAA says, "The RPI is one of many resources/tools available to the committee." In some sense, this is true: the committee has access to other resources (including, but not limited to, the human polls, other rankings, and personal perspectives on teams from advisory members). However, the main information interface the committee members see is built entirely on the RPI. For example, the "Team Sheet" includes a team's RPI, records vs. the RPI Top 50, Top 100, etc., average RPI of wins, average RPI of losses, and all of its opponents with their RPIs next to them. Nowhere on the main interface is any other statistical measure referenced.

Now, I don't have a major bone to pick with the RPI. To me, it is no better or worse than other ratings (Sagarin, Pomeroy, etc.), but it is clearly different; it is one perspective among many. All one has to do is look at the wild computer variation for a team like Cincinnati (buried at 93rd in the RPI, but up at 41st in Pomeroy's rating) to see how differently these two systems view things.
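(For context, here is a simplified sketch of the RPI calculation itself. The actual NCAA version adds further adjustments, such as weighting road wins more heavily than home wins, but the core point is that it uses only won-lost results and strength of schedule, with no scoring margin, which is one reason it can diverge so sharply from efficiency-based systems like Pomeroy's. The input numbers below are made up for illustration.)

```python
# Simplified RPI sketch: 25% team winning percentage, 50% opponents' winning
# percentage, 25% opponents' opponents' winning percentage. The real NCAA
# calculation includes extra adjustments (e.g., home/road weighting) omitted here.

def rpi(wp, owp, oowp):
    """wp: winning pct, owp: opponents' winning pct, oowp: opponents' opponents' winning pct."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical inputs for illustration only:
print(round(rpi(0.70, 0.55, 0.52), 4))  # 0.58
```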

The committee will say the RPI is not the "only" or "main" rating they use, but it is by far the most prominent (by a wide, wide, wide margin). Whether or not committee members focus solely on the RPI, there has to be a conscious or unconscious effect of seeing Cincinnati at 93rd over and over (vs. 41, 41, 41).

 

Odd Reasoning

The NCAA's retort for why the RPI is so prominent was two-fold: a) there has to be some underlying statistical framework to present the information in a clear and easy fashion, and b) we can all agree the top RPI teams are "really" the good ones and the bottom ones the bad ones (and the top teams in the RPI are similar to the top ones in other ratings), and since it is only part of the puzzle... so be it.

To counter the first retort: it's true that one unifying system is preferable. Being presented with six team sheets for one team, multiplied by 100 teams, multiplied by the dozens of times each team is looked at, would be unmanageable. So agreed, there should be one set of numbers to organize the information. But the question remains: why the RPI? Or, as some of the media members suggested, why not an aggregate/consensus ranking of the RPI, Sagarin, Pomeroy, etc.? As the Cincinnati example shows (and it is just one of many), there is huge variation among the ratings (especially once you get out of the Top 10), and a consensus would help remove the noise.
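(To make that idea concrete, here is a minimal sketch of what a consensus ranking could look like, assuming the simplest possible approach of averaging each team's rank across systems with equal weight. Only Cincinnati's RPI and Pomeroy ranks come from the example above; the Sagarin rank and the other two teams are hypothetical placeholders.)

```python
# Minimal consensus-ranking sketch: average each team's rank across several
# rating systems with equal weight. Only Cincinnati's RPI (93) and Pomeroy (41)
# ranks come from the article; the Sagarin rank and the other two teams are
# hypothetical placeholders used purely for illustration.

rankings = {
    "Cincinnati": {"RPI": 93, "Pomeroy": 41, "Sagarin": 48},  # Sagarin rank assumed
    "Team A":     {"RPI": 40, "Pomeroy": 46, "Sagarin": 44},  # hypothetical
    "Team B":     {"RPI": 62, "Pomeroy": 71, "Sagarin": 66},  # hypothetical
}

def consensus_rank(ranks):
    """Equal-weight average of a team's rank in each system."""
    return sum(ranks.values()) / len(ranks)

# Sort teams by consensus rank; a single outlier system (like Cincinnati's
# 93rd in the RPI) gets diluted instead of dominating the picture.
for team, ranks in sorted(rankings.items(), key=lambda kv: consensus_rank(kv[1])):
    print(f"{team}: consensus {consensus_rank(ranks):.1f}  {ranks}")
```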

To counter the second retort: That's just ridiculous.

 

It's Not All Bad. Though There's Still More Bad.

I don't want to make it seem like the Selection Process is all bad. It's not. (Though I could write a whole article criticizing the shocking focus on the "eyeball" test and live/TV analysis of teams as opposed to cold, hard numbers. Committee members are encouraged to use subjective feelings about how good teams "look" in their decision-making. There were echoes of "Moneyball" here: the NCAA Selection Committee is unfortunately the Art Howe of "Moneyball," not the Peter Brand/Paul DePodesta. If these guys showed up at a fantasy baseball draft, they'd be the ones picking A.J. Burnett ahead of CC Sabathia because they saw him throw "nasty" stuff a few times.)

 

Okay, Finally, The Good

Anyway, back to the good. The good is that the committee cares. A lot. They spend a ton of time dissecting and discussing the pros and cons of each team. They prepare by watching and attending tons of games (whether this should matter is beside the point; the point is they put in the time). They are proud of being part of the process. They take precautions to remove personal biases from the process. They shuffle and sort ad nauseam. (Literally, I was a bit nauseous following an ill-chosen dinner break of hot wings.)

And they care a ton about getting things right. The only problem is that what they're getting at is ambiguous at "best".

I'll have more to discuss later tonight and tomorrow once the field is finalized. There are fan questions that remain to be answered, more criticisms to cover, and a few more positives to highlight as well.