Commentary

Re-examining the at-large "eye test"

Originally Published: April 15, 2009
By Joe Lunardi | ESPN.com

"When you win the regular-season championship with a 16-2 record in a conference ranked as the eighth-toughest in the country, you win eight of your last nine games, 16 of your last 19 and play the 11th-toughest nonconference schedule in the country, then the NCAA's selection committee has made a tremendous error. I am shocked and extremely hurt for our players and our conference."

Sound familiar? Let's play "Name That Team":

Photo (AP Photo/Nati Harnik): Dana Altman's Creighton team didn't receive an at-large bid despite winning 12 of its final 14 games.
• Creighton, 2009? Nope, the Bluejays won "only" 12 of their last 14 games in the country's ninth-toughest league. They also didn't play a nonconference schedule close to No. 11 in the country. Bye-bye, Creighton.

• San Diego State, 2009? Nope, the Aztecs had the nerve to lose at the buzzer to the nation's No. 9 RPI team in their conference championship game. The best they could manage was a trip to the NIT semifinals. Bye-bye, San Diego State.

• Dayton, 2008? Nope, the Flyers suffered a couple of key injuries down the stretch, negating big wins versus Pitt and at Louisville. Didn't matter that they recovered to win four of their last five. Bye-bye, Dayton.

• Drexel, 2007? Nope, the Dragons couldn't overcome a seven-point loss to VCU (27-6) in the CAA quarterfinals. This despite regular-season wins at NCAA qualifiers Villanova, Old Dominion and Creighton, plus additional road victories at Vermont (25-7), Syracuse (22-10), Saint Joseph's and Temple. Bye-bye, Drexel.

Perhaps you've detected a pattern here. The pattern is nope, nope, nope and nope. As in when "doing everything you reasonably can" isn't enough to get into the NCAA tournament ahead of the big boys.

Funny thing, though. The quote at the top of this column doesn't come from one of the aforementioned aggrieved "little guys," and it isn't very recent. It goes all the way back to the first year of the 64/65-team tournament era.

These comments belong to Gale Catlett, the head coach at West Virginia in 1985. In those days WVU played in the Atlantic 10, long before there was a BCS and before a football designation was supposed to matter in a sport where the ball is round and not oblong.

Can you imagine the outrage today if, as a member of the Big East, West Virginia posted similar credentials and did not receive an at-large bid? I daresay there would be fully armed Mountaineers storming the NCAA palace.

My point, then and now, is simply this: There is no way to really know if there were 34 better teams than West Virginia in the 1985 at-large pool. But we can know -- or at least reasonably assess -- if there were 34 more qualified teams. And this is my only significant difference with the NCAA men's basketball committee, a group whose dedication to "getting it right" remains unsurpassed.

A few years back, a conscious and well-meaning distinction was made between the 34 "best" teams for at-large consideration and the 34 "most deserving." The former took precedence over the latter, whereas for me they had always been one and the same. I disagreed with the distinction at the time, and I am fundamentally opposed to it today.

We play a lengthy season in college basketball, and after all teams play 30-plus games, it becomes fairly clear which teams have achieved -- and thus "earned" -- the most. Sure, you can split hairs here and there over the records of the final few teams under consideration, but in the realm of objective versus subjective evaluation, the errors are fewer, and what remains are the facts.

When said facts are secondary to the so-called "eye test," we venture into the realm of the truly arbitrary. The statement "I think Notre Dame is better than Siena" is little more than an opinion, unless or until it is supported by some kind of evidence. It's not much different from saying "I think Ford is better than Chevy," or "Coke is better than Pepsi."

What's missing is the rest of the sentence. "I think Notre Dame is better than Siena because …." And the answer can't be "Luke Harangody would dominate the MAAC." Notre Dame doesn't play in the MAAC, so what the Irish achieved -- or Siena, or Wisconsin, or Drexel in 2007 -- can only be judged in the context of their opportunity for achievement. Fundamentally, it's the difference between what we actually know instead of what we merely think.

Look at it in comparison to other sports. Does David Stern, at the end of an NBA season, adjust the playoff seedings earned over an 82-game schedule because he thinks Team B is better than Team A, even if Team A won 10 more games? Does Bud Selig award the batting title to the guy with the fifth-best batting average in the league because he thinks that player is better than one who actually had more hits?

The answer to these (and similar) questions is a resounding "Of course not." No reasonable fan base would stand for such arbitrary judgments in the face of legitimate evidence to the contrary. It would probably come across more like figure skating or gymnastics, sports in which unseen judges declare seemingly arbitrary winners, as opposed to the more accepted culture of "keeping score" for all to see.

I'm not suggesting the evidence for NCAA at-large teams is anywhere near as clear-cut as a major league batting average. That doesn't make it impossible to find, however. Like most things, you just have to know where to look. Consider:

• The NCAA maintains that it only uses the RPI as an organizational tool, yet every team data sheet available to the committee is stuffed with RPI breakdowns. Teams are voted into the tournament because of things like top 50 wins (Arizona) and excluded because of an RPI subset like a sub-300 nonconference schedule (Penn State). That sounds like more than organization to me; it sounds like applied evidence.

• Let's apply a little more RPI-based evidence. Take a look at the following 2009 bubble teams based on their RPI, nonconference RPI, conference RPI, road/neutral RPI and record versus the RPI top 100. For good measure, I'm throwing in more quantitative measures from Ken Pomeroy and my own Adjusted Scoring Margin (ASM):

• My across-the-board ranking of these six teams, which in reality were competing for the final two at-large spots in this year's NCAA field, goes like this: (1) Team F; (2) Team A; (3) Team C; (4) Team D; (5) Team B; and (6) Team E. (A sketch of how such an across-the-board ranking can be computed follows this list.) I would argue further that the top three teams in this group (F-A-C) have more reasons to be selected than excluded, while the bottom three (D-B-E) have the opposite characteristics. So I won't argue over which two of the top three get into the field, but I will object if one of the bottom three vaults over them.

• Let's now put team names into the chart and rank them as I would.
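
For readers who want to see the arithmetic behind an "across-the-board" ranking, here is a minimal sketch of one way to combine the metrics above: rank each team within each category, then average those ranks. To be clear, the per-team numbers below are placeholders and the simple rank-averaging rule is only an illustration of the idea, not the committee's method or the actual figures from the chart.

    # A minimal sketch (Python) of combining per-metric ranks into one ordering.
    from statistics import mean

    # Hypothetical per-metric ranks (1 = best) for the six anonymous bubble teams.
    # These values are placeholders chosen to illustrate the mechanics; they are
    # not the actual RPI, Pomeroy or ASM figures from the chart.
    ranks = {
        "Team A": {"rpi": 2, "nonconf_rpi": 3, "road_neutral_rpi": 2, "vs_top100": 2, "pomeroy": 3, "asm": 2},
        "Team B": {"rpi": 5, "nonconf_rpi": 4, "road_neutral_rpi": 5, "vs_top100": 5, "pomeroy": 4, "asm": 5},
        "Team C": {"rpi": 3, "nonconf_rpi": 2, "road_neutral_rpi": 3, "vs_top100": 3, "pomeroy": 2, "asm": 3},
        "Team D": {"rpi": 4, "nonconf_rpi": 5, "road_neutral_rpi": 4, "vs_top100": 4, "pomeroy": 5, "asm": 4},
        "Team E": {"rpi": 6, "nonconf_rpi": 6, "road_neutral_rpi": 6, "vs_top100": 6, "pomeroy": 6, "asm": 6},
        "Team F": {"rpi": 1, "nonconf_rpi": 1, "road_neutral_rpi": 1, "vs_top100": 1, "pomeroy": 1, "asm": 1},
    }

    # Average each team's ranks across all six metrics; a lower average is better.
    overall = sorted(ranks, key=lambda team: mean(ranks[team].values()))

    for spot, team in enumerate(overall, start=1):
        print(f"({spot}) {team}: average rank {mean(ranks[team].values()):.2f}")

With placeholder ranks that track the column's conclusion, this prints the F-A-C-D-B-E order described above; any reasonable aggregation (median rank, weighted average) could be substituted without changing the larger point.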

In reality, we know that Wisconsin (No. 12) and Arizona (No. 12) were the lowest seeds among this year's at-large teams. And the more I've thought about Arizona -- its Sweet 16 appearance notwithstanding -- the more discouraged I am about how the Wildcats were evaluated.

To me, a team can't just look better. It has to be better. It can't just have presumed NBA talent; it has to actually win something. In other words, teams should be required to earn their way into the field as opposed to getting a semi-arbitrary pass. The latter is the college basketball equivalent of a random score from the Russian judge. We can do better than that.

Tomorrow, we'll demonstrate beyond a reasonable doubt why any definition of "best" must logically include the nation's "most deserving" teams. And how a working solution to this dilemma is much less complicated than you might think.

Joe Lunardi is the resident Bracketologist for ESPN, ESPN.com and ESPN Radio. Comments may be sent to bracketology@comcast.net.
