Wednesday, August 6, 2008
Updated: August 7, 6:26 PM ET
Design the ranking system, not the outcomes
It's unfortunate when a player's ascension to No. 1 is accompanied primarily by discussion of what she hasn't achieved.
That's been a common theme for the women this decade, and it's ironic that it's resurfaced during the same time that such a titanic campaign for the top spot has been waged on the men's side.
First there was Martina Hingis, ending 2000 as No. 1 despite not winning a Slam that year. Then came Kim Clijsters, who got to No. 1 in 2003 without yet having won a major. Now there's Jelena Jankovic, who will become No. 1 without even having reached a Grand Slam final in her career so far.
It's taken a perfect storm of circumstances to produce this scenario -- the vacancy created by Justine Henin's sudden retirement on top of the ongoing problem of frequent injuries and sparing play among the tour's most accomplished players. But when a situation like this arises, it also inevitably creates scrutiny of the ranking system which produced it.
On the whole, it's not surprising that it's come to this. Tennis' ranking systems have, in recent years, increasingly emphasized quantity over quality.
The overall number of tournaments players are expected to play has increased while the tour's most accomplished performers have cut back on their schedules, creating shortfalls that play havoc with the numbers. The formula has also moved from taking an average of all of a player's results to counting only her best results, effectively allowing the pros to erase bad losses by playing more events.
Finally, the extra points that were once awarded for defeating top players have been eliminated. Defeat the No. 3 on the way to the quarterfinals, and you get no more reward than if you had defeated No. 300 to get there.
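The shift from averaging all results to counting only the best results changes the incentives in exactly the way described above. The following toy model (my own illustration, not the actual WTA or ATP formula) shows how a player who enters many extra events and loses early is penalized under an averaging scheme but loses nothing under a best-n scheme:

```python
# Toy comparison of two ranking schemes discussed in the article.
# The point values and the cutoff n=17 are illustrative, not the tours' real numbers.

def average_all(points):
    """Old-style scheme: every result counts, so a bad loss drags the average down."""
    return sum(points) / len(points)

def best_n(points, n=17):
    """New-style scheme: only the best n results count; extra events can only help."""
    return sum(sorted(points, reverse=True)[:n])

# Player A: 17 strong results.
# Player B: the same 17 strong results plus 10 early-round losses.
strong = [200] * 17
player_a = strong
player_b = strong + [5] * 10

print(average_all(player_a))  # 200.0
print(average_all(player_b))  # ~127.8 -- the bad losses hurt B's average
print(best_n(player_a))       # 3400
print(best_n(player_b))       # 3400  -- identical: the losses simply vanish
```

Under best-n counting, there is no downside to playing more, which is one way a high-volume schedule like Jankovic's can out-rank a sparing one.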
So it's apt that it's all culminated in the ultimate quantity player of this era: Jankovic. Last year, the Serb played 114 matches (97 in singles and 17 in doubles) -- almost the same number played by Henin and Maria Sharapova combined (118 matches).
The blame for this trend can be laid squarely at the feet of the tours, which have increasingly used the ranking system as a way to influence player schedules, shifting priority away from the rankings' real purpose: measuring player performance.
No one envies the game's administrators the job of herding a group of rich, independent and self-oriented athletes to tournaments around the world -- but ultimately that can't be an excuse for slipping in this element of their most important role: maintaining fairness in competition.
Rankings determine which players get into tournaments and where they're seeded. In that sense, their role in separating No. 104 from No. 105 -- potentially the difference between getting straight into a Grand Slam and having to play qualifying -- matters as much as, or more than, determining who's No. 1 or No. 2.
The ATP has escaped scrutiny on this front recently because the top players have established themselves so convincingly that no system could mess up where they should be ranked. It has also been lucky: the players have largely shown up at the tournaments where they're expected, essentially adapting themselves to the system's requirements.
The WTA has come in for much criticism because of the revolving door of Slamless No. 1s in the past few years, and its inability to get the top players to play the number of events required to maintain a high ranking.
In fact, the two tours' current systems are essentially similar and have evolved in a similar way, though at a somewhat different pace.
They've gotten away with this slow erosion of the rankings' foundation because attention only tends to focus on the issue when there's a glaring discrepancy between who's No. 1 and who people think should be No. 1.
This fickle spotlight often produces counterproductive results. The quick fixes that follow usually harm the overall integrity of the system because they're intended only to manipulate a few numbers at the top, not to deliver a consistent and fair measure across the board. As a result, they tend to produce more quirky outcomes down the road.
Second, the spotlight often produces glib complaints about how difficult it is to immediately grasp how rankings are calculated. (Just try making that same argument about your taxes.) The tours' efforts to produce a sound-bite-friendly formula have made the system less finely tuned and prompted some damaging experiments over the years.
The tours need to prioritize -- and we need to demand -- the use of an effective and fair rankings measure, without first worrying about which names get spit out in which positions.
Design the system, not the outcomes. The results will follow.