I should start with a disclaimer. I am not employed in the world of sports, nor am I officially a statistician. Though my profession often requires me to do much of what a statistician does, that is not my job title. I am more properly labeled a research analyst (I have an engineering background), which involves combing through huge amounts of data for trends and frequently running statistical tests on those numbers. It is an important distinction (I think), and in the interest of full disclosure, something I should mention. I bring it up only because I do not intend to mislead anyone with the blog title "The Sportstatician." I thought it was clever, so I chose it. With all of that stated up front, I'll move on.
Having long been interested in both math and sports, a few years ago I decided to combine the two into what has become my favorite hobby. Using fairly simple mathematics, I developed my own rating system to rank college basketball and football (college and pro) teams. I will not claim to have invented this method, as it is quite simple and has most likely been in use by someone, somewhere for a long time. I know Ken Pomeroy uses a very similar method to rank college basketball teams, and although I arrived at it on my own, I claim neither novelty nor genius. What I have found, though, is that the method is quite powerful at evaluating what has happened and predicting what will (read: should) happen.
So how do I calculate the ratings? Like Pomeroy, my system assigns both an offensive and a defensive rating to each team by adjusting points scored and points allowed to reflect strength of schedule. This is done in two main steps. First, a team's points scored in each game are multiplied by the national average points scored and then divided by that opponent's average points allowed. The same is done for points allowed, except the denominator is the opponent's average points scored. Every team then has adjusted points scored and points allowed for every game. The second step simply repeats the first, but uses the adjusted averages in place of the raw ones. This is almost exactly Pomeroy's approach, except that he also adjusts for possessions, which for basketball makes a huge (and beneficial) difference. As I said earlier, I do not claim this method is new, just that it works.
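For the curious, here is a rough Python sketch of those two passes. The toy schedule, data structure, and function names are placeholders of my own, not the actual spreadsheet I use:

```python
from statistics import mean

# Toy schedule for illustration: team -> list of (opponent, points scored, points allowed)
games = {
    "A": [("B", 35, 21), ("C", 28, 24)],
    "B": [("A", 21, 35), ("C", 31, 17)],
    "C": [("A", 24, 28), ("B", 17, 31)],
}

def raw_averages(games):
    """Unadjusted average points scored and allowed for each team."""
    return {
        team: (mean(pf for _, pf, _ in gs), mean(pa for _, _, pa in gs))
        for team, gs in games.items()
    }

def adjust(games, averages):
    """One adjustment pass: scale each game's points by the national average
    over the opponent's corresponding average, then re-average per team."""
    nat_off = mean(off for off, _ in averages.values())   # national average points scored
    nat_def = mean(dfn for _, dfn in averages.values())   # national average points allowed
    adjusted = {}
    for team, gs in games.items():
        adj_for = [pf * nat_off / averages[opp][1] for opp, pf, _ in gs]      # vs. opponent's defense
        adj_against = [pa * nat_def / averages[opp][0] for opp, _, pa in gs]  # vs. opponent's offense
        adjusted[team] = (mean(adj_for), mean(adj_against))
    return adjusted

# First pass uses the raw averages; second pass repeats with the adjusted averages.
first_pass = adjust(games, raw_averages(games))
second_pass = adjust(games, first_pass)
```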
Now every team has an adjusted average points scored and points allowed. To get each team's rating, I use Bill James' Log5 and Pythagorean formulas along with the 2008 schedule to find the best exponent, which for college football is approximately 2.8. I have labeled this the Relative Rating, due to a lack of imagination. There is a week-adjusted rating (where more recent games count more toward the rating) and a non-week-adjusted rating. The week-adjusted version is better for predictive purposes and for correlating to the BCS (more on that in a minute), but I prefer the non-week-adjusted one because it is less biased.
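In code form, the rating step looks roughly like this. The function names are mine, and the 2.8 exponent is the college football value mentioned above:

```python
# Sketch of the rating step, assuming the adjusted averages from the step above.
EXPONENT = 2.8  # best-fit exponent for college football, per the 2008 schedule

def relative_rating(adj_for, adj_against, exp=EXPONENT):
    """Bill James' Pythagorean expectation applied to adjusted scoring averages."""
    return adj_for ** exp / (adj_for ** exp + adj_against ** exp)

def log5(rating_a, rating_b):
    """Log5: probability that team A beats team B, given their ratings."""
    return (rating_a - rating_a * rating_b) / (rating_a + rating_b - 2 * rating_a * rating_b)

# Example: a team averaging 32 adjusted points for and 20 against
print(relative_rating(32, 20))             # about 0.79
print(log5(relative_rating(32, 20), 0.5))  # against an average (0.500) team, Log5 returns that same ~0.79
```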
In 2008, USC and Florida had the highest Relative Ratings after the regular season and therefore were determined by the system to be the two best teams. I liked this result, because that was how I subjectively felt, even though in the end the voters and computers disagreed.
Ahh, those BCS computers. They don't care at all about points scored and points allowed. I understand the reasoning behind it (evidently the fear is that teams would run up the score even more than they already do; paging Urban Meyer), but wins and losses are absolutes, and absolutes aren't necessarily good indicators of what actually happened on the field. The voters are surely influenced by margin of victory, but the computers are prohibited from using it. Alas, that is the way things are done, so in another stroke of unimagination I developed my SAWP (Schedule-Adjusted Win Percentage) Rating.
The SAWP Rating is also calculated with simple mathematics and the power of Excel's Solver utility. As with the Relative Rating, there are only a few main steps. First, three variables are defined: WP (win points), LP (loss points), and Z (the denominator). WP is the value the computer assigns to a win, LP is the value it assigns to a loss, and Z is the value used in the denominator of the following equations:
Winning Team Points (WTP) = WP / (Z - Opponent Win%)
Losing Team Points (LTP) = LP / (Z - Opponent Win%)
At the beginning, I assign WP = 1, LP = 0, and Z = 1.5, but it really does not matter what they start out at because the computer will solve for the optimal values.
A team's WTP and LTP values are then averaged. That average is compared to the team's actual win percentage using squared error. I also impose the constraints WP + LP = 1 and Z = WP + 1. The computer then minimizes the total squared error by changing the values of WP, LP, and Z. The result is the SAWP Rating; in 2008, Oklahoma and Texas were the top-ranked teams, which is also what the BCS computers concluded.
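Here is a rough sketch of what the Solver is doing, written in Python rather than Excel. The data structures are placeholders, and restricting WP to values between 0 and 1 is my own simplification:

```python
# With the constraints LP = 1 - WP and Z = WP + 1, only WP is actually free,
# so a simple 1-D search stands in for the Solver here.
# `results` maps team -> list of (won, opponent) and `win_pct` maps team -> actual win percentage.

def sawp_error(wp, results, win_pct):
    """Total squared error between SAWP and actual win percentage across all teams."""
    lp, z = 1 - wp, wp + 1
    err = 0.0
    for team, games in results.items():
        pts = [(wp if won else lp) / (z - win_pct[opp]) for won, opp in games]
        sawp = sum(pts) / len(pts)           # average of the WTP/LTP values
        err += (sawp - win_pct[team]) ** 2   # squared error vs. actual win%
    return err

def fit_wp(results, win_pct, steps=10_000):
    """Grid-search WP on (0, 1) to minimize the total squared error."""
    return min((i / steps for i in range(1, steps)),
               key=lambda wp: sawp_error(wp, results, win_pct))
```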
Using the historical results of the final BCS rankings, the computer calculated a composite rating of RR and SAWP to determine which two teams would make the BCS championship game. The formula works for every year of the BCS championship (1998 onward) except 2008, where it felt Florida should have played Texas. Nothing's perfect.
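I won't go into the full details of the composite here, but in its simplest hypothetical form it is just a weighted blend of the two ratings, something like:

```python
# Hypothetical form of the composite: a weighted blend of the two ratings, with the
# weight chosen so the top two composite teams match the historical BCS title-game
# pairs. The 50/50 default below is just a placeholder, not the fitted value.

def composite(rr, sawp, w=0.5):
    """Blend a team's Relative Rating and SAWP Rating."""
    return w * rr + (1 - w) * sawp

def projected_title_game(teams, w=0.5):
    """Return the two highest composite teams; `teams` maps name -> (RR, SAWP)."""
    ranked = sorted(teams, key=lambda t: composite(*teams[t], w), reverse=True)
    return ranked[:2]
```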