Archive for the ‘BCS Series’ Category
For those who have read the first five installments of my BCS Ratings review, you’ll notice one major theme: nobody publishes the full methodology behind their ratings. Many of them are a “black box”: the inputs go in, some magic happens, and the output comes out. Well, the final review is of the Colley Matrix rating system, and Colley publishes his entire methodology. Finally! (more…)
The Massey ratings have been around since December 1995, according to his site. The explanation he provides is actually for his rankings that include scoring margin, not for the version used by the BCS (which is not allowed to use margin of victory).
However, perhaps we can derive some understanding of Massey’s BCS ratings if they are calculated similarly to his other ratings. (more…)
Continuing with my review of BCS computer rating systems, the 4th of the 6 systems in my series is Dr. Peter Wolfe’s ratings.
On his site, Wolfe only gives a brief explanation:
We rate all varsity teams of four year colleges that can be connected by mutual opponents, taking note of game locations….The method we use is called a maximum likelihood estimate. In it, each team i is assigned a rating value πi that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:
πi / (πi + πj)
The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game. The rating values are chosen in such a way that the number P is as large as possible.
The first thing to note is that Wolfe rates all teams from the FBS down through Division III and even the NAIA. He includes every game between any two varsity teams at any level. Other systems, like Sagarin’s, rate only Division I teams; some rate only FBS teams. I am not sure any one approach is more “right” than the others, but it is odd that the BCS allows different systems to rate different sets of teams. (more…)
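Wolfe’s description is the classic Bradley-Terry model, and his maximum-likelihood fit can be sketched with the standard fixed-point iteration. The sketch below is my reconstruction, not Wolfe’s actual code, and the teams and results are invented purely for illustration:

```python
from collections import defaultdict

# Invented results for illustration: (winner, loser) pairs.
games = [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]

teams = sorted({t for g in games for t in g})
wins = defaultdict(int)       # total wins per team
matchups = defaultdict(int)   # games played between each pair
for w, l in games:
    wins[w] += 1
    matchups[frozenset((w, l))] += 1

# Start all ratings equal, then iterate the standard fixed-point update
# for Bradley-Terry models, which climbs toward the ratings maximizing P.
# (Assumes every team has at least one win and one loss; otherwise the
# maximum-likelihood ratings degenerate toward 0 or infinity.)
pi = {t: 1.0 for t in teams}
for _ in range(500):
    pi = {i: wins[i] / sum(matchups[frozenset((i, j))] / (pi[i] + pi[j])
                           for j in teams if j != i)
          for i in teams}
    total = sum(pi.values())
    pi = {t: r / total for t, r in pi.items()}  # normalize for readability

# P, the likelihood of all the results happening as they actually did:
P = 1.0
for w, l in games:
    P *= pi[w] / (pi[w] + pi[l])
```

Note that because only wins and losses enter the likelihood, a one-point win counts exactly the same as a blowout, which is precisely the property the BCS requires.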
In the third installment of my review of the BCS computer rankings, I will take a look at the ratings of Anderson and Hester. For starters, they have a great tagline on their website: “showing which teams have accomplished the most”. For those of you who have been following, you know my stance on how teams should be judged for inclusion in the BCS title game, and this fits perfectly.
Anderson and Hester don’t give many details about their system, but they do highlight four ways in which they believe their ratings to be distinct. Let’s take them one by one. (more…)
Next up in my review of the computer ranking systems in the BCS is Richard Billingsley. He gives a much more detailed explanation of his ratings on his website: read them here. I will pull out pertinent parts of his explanation and comment. Let’s start with his summary.
I guess in a sense, my rankings are not only about who the “best team” is, but also about who is the “most deserving” team.
This is a decent start. As I have touched on before, I believe that postseason play, whether it be the BCS, the NCAA Tournament, or the NFL playoffs, should be a reward for the most deserving teams rather than the “best” teams. However, people get into trouble when they try to satisfy both types of ratings at once: predictive and descriptive. By straddling the line, a rating suffers from trying to do too many things. Focusing on answering just one question provides the best, purest, most useful answer. (more…)
Jeff Sagarin produces some of the most respected ratings, not just for college football but also for the NBA, the NFL, college basketball, and others. His ratings, found here, include both a Predictor and an Elo Chess rating. The Predictor rating includes margin of victory and is intended to, well, you guessed it, predict future games. In other words, it is a measure of the quality of a team. We are concerned with his other rating, the Elo Chess, which is the one used by the BCS. This rating considers only wins and losses, which in Sagarin’s words “makes it very ‘politically correct.’” (more…)
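Sagarin does not publish the exact formula behind Elo Chess, but the classic Elo update it is named for works on wins and losses alone. A minimal sketch, with an arbitrary K-factor of 32 chosen for illustration (not Sagarin’s actual parameter):

```python
def elo_expected(ra, rb):
    # Probability that team A beats team B under the logistic Elo model.
    return 1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))

def elo_update(ra, rb, a_won, k=32.0):
    # Adjust both ratings from a single game using only who won;
    # margin of victory never enters the calculation.
    exp_a = elo_expected(ra, rb)
    score_a = 1.0 if a_won else 0.0
    return ra + k * (score_a - exp_a), rb + k * (exp_a - score_a)

# An upset (the lower-rated team winning) moves ratings more than an
# expected result does.
ra, rb = elo_update(1400.0, 1600.0, a_won=True)
```

Because the update is zero-sum and depends only on the outcome, schedule strength is rewarded automatically: beating a highly rated opponent moves your rating far more than beating a weak one.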
In the lead-up to March Madness, I wrote about determining which teams are the “most deserving” as opposed to which teams are the “best”. I eventually created what I called the Achievement S-Curve (in college basketball, the S-Curve refers to ranking and seeding teams for the tournament), essentially a rating of teams based on what they accomplished on the court.
With the initial BCS rankings released this week, I’d like to do something similar for college football. However, before revealing my rankings, I’ll first go through and discuss each of the six computer rankings in use by the BCS. I’ll point out what they do well and critique what they don’t. Following that, I’ll unveil my own Achievement Rankings. In addition, I’ll look at some other interesting aspects of the BCS system along the way: What’s the best way to make the title game? Who are this year’s best contenders? And, of course, would a playoff be a better way to crown a national champion?
If you have anything you’d be interested in seeing, post it in the comments and I’ll see if I can add it to the list. First up: a review of Jeff Sagarin’s rankings.