Continuing with my review of BCS computer rating systems, the 4th of the 6 systems in my series is Dr. Peter Wolfe’s ratings.
On his site, Wolfe only gives a brief explanation:
We rate all varsity teams of four year colleges that can be connected by mutual opponents, taking note of game locations….The method we use is called a maximum likelihood estimate. In it, each team i is assigned a rating value πi that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:
πi / (πi + πj)
The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game. The rating values are chosen in such a way that the number P is as large as possible.
First thing to note is that Wolfe rates all teams from FBS through Division III and even NAIA. He includes all games between any two varsity teams at any level. Other systems, like Sagarin, only rate Division I teams. Some only rate the FBS teams. I am not sure any one method is more “right” than the others, but it is odd that the BCS allows different systems to rate different sets of teams.
On to the meat of the rating system: Wolfe uses a “maximum likelihood estimate”. But don’t fret, this is just a mathematically fancy way of saying that he is trying to find the ratings for each team that best explain the game results we see. According to his formula, the likelihood of team i beating team j is simply the rating of team i divided by the sum of the two teams’ ratings. For example, let’s take Wolfe’s top two ranked teams: Oklahoma St. and Alabama. Oklahoma St. has a rating of 9.126, while Alabama comes in at 9.038. Thus, Oklahoma St. is 9.126 / (9.126 + 9.038) = 9.126 / 18.164 = 50.2% likely to beat Alabama. And against a weaker team like, say, Utah St. (rating of 3.749), Okie State would be 70.9% likely to win.
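The arithmetic above is easy to check directly, using the ratings quoted from Wolfe's site:

```python
# Checking the win probabilities above with the ratings quoted from
# Wolfe's site: pi_i / (pi_i + pi_j).
okla_st, alabama, utah_st = 9.126, 9.038, 3.749

p_vs_alabama = okla_st / (okla_st + alabama)
p_vs_utah_st = okla_st / (okla_st + utah_st)

print(round(p_vs_alabama, 3))  # 0.502
print(round(p_vs_utah_st, 3))  # 0.709
```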
In short, Wolfe’s ratings use a mathematical procedure to find the values that best explain the game results so far this season. This is a sound way of computing ratings. One thing that remains unclear, however, is how game location is included. Wolfe does note that the system looks at all games “taking note of game locations”, but he does not say how this is done.
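To give a feel for how such ratings can actually be found, here is a minimal sketch using the classic fixed-point update for this kind of model (often called the Bradley-Terry update). Wolfe does not publish his exact algorithm, so this is purely illustrative; the teams and results are hypothetical, chosen so that no team is undefeated or winless:

```python
# A minimal, illustrative sketch of finding maximum-likelihood ratings
# by fixed-point iteration. NOT Wolfe's actual algorithm -- his is
# unpublished. Hypothetical (winner, loser) results; every team has at
# least one win and one loss so the ratings stay finite.
from collections import defaultdict

games = [("A", "B"), ("A", "C"), ("C", "A"), ("B", "C")]

teams = sorted({t for g in games for t in g})
wins = defaultdict(float)    # total wins per team
meetings = defaultdict(int)  # number of games between each pair
for w, l in games:
    wins[w] += 1
    meetings[frozenset((w, l))] += 1

ratings = {t: 1.0 for t in teams}
for _ in range(500):
    new = {}
    for i in teams:
        # Denominator: sum over opponents of n_ij / (pi_i + pi_j).
        denom = sum(n / (ratings[i] + ratings[j])
                    for pair, n in meetings.items() if i in pair
                    for j in pair if j != i)
        new[i] = wins[i] / denom
    # Ratings are only defined up to a constant factor, so fix the scale.
    total = sum(new.values())
    ratings = {t: r * len(teams) / total for t, r in new.items()}
```

Each pass raises the rating of teams that won more than their current rating predicts and lowers the rest, and the loop repeats until the values settle. This is also where an undefeated team causes trouble: with no losses to pull its rating back down, the update would push it up forever.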
One last note. Thanks to Austin Link, who commented on my Anderson & Hester ranking and shared a link to something similar he did last year. There, Austin notes with regard to the Wolfe ratings that “one problem is that under that method all undefeated teams should have infinitely high ratings. Since this doesn’t happen he presumably includes some limiting factor, but it’s likely sort of arbitrary, reducing the mathematical rigorousness.” (This happens because every win makes the likelihood larger as the winner’s rating grows, so for a team with no losses there is nothing to pull its rating back down.) Since undefeated teams are so important in BCS ratings, much more so than in any other rating system I have seen (what other rating system is almost solely concerned with undefeated teams?!), how Wolfe deals with this is actually very important.
Wolfe’s ratings are very solid. The only two questions are (1) how does he deal with game locations? and (2) how does he deal with undefeated teams, since all of them should theoretically have infinite ratings? Overall, Wolfe’s ratings seem to be a worthy addition to the BCS computer ratings, though (as with most of these systems) it would be nice to see the full methodology to allow a more rigorous review of their quality.
My RWP ranking system (Round-robin Win %) is similar to Dr. Wolfe’s in that we both use pairwise comparisons to determine team strengths. The methodology goes back to research papers by Zermelo and, later, Bradley and Terry; both describe essentially the same technique, though Bradley and Terry were apparently unaware of, and did not credit, Zermelo’s work from 25 years earlier.
To get around the problem of undefeated teams (and note it affects winless teams as well), you simply give each team a phantom tie with every other team. It’s a bit of a data fudge, but it gives decent results (while avoiding infinite and zero ratings). Unlike Wolfe, I do not use ties, nor do I use a discrete 0 or 1 to represent a victory. Instead, I use a GOF (splitting the 1 point between the two teams), which gives much better ranking results and in turn is closer to the general overall consensus.
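The phantom-tie fix can be sketched as follows, treating a tie as half a win for each side. The teams and results below are hypothetical; team A is undefeated, so without the phantom ties its maximum-likelihood rating would grow without bound:

```python
# A minimal sketch of the phantom-tie fix (hypothetical teams/results).
# A tie counts as half a win for each side, and every pair of teams
# gets one extra phantom tie. A is undefeated: without this fix its
# rating would diverge to infinity.
teams = ["A", "B", "C"]
games = [("A", "B"), ("A", "C"), ("B", "C")]

# Fractional win counts: fw[(i, j)] = (possibly fractional) wins by i over j.
fw = {(i, j): 0.0 for i in teams for j in teams if i != j}
for w, l in games:
    fw[(w, l)] += 1.0
for i in teams:
    for j in teams:
        if i < j:  # one phantom tie per unordered pair
            fw[(i, j)] += 0.5
            fw[(j, i)] += 0.5

# Same fixed-point iteration as before, on the adjusted win counts.
ratings = {t: 1.0 for t in teams}
for _ in range(500):
    new = {}
    for i in teams:
        w_i = sum(fw[(i, j)] for j in teams if j != i)
        denom = sum((fw[(i, j)] + fw[(j, i)]) / (ratings[i] + ratings[j])
                    for j in teams if j != i)
        new[i] = w_i / denom
    total = sum(new.values())
    ratings = {t: r * len(teams) / total for t, r in new.items()}
```

With the phantom ties in place, the undefeated team still comes out on top, but its rating converges to a finite value instead of running away.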
Very interesting, Pat. By splitting up the victory using a GOF, you are turning your ranking into more of a predictive system, so I think it makes sense that you get better results. To be used by the BCS, though, systems should only use wins and losses, so Wolfe’s implementation satisfies that requirement.
Thanks for shedding light on how the undefeated/winless issue is handled. Sounds like that will move teams toward the average but should leave the rank order intact.
Similar to what Pat wrote above, Wayne Winston comments in his “Mathletics” book that one suggested method of dealing with undefeated teams is to give every team a win and a loss against a fictitious team. That might produce slightly different results than Pat’s method of a tie versus every other team. Winston attributed the suggestion to a research paper by Mease.
Jeff/3, that should have a similar effect to the phantom ties. The Colley matrix system, which I will get to shortly, does the same thing. Essentially what that does is add a prior (an initial guess, basically, for the non-nerds out there) of .500 for each team with a weight of 2 games. So after two games, you are guessing that each team is equal parts a .500 team and whatever their current record is. Again, shouldn’t change the rank order, but should squish all teams back towards .500. Whether or not that matters depends on what you want to do with the ratings.
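In terms of win percentage, the prior described above amounts to adding one phantom win and one phantom loss to each team's record; a quick sketch (numbers hypothetical):

```python
# The ".500 prior with a weight of 2 games" amounts to adding one
# phantom win and one phantom loss to each team's record.
def adjusted_pct(wins, losses):
    games = wins + losses
    return (wins + 1) / (games + 2)

print(adjusted_pct(2, 0))   # 0.75  -- a 2-0 team: half its record, half .500
print(adjusted_pct(0, 0))   # 0.5   -- no games yet: pure prior
print(adjusted_pct(10, 0))  # ~0.917 -- more games, less squish toward .500
```

As noted above, this squishes everyone toward .500 without changing who is ahead of whom, and the squish fades as teams play more games.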