The Achievement S-Curve – 2013 Final

March 18th, 2013 — 9:51pm

Selection Sunday 2013 is in the books. Time to release the final Achievement S-Curve of 2013 and see how it compares to the actual bracket.

The 2013 Achievement S-Curve (click twice to embiggen):

[Image: Achievement S-Curve, 3/18/2013] Continue reading »

2 comments » | College Basketball, descriptive, March Madness, predictive, review, team evaluation

Conference Tournament Predictions – Update 3/14/2013

March 14th, 2013 — 1:17am

As I laid out in my introductory post, I am publishing my conference tournament predictions in order to compare them to other predictions out there. The two that I know of are Ken Pomeroy's Log5 predictions and TeamRankings' predictions.

In that first post, I proposed a sum of squared errors measure to score each system. After talking with multiple people much smarter and more well-versed in this area than me, I settled on using a logarithmic scoring rule. One way to grade each system's predictions would be to apply the log to the "winning" probability of each game (for instance, if the winning team was given a 75% chance to win, the score for that game would be log(.75); if the other team were to win, the score would instead be log(.25)). However, each set of predictions simply gives the probability of each team advancing to each round, so we don't have individual game probabilities. As a replacement, I decided to grade each team based on the predicted odds that they would go exactly as far as they did. Say Team A won their 1st round and quarterfinal games but lost in the semifinals. If the prediction said they had a 75% chance to make the semifinals and a 50% chance to make the final, then the chance that they win the quarters but lose the semis is 75% – 50% = 25%. Thus, the score for Team A is log(.25). This double counts games, but it double counts every game, so there shouldn't be a bias. Continue reading »
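As a concrete illustration of that scoring, here is a minimal sketch in Python (the function and variable names are mine, purely for illustration):

```python
import math

def log_score(p_reached, p_next):
    """Log score for one team: the log of the predicted probability
    that the team went *exactly* as far as it did.

    p_reached: predicted chance of reaching the round where the team
               was eliminated (for the champion, use the title
               probability with p_next = 0).
    p_next:    predicted chance of reaching the following round.
    """
    return math.log(p_reached - p_next)

# Team A: 75% to make the semis, 50% to make the final, and they
# lost in the semis -> log(0.75 - 0.50) = log(0.25), about -1.386
print(log_score(0.75, 0.50))
```

Scores are summed across all teams; the system with the total closest to zero did the best.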

5 comments » | College Basketball, Conference Tournament predictions, predictive, review, simulation, team evaluation

Conference Tournament Predictions – 2013

March 5th, 2013 — 9:24pm

Conference tournaments got under way today with the 1st Round of the Big South and Horizon tournaments. This year I'm going to put my predictions on "paper" and compare them to some other predictions out there, notably Ken Pomeroy's Log5 predictions and TeamRankings' conference tourney predictions. If you know of any other posted predictions out there, let me know.

My predictions, like KenPom's and TeamRankings', give the percentage chance of each team advancing to each round of every conference tournament. To grade each set of predictions, I'll use the sum of squared error for each game winner. For example, let's take Ken Pomeroy's prediction for Charleston Southern in the Big South tournament:

                    Qtrs Semis Final Champ
1S Char Southern     100  77.2  50.5  31.6

So Charleston Southern has a 100% chance of reaching the quarterfinals (they have a bye), a 77% chance of reaching the semifinals, a 51% chance of making the final, and a 32% chance of grabbing the conference's automatic bid. If Charleston Southern were to make the semifinals, for instance, KenPom's prediction would receive (1 – .772)^2 "error points", which comes out to .052. The fewer the error points, the better the predictions did. If, for instance, Longwood reaches the semis, KenPom's ratings would suffer for their 2% prediction of that happening. That would give (1 – .020)^2, or .960 error points. In fact, Longwood did pull off the 1st Round upset and is just one game from reaching the semis.
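Here is a small sketch of that scoring in Python, following the examples above (the names and data layout are illustrative only):

```python
def sse_points(advance_probs, rounds_reached):
    """Squared-error 'points' for one team: per the examples above,
    each round the team actually reaches charges the prediction
    (1 - p)^2, where p is the predicted chance of reaching it.
    Lower totals mean better predictions.

    advance_probs:  predicted chances of reaching [Qtrs, Semis, Final, Champ].
    rounds_reached: how many of those rounds the team actually reached.
    """
    return sum((1.0 - p) ** 2 for p in advance_probs[:rounds_reached])

# Charleston Southern reaching the semis under KenPom's numbers:
# (1 - 1.0)^2 + (1 - 0.772)^2 = 0.052 error points.
print(sse_points([1.0, 0.772, 0.505, 0.316], 2))
```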

It’s finally March and I see no reason why we need to wait for Selection Sunday to fill out some brackets when we have 31 perfectly good conference tournaments to predict. Let the Madness begin.

Comment » | College Basketball, Conference Tournament predictions, predictive, review, team evaluation

An Improved Look at Pre-season Strength of Schedule

August 29th, 2012 — 12:17am

As we gear up for another NFL season kicking off in just over a week, there will be lots of discussion of Super Bowl contenders, playoff predictions, and which teams will improve and which will decline. One of the big and often overlooked factors in these exercises is a team's strength of schedule.

Often, when the schedule is released, you’ll see attempts at determining the most difficult schedules like this one that use the previous season’s records to determine the quality of the opponent for each game. While this is a reasonable starting point, it definitely has its flaws.
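To make the traditional approach concrete, here is a minimal sketch of that calculation (the team names and records are purely illustrative):

```python
def naive_sos(opponents, last_season_wpct):
    """Traditional strength of schedule: average the opponents'
    winning percentage from the *previous* season, one entry per
    game on the schedule.
    """
    return sum(last_season_wpct[opp] for opp in opponents) / len(opponents)

# A hypothetical three-game slate against teams that went
# 12-4, 4-12, and 8-8 last year:
wpct = {"Team A": 0.750, "Team B": 0.250, "Team C": 0.500}
print(naive_sos(["Team A", "Team B", "Team C"], wpct))  # 0.500
```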

What’s wrong with traditional Strength of Schedule measures? Continue reading »

Comment » | Football, predictive, review, team evaluation

What the RPI is and what it is not

February 23rd, 2012 — 9:42pm

Earlier today on CBSsports.com, Matt Norlander wrote an article about the much-maligned RPI. He comes to this conclusion:

If anything else, this chart proves there are far too frequent communication breakdowns with teams across the board, enough so that the RPI goes beyond outlier status and continues to prove what many have known for years: If the RPI was introduced in 2012, it’s hard to reason that it would be adopted as conventional by the NCAA or in mainstream discussion.

Norlander then provides the heart of his argument, a table comparing the RPI to various other basketball ratings: Sagarin (overall), KenPom, LRMC, Massey and BPI. He points out that “Texas, Belmont, Arizona and Southern Miss all have big disparity as well. The largest gaps are UCLA (62 points lower in the RPI) and Colorado State (65 points higher in the RPI).”

The RPI is a rating created to measure what a team has accomplished so far this season, based on its record and its strength of schedule. It is a descriptive rating. LRMC, Massey, BPI, and Sagarin are predictive ratings at their core (though some are even worse: a random combination of descriptive and predictive). Comparing the RPI to these ratings and concluding that it is flawed because it doesn't match is itself a terribly flawed argument. Of course it doesn't match; it is trying to measure a completely different thing. I agree that the RPI is flawed, but not because of this.
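For reference, the basic RPI is just a weighted blend of record and schedule strength (the NCAA also weights home and road results within the winning-percentage term; that adjustment is omitted in this sketch):

```python
def rpi(wp, owp, oowp):
    """Basic RPI: 25% the team's own winning percentage, 50% its
    opponents' winning percentage (computed excluding games against
    the team itself), and 25% its opponents' opponents' winning
    percentage. Note that 75% of the formula is schedule strength.
    """
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# A team that piles up wins against a soft schedule still rates
# modestly, because the schedule terms dominate:
print(rpi(wp=0.90, owp=0.45, oowp=0.50))  # 0.575
```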

Norlander’s article should have been about his preference for selecting and comparing teams based on their true strength instead of their resume, not about the quality of the RPI, which has little to do with this debate. Even if the RPI perfectly did its job (measuring how much to reward teams for their performance on the season), it would have failed the test in this article. Let’s take a deeper look. Continue reading »

Comment » | College Basketball, descriptive, March Madness, predictive, review, team evaluation

BCS Series: Review of Colley ratings

December 3rd, 2011 — 2:59pm

For those that have read the first five installments of my BCS ratings review, you’ll notice one major theme: nobody publishes their full methodology for how they calculate their ratings. Many of them are a “black box”: inputs go in, some magic happens, and outputs come out. Well, the final review is of the Colley Matrix rating system, and Colley publishes his entire methodology. Finally! Continue reading »
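Since the methodology is public, here is a minimal sketch of the Colley system (this is the standard method from Colley's published paper, condensed for illustration; it is not his code):

```python
import numpy as np

def colley_ratings(teams, games):
    """Solve the Colley matrix system C r = b.

    teams: list of team names.
    games: list of (winner, loser) tuples; margins are ignored,
           which is exactly why the BCS can use this system.
    """
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    C = 2.0 * np.eye(n)   # C_ii starts at 2 and grows by 1 per game played
    b = np.ones(n)        # b_i = 1 + (wins_i - losses_i) / 2
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w, w] += 1
        C[l, l] += 1
        C[w, l] -= 1      # off-diagonal: minus the games between i and j
        C[l, w] -= 1
        b[w] += 0.5
        b[l] -= 0.5
    return dict(zip(teams, np.linalg.solve(C, b)))

# A beat B, A beat C, B beat C -> ratings A: 0.7, B: 0.5, C: 0.3
# (Colley ratings always average 0.5.)
print(colley_ratings(["A", "B", "C"], [("A", "B"), ("A", "C"), ("B", "C")]))
```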

4 comments » | BCS Series, College Football, Football, review, team evaluation

BCS Series: Review of Massey ratings

December 3rd, 2011 — 2:22pm

The Massey ratings have been around since December of 1995, according to his site. The explanation he lists is actually for his rankings that include scoring margin, not for the ones used in the BCS (which cannot use scoring margin).

However, perhaps we can derive some understanding of Massey’s BCS ratings if they are calculated similarly to his other ratings. Continue reading »
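Under that assumption, here is a sketch of the least-squares method Massey describes for his margin-based ratings (an illustration of the published method, not his actual implementation):

```python
import numpy as np

def massey_ratings(teams, games):
    """Least-squares Massey ratings from point margins: each game
    contributes the equation r_winner - r_loser = margin, and the
    normal equations are solved with the constraint sum(r) = 0.

    games: list of (winner, loser, margin) tuples.
    """
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    M = np.zeros((n, n))
    p = np.zeros(n)
    for winner, loser, margin in games:
        w, l = idx[winner], idx[loser]
        M[w, w] += 1
        M[l, l] += 1
        M[w, l] -= 1
        M[l, w] -= 1
        p[w] += margin
        p[l] -= margin
    # M is singular, so replace one equation with sum(r) = 0.
    M[-1, :] = 1.0
    p[-1] = 0.0
    return dict(zip(teams, np.linalg.solve(M, p)))

# A beat B by 10, B beat C by 3, A beat C by 7
# -> roughly A: 5.67, B: -2.33, C: -3.33
print(massey_ratings(["A", "B", "C"],
                     [("A", "B", 10), ("B", "C", 3), ("A", "C", 7)]))
```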

1 comment » | BCS Series, College Football, Football, review, team evaluation

BCS Series: Review of Wolfe ratings

October 26th, 2011 — 10:54pm

Continuing with my review of BCS computer rating systems, the 4th of the 6 systems in my series is Dr. Peter Wolfe’s ratings.

On his site, Wolfe only gives a brief explanation:

We rate all varsity teams of four year colleges that can be connected by mutual opponents, taking note of game locations….The method we use is called a maximum likelihood estimate. In it, each team i is assigned a rating value π_i that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:

π_i / (π_i + π_j)

The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game.  The rating values are chosen in such a way that the number P is as large as possible.

First thing to note is that Wolfe rates all teams from FBS through Division III and even NAIA. He includes all games between any two varsity teams at any level. Other systems, like Sagarin, only rate Division I teams. Some only rate the FBS teams. I am not sure any one method is more “right” than the others, but it is odd that the BCS allows different systems to rate different sets of teams. Continue reading »
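The model Wolfe quotes is the classic Bradley-Terry model, and its maximum likelihood estimate can be found with a standard fixed-point iteration. Here is a minimal sketch with no home-field adjustment (Wolfe notes game locations, but his exact fitting procedure isn't published, so this is only an illustration of the quoted method):

```python
def bradley_terry_mle(teams, games, iters=200):
    """Fit Bradley-Terry ratings pi so that P(i beats j) =
    pi[i] / (pi[i] + pi[j]) maximizes the product of the
    probabilities of all observed results, via the standard
    fixed-point (minorization) update.

    games: list of (winner, loser) tuples. The MLE exists when
    every team is connected to every other by chains of wins.
    """
    pi = {t: 1.0 for t in teams}
    wins = {t: 0 for t in teams}
    opponents = {t: [] for t in teams}
    for w, l in games:
        wins[w] += 1
        opponents[w].append(l)
        opponents[l].append(w)
    for _ in range(iters):
        # pi_i <- wins_i / sum over i's games of 1 / (pi_i + pi_opp)
        pi = {t: wins[t] / sum(1.0 / (pi[t] + pi[o]) for o in opponents[t])
              for t in teams}
        total = sum(pi.values())
        pi = {t: v / total for t, v in pi.items()}  # rescale for stability
    return pi

# A beat B twice, B beat C, C beat A: every team has a win and a
# loss, so the MLE exists; A comes out on top.
print(bradley_terry_mle(["A", "B", "C"],
                        [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A")]))
```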

5 comments » | BCS Series, College Football, Football, review, team evaluation

BCS Series: Review of Anderson & Hester ratings

October 20th, 2011 — 10:13pm

In the third installment of my review of the BCS computer rankings, I will take a look at the ratings of Anderson and Hester. For starters, they have a great tagline on their website: “showing which teams have accomplished the most”. For those of you that have been following, you know my stance on how teams should be judged for inclusion in the BCS title game, and this fits perfectly.

Anderson and Hester don’t give many details about their system, but they do highlight four ways in which they believe their ratings to be distinct. Let’s take them one by one. Continue reading »

2 comments » | BCS Series, College Football, Football, review, team evaluation

BCS Series: Review of Billingsley ratings

October 19th, 2011 — 10:52pm

Next up in my review of the computer ranking systems in the BCS is Richard Billingsley. He gives a much more detailed explanation of his ratings on his website: read them here. I will pull out pertinent parts of his explanation and comment. Let’s start with his summary.

 I guess in a sense, my rankings are not only about who the “best team” is, but also about who is the “most deserving” team.

This is a decent start. As I have touched on before, I believe that postseason play–whether it be the BCS, NCAA Tournament, or NFL playoffs–should be a reward for the most deserving as opposed to the “best” teams. However, people get into trouble when they try to satisfy both types of ratings: predictive and descriptive. By straddling the line, ratings suffer from trying to do too many things. Focusing on answering just one question will provide the best, purest, most useful answer. Continue reading »

6 comments » | BCS Series, College Football, Football, review, team evaluation
