The Achievement S-Curve: 1/30/2012

Those following along (those who have not can start here, here, and here) know that the goal of the Achievement S-Curve is to reward teams for what they have accomplished on the court. Wins and losses count. Strength of schedule counts. Scoring margin, the eye test, true team strength…they don’t count.

There are good arguments against selecting and seeding teams based on who is most deserving rather than on who is best. For one, some people simply prefer to select the best teams and watch them go at it in the tournament. For another, while seeding by achievement rewards the top teams with good seeds and, in principle, easier tournament paths, it can inadvertently hurt some of those teams when they draw an opponent that underachieved during the season. Take Washington from last season: the Huskies were a top-10 team by some rankings of the best teams but underachieved and ended up a 7-seed. Because a 2-seed meets the winner of the 7/10 game in the second round while a 3-seed meets the winner of the 6/11 game, a team that earned a 2-seed would actually have been better off as a 3-seed facing an easier 6-seed than being slotted across from the Huskies.
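As a quick aside, that path argument falls straight out of the standard bracket structure. The sketch below is a minimal illustration, assuming the usual 16-seed region pairings and ignoring the First Four; it maps an overall S-curve rank to a seed line and lists the seeds a team can face in the first two rounds. It is bracket arithmetic, not part of the ASC itself.

```python
# Standard placement of seeds within a 16-team region, top to bottom of the bracket.
REGION_ORDER = [1, 16, 8, 9, 5, 12, 4, 13, 6, 11, 3, 14, 7, 10, 2, 15]

def seed_line(s_curve_rank):
    """Seed line for an overall S-curve rank; ranks 1-4 are 1-seeds, 5-8 are
    2-seeds, and so on (the First Four is ignored for simplicity)."""
    return min((s_curve_rank - 1) // 4 + 1, 16)

def round_opponents(seed):
    """Possible opponent seeds in the first and second rounds."""
    idx = REGION_ORDER.index(seed)
    game = idx // 2                   # first-round game number, 0 through 7
    first = REGION_ORDER[idx ^ 1]     # the other seed in that first-round game
    other = game ^ 1                  # the adjacent game feeds the same second-round matchup
    second = REGION_ORDER[2 * other: 2 * other + 2]
    return first, second

print(seed_line(6))          # overall rank 6 -> the 2-seed line
print(round_opponents(2))    # (15, [7, 10]): a 2-seed meets the 7/10 winner
print(round_opponents(3))    # (14, [6, 11]): a 3-seed meets the 6/11 winner
```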

So, this week, I offer two alternative S-Curve systems.


The Achievement S-Curve: 1/23/2012

I will try to update this every week, though I won’t provide nearly as much commentary. For an introduction and explanation, try these three posts.

This week, I’ll get right to it with the chart. As always, the full S-Curve with additional information can be found here.

I’ll also tie in some comments on this week’s ASC to illustrate the main differences between my ratings and ESPN’s Bracketology by Joe Lunardi (and other similar “bracketology” projections).


The Achievement S-Curve: 1/16/2012

Last year, I introduced the Achievement S-Curve. The idea behind it was that teams should be rewarded for their season based on their wins and losses and the strength of their schedule. This stands in opposition to the other camp of tournament evaluation and seeding, which judges teams on who is the “best” regardless of record. I discuss this dichotomy in further detail in this post from last year.

Methodology

The result was my Achievement S-Curve, and I’m bringing it back for a second go-round this year. I explained the methodology in detail last year; in short, teams are credited for their wins and losses, weighted by the strength of the schedule they played.
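To make that idea concrete, here is a minimal sketch of one way to turn “wins and losses, weighted by strength of schedule” into a single number and an ordering. It is an RPI-style illustration with placeholder weights and a made-up game log, not the ASC’s actual formula.

```python
from collections import defaultdict

# Made-up game log of (winner, loser); in practice this would be every
# Division I result to date.
games = [("Kentucky", "Kansas"), ("Kansas", "Baylor"),
         ("Baylor", "Kentucky"), ("Kentucky", "Baylor")]

wins, losses, opponents = defaultdict(int), defaultdict(int), defaultdict(list)
for w, l in games:
    wins[w] += 1
    losses[l] += 1
    opponents[w].append(l)
    opponents[l].append(w)

def win_pct(team):
    played = wins[team] + losses[team]
    return wins[team] / played if played else 0.0

def opp_win_pct(team):
    # Simplification: unlike the official RPI, this does not strip out each
    # opponent's games against `team`.
    opps = opponents[team]
    return sum(win_pct(o) for o in opps) / len(opps) if opps else 0.0

def achievement_score(team):
    # Blend of a team's own record and its opponents' records; the 50/50
    # weights are placeholders, not the ASC's actual weights.
    return 0.5 * win_pct(team) + 0.5 * opp_win_pct(team)

s_curve = sorted(wins.keys() | losses.keys(), key=achievement_score, reverse=True)
print([(t, round(achievement_score(t), 3)) for t in s_curve])
```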


The Colts Decision

As a Colts fan since the Harbaugh days, I remember the last time the Colts held the number 1 pick. The decision then, however, was much different. Indianapolis was definitely drafting and keeping a QB; it was just a matter of which one: Peyton Manning or Ryan Leaf. Bill Polian made the right choice, and the Colts have been rewarded with one of the best sustained runs of excellence in NFL history.

Now, the Polian era has ended and his replacement will decide if the Manning era has ended as well. It’s a much different decision than the one 14 years ago. Let’s lay out the particulars of this Colts decision:

  1. Peyton Manning, arguably the best QB in NFL history, has missed the entire season after his second and third neck surgeries in two years and will be 36 next season.
  2. Manning is due a large bonus before next season, so the Colts have a decision to make this offseason about cutting or keeping him.
  3. The Colts have the #1 pick, and this year’s draft features Andrew Luck, whom many consider the best QB prospect since Peyton Manning himself or John Elway.
  4. The NFL instituted a slotting system for the draft starting last year; Cam Newton, the 2011 top overall pick, signed for less than half of what 2010 #1 pick Sam Bradford did. Cheaper rookie contracts make the #1 pick even more valuable.

As I see it, the Colts have three choices: (1) keep Peyton Manning and trade the pick, (2) draft Andrew Luck and trade or cut Peyton Manning, or (3) keep both Peyton Manning and Andrew Luck. Let’s start with #3.
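Before working through each option, it may help to lay out the shape of the comparison. Every number below is a pure placeholder (none comes from this post or any real projection); the point is only that the decision hinges on a handful of quantities: the probability Manning is healthy, his value if he is, the cost of his bonus, Luck’s surplus value on a slotted rookie deal, and what the pick or Manning would fetch in a trade.

```python
# Every value is a hypothetical placeholder expressed in "wins added per
# season"; the numbers illustrate the structure of the decision, not a projection.
p_healthy = 0.6            # placeholder: chance Manning returns to form
manning_value = 10.0       # placeholder: wins a healthy Manning adds
bonus_cost = 2.0           # placeholder: win-equivalent cost of his bonus
luck_value = 6.0           # placeholder: wins Luck adds on a cheap rookie deal
pick_return = 4.0          # placeholder: win-equivalent haul for trading the #1 pick
manning_return = 3.0       # placeholder: haul for trading Manning instead of cutting him

# Note: "Keep both" ignores the roster/cap squeeze of paying a veteran franchise
# QB while also carrying the #1 pick.
options = {
    "Keep Manning, trade the pick": p_healthy * manning_value - bonus_cost + pick_return,
    "Draft Luck, move Manning":     luck_value + manning_return,
    "Keep both":                    p_healthy * manning_value - bonus_cost + luck_value,
}
for name, value in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} {value:.1f}")
```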


BCS Series: Review of Colley ratings

For those who have read the first five installments of my BCS ratings review, you’ll notice one major theme: none of them publishes a full methodology for how the ratings are calculated. Many of them are a “black box”: inputs go in, some magic happens, and an output comes out. Well, the final review is of the Colley Matrix rating system, and Wes Colley publishes his entire methodology. Finally!
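Since the method is public, here is a compact sketch of the computation at the heart of the Colley Matrix: a small linear system built only from wins, losses, and who played whom (no margins, no preseason opinion). The mini-schedule is made up for illustration; the real system feeds in the full season of results.

```python
import numpy as np

# Made-up mini-schedule of (winner, loser); the real system uses the full season.
games = [("LSU", "Alabama"), ("Alabama", "Arkansas"), ("LSU", "Arkansas"),
         ("Oklahoma State", "Oklahoma"), ("LSU", "Oklahoma")]

teams = sorted({t for g in games for t in g})
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

C = 2.0 * np.eye(n)    # Colley matrix: 2 + games played on the diagonal
b = np.ones(n)         # right-hand side: 1 + (wins - losses) / 2
for w, l in games:
    wi, li = idx[w], idx[l]
    C[wi, wi] += 1
    C[li, li] += 1
    C[wi, li] -= 1     # off-diagonal entries are minus the number of meetings
    C[li, wi] -= 1
    b[wi] += 0.5
    b[li] -= 0.5

ratings = np.linalg.solve(C, b)    # ratings center on 0.5 by construction
for team, r in sorted(zip(teams, ratings), key=lambda x: -x[1]):
    print(f"{team:15s} {r:.3f}")
```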


BCS Series: Review of Massey ratings

The Massey ratings have been around since December of 1995, according to Kenneth Massey’s site. The explanation he provides, however, is for his rankings that include scoring margin, not for the version used in the BCS (which cannot use scoring margin).

However, if his BCS ratings are calculated along the same lines as his other ratings, perhaps we can derive some understanding of how they work.
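For reference, the margin-based ratings he does explain follow the least-squares formulation commonly taught as the “Massey method”: each game says the winner’s rating minus the loser’s rating should roughly equal the margin, and the ratings are the least-squares solution with a “ratings sum to zero” constraint to pin them down. The sketch below uses a toy schedule with made-up margins; the BCS version necessarily works differently, since it cannot use margins.

```python
import numpy as np

# Toy schedule of (winner, loser, margin); results and margins are made up.
games = [("LSU", "Alabama", 3), ("Alabama", "Arkansas", 24),
         ("LSU", "Arkansas", 24), ("Oklahoma State", "Oklahoma", 34),
         ("LSU", "Oklahoma", 13)]

teams = sorted({t for g in games for t in g[:2]})
idx = {t: i for i, t in enumerate(teams)}

# One row per game: +1 for the winner, -1 for the loser; y holds the margin.
X = np.zeros((len(games), len(teams)))
y = np.zeros(len(games))
for k, (w, l, margin) in enumerate(games):
    X[k, idx[w]], X[k, idx[l]] = 1.0, -1.0
    y[k] = margin

# The normal equations are singular (ratings are only defined up to an additive
# constant), so the last equation is replaced with "ratings sum to zero".
A, c = X.T @ X, X.T @ y
A[-1, :] = 1.0
c[-1] = 0.0

ratings = np.linalg.solve(A, c)
for team, r in sorted(zip(teams, ratings), key=lambda x: -x[1]):
    print(f"{team:15s} {r:+.2f}")
```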


BCS Series: Review of Wolfe ratings

Continuing with my review of BCS computer rating systems, the 4th of the 6 systems in my series is Dr. Peter Wolfe’s ratings.

On his site, Wolfe only gives a brief explanation:

We rate all varsity teams of four year colleges that can be connected by mutual opponents, taking note of game locations….The method we use is called a maximum likelihood estimate.  In it, each team i is assigned a rating value πi that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:

 πi / (πi + πj)

The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game.  The rating values are chosen in such a way that the number P is as large as possible.

The first thing to note is that Wolfe rates all teams from FBS through Division III and even the NAIA, and he includes every game between any two varsity teams at any level. Other systems, like Sagarin’s, rate only Division I teams, and some rate only the FBS teams. I am not sure any one approach is more “right” than the others, but it is odd that the BCS allows different systems to rate different sets of teams.
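For what it is worth, the model Wolfe describes is the classic Bradley-Terry model, and its maximum-likelihood ratings can be found with a standard fixed-point iteration. The sketch below uses a made-up mini-schedule, and the fitting procedure is the textbook one, not necessarily the exact routine Wolfe runs.

```python
from collections import defaultdict

# Toy results of (winner, loser); every team here has at least one win and one
# loss, since an unbeaten team's maximum-likelihood rating is unbounded.
games = [("LSU", "Alabama"), ("Alabama", "Arkansas"),
         ("Arkansas", "LSU"), ("LSU", "Arkansas")]

teams = sorted({t for g in games for t in g})
wins = defaultdict(int)
meetings = defaultdict(int)              # games played between each pair of teams
for w, l in games:
    wins[w] += 1
    meetings[frozenset((w, l))] += 1

# Bradley-Terry model: P(i beats j) = pi_i / (pi_i + pi_j).
# Fit by the standard fixed-point iteration for the maximum-likelihood estimate.
pi = {t: 1.0 for t in teams}
for _ in range(1000):
    new_pi = {}
    for i in teams:
        denom = sum(meetings[frozenset((i, j))] / (pi[i] + pi[j])
                    for j in teams if j != i and frozenset((i, j)) in meetings)
        new_pi[i] = wins[i] / denom
    total = sum(new_pi.values())
    pi = {t: v / total for t, v in new_pi.items()}   # normalize for identifiability

for t in sorted(teams, key=pi.get, reverse=True):
    print(f"{t:10s} {pi[t]:.3f}")
```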


BCS Series: Review of Anderson & Hester ratings

In the third installment of my review of the BCS computer rankings, I will take a look at the ratings of Anderson and Hester. For starters, they have a great tagline on their website: “showing which teams have accomplished the most.” For those of you who have been following along, you know my stance on how teams should be judged for inclusion in the BCS title game, and this tagline fits it perfectly.

Anderson and Hester don’t give many details about their system, but they do highlight four ways in which they believe their ratings to be distinct. Let’s take them one by one.


BCS Series: Review of Billingsley ratings

Next up in my review of the computer ranking systems in the BCS is Richard Billingsley. He gives a much more detailed explanation of his ratings on his website; you can read it here. I will pull out the pertinent parts of his explanation and comment on them. Let’s start with his summary.

 I guess in a sense, my rankings are not only about who the “best team” is, but also about who is the “most deserving” team.

This is a decent start. As I have touched on before, I believe that postseason play, whether it be the BCS, the NCAA Tournament, or the NFL playoffs, should be a reward for the most deserving teams as opposed to the “best” teams. However, rating systems get into trouble when they try to be both predictive and descriptive at once. By straddling that line, a rating suffers from trying to do too many things. Focusing on answering just one question yields the best, purest, most useful answer.


BCS Series: Review of Sagarin ratings

Jeff Sagarin produces some of the most respected ratings around, not just for college football but also for the NBA, the NFL, college basketball, and other sports. His ratings, found here, include both a Predictor and an Elo Chess rating. The Predictor rating includes margin of victory and is intended to, well, you guessed it, predict future games; in other words, it is a measure of the quality of a team. We are concerned with his other rating, the Elo Chess, which is the one used by the BCS. It considers only wins and losses, which, in Sagarin’s words, makes it very “politically correct.”
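Sagarin doesn’t publish the exact formula behind the Elo Chess ratings, but the chess machinery it borrows from is simple. The sketch below is a generic Elo update, not Sagarin’s implementation: the 400-point scale, the K-factor of 32, and the 1500 starting rating are standard chess conventions, and details like home-field advantage and running over a full schedule are omitted.

```python
def expected_score(r_a, r_b):
    # Standard Elo expectation for team A against team B on a 400-point scale.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_winner, r_loser, k=32.0):
    # Winner gains and loser drops by the same amount; only the result matters,
    # not the margin of victory.
    delta = k * (1.0 - expected_score(r_winner, r_loser))
    return r_winner + delta, r_loser - delta

# Two hypothetical teams starting from the conventional 1500 rating.
team_a, team_b = 1500.0, 1500.0
team_a, team_b = update(team_a, team_b)   # A beats B
team_b, team_a = update(team_b, team_a)   # B wins the rematch
print(round(team_a, 1), round(team_b, 1))
```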
