March 3rd, 2016 — 11:52pm
Three years ago, I compiled predictions for the conference tournaments from three sources: my own, Ken Pomeroy's, and Team Rankings'. When the dust settled, Team Rankings had narrowly edged out KenPom for the title, while I lagged behind in a distant third.
I didn’t get around to it in 2014 (though perhaps I can find time to go back and gather predictions from that season), but last year I did track things. Unfortunately, I’m just now getting around to posting the results. The outcome was the same, though this time Team Rankings won comfortably over both KenPom and my own predictions. I’ve posted the full spreadsheet on Google Docs, which you can find here. I discuss the scoring system in this post.

Since we are posting advancement odds, we don’t have predictions for each individual matchup; instead, each prediction is essentially a rolled-up version of all possible matchups. To score them, I take the log of each team’s predicted probability of advancing exactly as far as it actually did. For instance, my predictions for Montana in the Big Sky tournament were 81%/61%/43%, meaning an 81% chance of winning the 1st round and advancing to the semifinals, a 61% chance of reaching the final, and a 43% chance of winning the title. Another way of looking at it: Montana had a 19% chance to lose in the 1st round (100% minus the 81% chance to win it), a 20% chance of winning one game and then losing in the semis (81% minus 61%), an 18% chance of winning twice and losing in the final (61% minus 43%), and, of course, the 43% chance to win it all. Those are the probabilities that are scored.
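The arithmetic above can be sketched in a few lines of Python (a minimal sketch; the function names are my own, and the Montana numbers are the ones from the example):

```python
import math

def exit_probabilities(advance):
    """Convert per-round advancement odds into the probability of going
    exactly that far: [lose 1st round, ..., lose final, win the title]."""
    probs = [1.0 - advance[0]]                 # lose in the 1st round
    probs += [advance[i] - advance[i + 1]      # win round i, lose the next one
              for i in range(len(advance) - 1)]
    probs.append(advance[-1])                  # win the whole tournament
    return probs

def log_score(advance, exit_round):
    """Log score for a team that exited at index `exit_round`
    (0 = lost the 1st round, len(advance) = won the title)."""
    return math.log(exit_probabilities(advance)[exit_round])

# Montana's Big Sky odds: win 1st round / reach final / win title
montana = [0.81, 0.61, 0.43]
# exit_probabilities(montana) ≈ [0.19, 0.20, 0.18, 0.43]
```

Since the exit probabilities always sum to 1, this rewards systems that put high probability on what actually happened; the scores are negative, and closer to zero is better.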
This year is under way. If I get around to it, I may post the predictions for each of the three systems, but either way, I’ll be back in a couple weeks with the final results. Good luck to Ken Pomeroy and Team Rankings; I hope to be able to at least climb out of the cellar this year.
Comment » | College Basketball, Conference Tournament predictions, March Madness, predictive, review, team evaluation
February 23rd, 2016 — 1:05am
We’re less than one month from Selection Sunday, which means the burgeoning field often called Bracketology is in full swing. Bracketology has taken on some broader meanings over the years, but it most often refers to predicting the selection and seeding of teams in the NCAA Tournament bracket. ESPN’s Joe Lunardi (aka “Joey Brackets”) has made a name and a living on his projections and there are now so many bracketologists that there is a site called The Bracket Matrix that collects all of them (dozens and dozens), displays them in a matrix, and grades them when the final bracket is released.
As a March Madness lover, I am a fan of most things involving the tournament and endorse almost anything that brings interest and discussion to the event. While predicting the NCAA Tournament field certainly falls into that category–and I myself have dabbled in my version of it–there are some aspects of the current state of Bracketology that range from misguided to downright silly.
Continue reading »
Comment » | College Basketball, descriptive, March Madness, review
March 18th, 2013 — 9:51pm
Selection Sunday 2013 is in the books. Time to release the final Achievement S-Curve of 2013 and see how it compares to the actual bracket.
The 2013 Achievement S-Curve (click twice to embiggen):
Continue reading »
2 comments » | College Basketball, descriptive, March Madness, predictive, review, team evaluation
March 14th, 2013 — 1:17am
As I laid out in my introductory post, I am publishing my conference tournament predictions in order to compare them to other predictions out there. The two that I know of are Ken Pomeroy's Log5 predictions and TeamRankings' predictions.
In that first post, I proposed a sum of squared errors measure to score each system. After talking with multiple people much smarter and better versed in this area than me, I settled on using a logarithmic scoring rule. One way to grade each system’s predictions would be to apply the log to the “winning” probability of each game (for instance, if the winning team was given a 75% chance to win, the score for that game would be log(.75); if the other team were to win, the score would instead be log(.25)). However, each set of predictions simply gives the probability of each team advancing to each round, so we don’t have individual game probabilities. As a replacement, I decided to grade each team based on the predicted odds that it would go exactly as far as it did. Say Team A won their 1st round and quarterfinal games but lost in the semifinals. If the prediction said they had a 75% chance to make it to the semifinals and a 50% chance to win in the semifinals, then the chance that they win the quarters but lose the semis is 75% – 50% = 25%. Thus, the score for Team A is log(.25). This double counts games, but it double counts every game, so there shouldn’t be a bias. Continue reading »
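The Team A calculation works out like this (a minimal sketch; the function name is mine, and the 75%/50% figures are the ones from the example above):

```python
import math

def exact_round_log_score(reach_prob, win_prob):
    """Log score for a team predicted to reach a round with probability
    `reach_prob` and to win it with probability `win_prob`, given that
    the team did reach that round and then lost there."""
    return math.log(reach_prob - win_prob)

# Team A: 75% to make the semifinals, 50% to win them. They lost the semis,
# so they are scored on the 25% chance of exactly that outcome.
team_a = exact_round_log_score(0.75, 0.50)   # log(0.25) ≈ -1.386
```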
5 comments » | College Basketball, Conference Tournament predictions, predictive, review, simulation, team evaluation
March 5th, 2013 — 9:24pm
Conference tournaments got under way today with the 1st Round of the Big South and Horizon tournaments. This year I’m going to put my predictions on “paper” and compare them to some other predictions out there, notably Ken Pomeroy’s Log5 predictions and Team Rankings conference tourney predictions. If you know of any other posted predictions out there, let me know.
My predictions, as well as KenPom's and TeamRankings', give the percentage chance of each team advancing to each round of every conference tournament. To grade each set of predictions, I’ll use the sum of squared error for each game winner. For example, let’s take Ken Pomeroy’s prediction for Charleston Southern in the Big South tournament:
Team              Qtrs   Semis   Final   Champ
1S Char Southern  100    77.2    50.5    31.6
So Charleston Southern has a 100% chance of reaching the quarterfinals (they have a bye), a 77% chance of reaching the semifinals, a 51% chance of making the final, and a 32% chance of grabbing the conference’s automatic bid. If Charleston Southern were to make the semifinals, for instance, KenPom’s prediction would receive (1 – .772)^2 “error points”, which comes out to .052. The fewer the error points, the better the predictions did. If, for instance, Longwood reaches the semis, KenPom’s ratings would suffer for their 2% prediction of that happening. That would give (1 – .020)^2, or .960 error points. In fact, Longwood did pull off the 1st Round upset and is just one game from reaching the semis.
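The error-point arithmetic is simple enough to sketch (the function name is mine; the probabilities are KenPom's figures from above):

```python
def error_points(pred_advance_prob):
    """Squared-error penalty charged when a team actually advances to a
    round it was given `pred_advance_prob` odds of reaching."""
    return (1.0 - pred_advance_prob) ** 2

# Charleston Southern, 77.2% to reach the semis: small penalty if they do.
char_southern = error_points(0.772)   # ≈ 0.052
# Longwood, only 2% to reach the semis: a big penalty if they pull it off.
longwood = error_points(0.020)        # ≈ 0.960
```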
It’s finally March and I see no reason why we need to wait for Selection Sunday to fill out some brackets when we have 31 perfectly good conference tournaments to predict. Let the Madness begin.
Comment » | College Basketball, Conference Tournament predictions, predictive, review, team evaluation
August 29th, 2012 — 12:17am
As we gear up for another NFL season kicking off in just over a week, there will be lots of discussion of Super Bowl contenders and playoff predictions, and of which teams will improve and which will decline. One of the big and often overlooked factors in these exercises is a team’s strength of schedule.
Often, when the schedule is released, you’ll see attempts at determining the most difficult schedules like this one that use the previous season’s records to determine the quality of the opponent for each game. While this is a reasonable starting point, it definitely has its flaws.
What’s wrong with traditional Strength of Schedule measures? Continue reading »
Comment » | Football, predictive, review, team evaluation
February 23rd, 2012 — 9:42pm
Earlier today on CBSsports.com, Matt Norlander wrote an article about the much-maligned RPI. He comes to this conclusion:
If anything else, this chart proves there are far too frequent communication breakdowns with teams across the board, enough so that the RPI goes beyond outlier status and continues to prove what many have known for years: If the RPI was introduced in 2012, it’s hard to reason that it would be adopted as conventional by the NCAA or in mainstream discussion.
Norlander then provides the heart of his argument, a table comparing the RPI to various other basketball ratings: Sagarin (overall), KenPom, LRMC, Massey and BPI. He points out that “Texas, Belmont, Arizona and Southern Miss all have big disparity as well. The largest gaps are UCLA (62 points lower in the RPI) and Colorado State (65 points higher in the RPI).”
The RPI is a rating created to measure what a team has accomplished so far this season, based on its record and its strength of schedule. It is a descriptive rating. LRMC, Massey, BPI, and Sagarin are predictive ratings at their core (though some are even worse: a random combination of descriptive and predictive). Comparing the RPI to these ratings and concluding that it is flawed because it doesn’t match them is itself a terribly flawed argument. Of course it doesn’t match; it is trying to measure a completely different thing. I agree that the RPI is flawed, but not because of this.
Norlander’s article should have been about his preference for selecting and comparing teams based on their true strength instead of their resume, not about the quality of the RPI, which has little to do with this debate. Even if the RPI did its job perfectly (measuring how much to reward teams for their performance on the season), it would have failed the test in this article. Let’s take a deeper look. Continue reading »
Comment » | College Basketball, descriptive, March Madness, predictive, review, team evaluation
December 3rd, 2011 — 2:59pm
For those who have read the first five installments of my BCS Ratings review, you’ll have noticed one major theme: nobody publishes the full methodology for how they calculate their ratings. Many of them are a “black box”: the inputs go in, some magic happens, and the output comes out. Well, the final review is of the Colley Matrix rating system, and Colley publishes his entire methodology. Finally! Continue reading »
4 comments » | BCS Series, College Football, Football, review, team evaluation
December 3rd, 2011 — 2:22pm
The Massey ratings have been around since December of 1995, according to his site. The explanation he lists is actually for his rankings that include scoring margin, not for those used in the BCS (which can’t use scoring margin).
However, perhaps we can derive some understanding of Massey’s BCS ratings if they are calculated similarly to his other ratings. Continue reading »
3 comments » | BCS Series, College Football, Football, review, team evaluation
October 26th, 2011 — 10:54pm
Continuing with my review of BCS computer rating systems, the 4th of the 6 systems in my series is Dr. Peter Wolfe’s ratings.
On his site, Wolfe only gives a brief explanation:
We rate all varsity teams of four year colleges that can be connected by mutual opponents, taking note of game locations….The method we use is called a maximum likelihood estimate. In it, each team i is assigned a rating value πi that is used in predicting the expected result between it and its opponent j, with the likelihood of i beating j given by:
πi / (πi + πj)
The probability P of all the results happening as they actually did is simply the product of multiplying together all the individual probabilities derived from each game. The rating values are chosen in such a way that the number P is as large as possible.
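What Wolfe describes is the standard Bradley-Terry model fit by maximum likelihood. A minimal sketch (the team names and toy schedule are made up, and the fixed-point update is the classic minorization-maximization iteration, not necessarily what Wolfe actually implements):

```python
from collections import defaultdict

def fit_ratings(games, iterations=200):
    """Fit Bradley-Terry ratings pi_i, so that P(i beats j) = pi_i / (pi_i + pi_j),
    by maximizing the likelihood of the observed (winner, loser) results.
    Assumes every team has at least one win (else its rating collapses to 0)."""
    teams = {t for game in games for t in game}
    pi = {t: 1.0 for t in teams}
    wins = defaultdict(int)
    pair_count = defaultdict(int)            # games played between each pair
    for winner, loser in games:
        wins[winner] += 1
        pair_count[frozenset((winner, loser))] += 1
    for _ in range(iterations):
        new = {}
        for i in teams:
            # Classic MM update: pi_i <- wins_i / sum_j( n_ij / (pi_i + pi_j) )
            denom = sum(n / (pi[i] + pi[next(iter(pair - {i}))])
                        for pair, n in pair_count.items() if i in pair)
            new[i] = wins[i] / denom
        total = sum(new.values())
        pi = {t: r / total for t, r in new.items()}   # normalize each pass
    return pi

def win_prob(pi, i, j):
    return pi[i] / (pi[i] + pi[j])

# Toy schedule: A beats B twice, A splits with C, B beats C once.
games = [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")]
```

Wolfe applies this idea to every varsity team connected through mutual opponents (with adjustments for game location); the ratings are the values that make the product of the per-game probabilities as large as possible.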
The first thing to note is that Wolfe rates all teams from FBS through Division III and even the NAIA. He includes all games between any two varsity teams at any level. Other systems, like Sagarin’s, only rate Division I teams, and some only rate the FBS teams. I am not sure any one method is more “right” than the others, but it is odd that the BCS allows different systems to rate different sets of teams. Continue reading »
5 comments » | BCS Series, College Football, Football, review, team evaluation