As described in my introductory post, I am publishing my conference tournament predictions in order to compare them to other predictions out there. The two that I know of are Ken Pomeroy’s Log5 predictions and TeamRankings’ predictions.

In that first post, I proposed a sum-of-squared-errors measure to score each system. After talking with multiple people much smarter and more well-versed in this area than me, I settled on using a logarithmic scoring rule. One way to grade each system’s predictions would be to apply the log to the “winning” probability of each game (for instance, if the winning team was given a 75% chance to win, the score for that game would be log(.75); if the other team were to win, the score would instead be log(.25)). However, each set of predictions simply gives the probability of each team advancing to each round, so we don’t have individual game probabilities. As a replacement, I decided to grade each team based on the predicted odds that they would go exactly as far as they did. Say Team A won their 1st round and quarterfinal games but lost in the semifinals. If the prediction gave them a 75% chance to make it to the semifinals and a 50% chance to win the semifinal, then the chance that they win the quarters but lose the semis is 75% – 50% = 25%. Thus, the score for Team A is log(.25). This double counts games, but it double counts every game, so there shouldn’t be a bias.
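The subtraction above generalizes to any round: the chance a team goes exactly as far as it did is the difference between two consecutive advance-to-round probabilities. A minimal sketch of that scoring (the function name and the list-of-cumulative-odds representation are my own illustration, not the actual spreadsheet the post describes):

```python
import math

def log_score(advance_probs, rounds_won):
    """Log score for one team.

    advance_probs[k] = predicted probability of winning at least k+1 games
                       (cumulative "advance-to-round" odds).
    rounds_won       = number of games the team actually won.
    """
    p = [1.0] + list(advance_probs)  # P(win at least 0 games) = 1
    if rounds_won == len(advance_probs):
        exact = p[rounds_won]        # team won the whole tournament
    else:
        # Chance of winning exactly `rounds_won` games: reach that point,
        # minus the chance of advancing one round further.
        exact = p[rounds_won] - p[rounds_won + 1]
    return math.log(exact)

# Team A from the post: 75% to reach the semis (win two games), 50% to
# win the semifinal; they actually lost in the semis (won two games).
score = log_score([0.95, 0.75, 0.50, 0.30], rounds_won=2)
print(round(score, 3))  # log(0.75 - 0.50) = log(0.25) ≈ -1.386
```

Summing this score over every team in a tournament gives one system's total; a higher (less negative) total means better-calibrated predictions.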

That’s enough boring math; let’s take a look at some preliminary results, through tonight’s (Wednesday) games. The higher the score (i.e., the less negative it is), the better.

Total Logarithmic Score

Ken Pomeroy: -152.53

Predict the Madness (me): -152.65

Team Rankings: -152.98

The numbers appear to be very close, closer than I thought they’d be. We’re about halfway through the conference tournament games, so we’ll see if this holds through the rest of them. One caveat: Liberty, who won the Big South tournament as an ultimate longshot, was given a 0% chance by my system. Unfortunately, I wasn’t thinking, and I used the results of my simulation (1,000 sims) instead of fully calculating out the probabilities. I went back and estimated what I would have given them to win the whole thing and put the number at 0.2%. I should have calculated the probabilities directly for all of my predictions, as the sim just injects noise into them, but hey, you live and you learn. Taking out that game completely actually leapfrogs my system into 1st place, as I lost nearly a full “error” point to each other system on that game alone, by far the biggest single-game difference.

I’ll check back in after this weekend once all tournaments are complete to see how each system did, with maybe a quick update between now and then. Until then, enjoy the games.

Maybe I am missing something, but this scoring system seems to ignore a lot of the predictions, since only one or two round-odds values are used for each team’s score. For example, say a team lost in the first round, and each system saw that initial game as a toss-up. They’d all get the same score, even if one system had given that team a much higher chance to advance deep into the bracket. (Obviously that scenario seems unlikely, but I think it illustrates my point.)

I see what you’re saying, David. I was starting from the point of trying to mimic a scoring system where we had individual game predictions. I think what you’re saying is to grade each team’s probability of reaching each round. So in your scenario, two teams both win their 1st game at the same rate, but Team A is predicted to advance further more often. My system would treat them equally, but you’re saying we should punish the Team A prediction more severely.

I’ll take a look at that tonight and see how it looks.

Yes, exactly.

If you want to have each round be like a “game” you could use conditional odds to advance past each round.

For a team with advance-to odds of:

75% 50% 25% 10%

Their individual round conditional win odds would be:

75%

67% (50/75)

50% (25/50)

40% (10/25)
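The division pattern above is just each cumulative probability over the one before it. A quick sketch of that conversion (the function name is my own; this is only an illustration of the arithmetic in the comment):

```python
def conditional_round_odds(advance_probs):
    """Turn cumulative advance-to-round odds into per-round conditional
    win odds: each round's chance of winning, given the team got there."""
    conds = []
    prev = 1.0
    for p in advance_probs:
        conds.append(p / prev)  # P(win this round | reached this round)
        prev = p
    return conds

# The cumulative odds from the example: 75%, 50%, 25%, 10%
print([round(c, 2) for c in conditional_round_odds([0.75, 0.50, 0.25, 0.10])])
# [0.75, 0.67, 0.5, 0.4]
```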

Hmm, I suppose this doesn’t fix the problem of not penalizing projections of a team to go deep, as you can still only look at the results for rounds that really happened.

Ok, so here’s what I’m thinking. I still like the way I’ve done it, as for each team you can think of the question as: how likely is it that they make it exactly this far? You essentially have 100% to divide up for each team.

The more I think about it, in your scenario, isn’t the chance that Team A advances further already taken into account? Simple example: Teams A and B are both 25% likely to win in Rd1, but Team A is 15% likely (in total) to win in Rd2 while Team B is only 10%. However, both lose in Rd1, and Teams C and D win in Rd2. Team C’s predicted chance is lower because we gave 15% to Team A, while Team D’s predicted chance is higher since we only gave 10% to Team B. Does that make sense?
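The “100% to divide up” point can be checked numerically: the exactly-this-far probabilities derived from any set of advance-to-round odds partition a team’s full probability, so they sum to 1. A sketch, reusing the cumulative odds from the earlier comment (function name is my own illustration):

```python
def exact_exit_probs(advance_probs):
    """Split cumulative advance-to-round odds into the chance of going
    exactly as far as each possible stopping point."""
    p = [1.0] + list(advance_probs)
    exits = [p[i] - p[i + 1] for i in range(len(advance_probs))]
    exits.append(p[-1])  # final entry: wins the whole tournament
    return exits

probs = exact_exit_probs([0.75, 0.50, 0.25, 0.10])
# lose Rd1, lose QF, lose semi, lose final, champion
print([round(x, 2) for x in probs])  # [0.25, 0.25, 0.25, 0.15, 0.1]
```

Since exactly one of those outcomes happens, grading the one that occurred uses the whole distribution implicitly, which is the argument being made above.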