February 23rd, 2012 — 9:42pm
Earlier today on CBSsports.com, Matt Norlander wrote an article about the much-maligned RPI. He comes to this conclusion:
If anything else, this chart proves there are far too frequent communication breakdowns with teams across the board, enough so that the RPI goes beyond outlier status and continues to prove what many have known for years: If the RPI was introduced in 2012, it’s hard to reason that it would be adopted as conventional by the NCAA or in mainstream discussion.
Norlander then provides the heart of his argument, a table comparing the RPI to various other basketball ratings: Sagarin (overall), KenPom, LRMC, Massey and BPI. He points out that “Texas, Belmont, Arizona and Southern Miss all have big disparity as well. The largest gaps are UCLA (62 points lower in the RPI) and Colorado State (65 points higher in the RPI).”
The RPI is a rating created to measure what a team has accomplished so far this season, based on its record and its strength of schedule. It is a descriptive rating. LRMC, Massey, BPI, and Sagarin are predictive ratings at their core (though some are even worse: a random combination of descriptive and predictive). Comparing the RPI to these ratings and concluding that it is flawed because it doesn’t match them is itself a terribly flawed argument. Of course it doesn’t match; it is trying to measure a completely different thing. I agree that the RPI is flawed, but not because of this.
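The descriptive nature of the RPI is easy to see in its classic 25/50/25 weighting. Here is a minimal sketch with all the records invented for illustration (the real NCAA formula also weights home and road results and excludes a team’s own games from its opponents’ records):

```python
def rpi(win_pct, opp_win_pct, opp_opp_win_pct):
    """Rating Percentage Index: 25% own record, 50% opponents'
    winning percentage, 25% opponents' opponents' winning percentage."""
    return 0.25 * win_pct + 0.50 * opp_win_pct + 0.25 * opp_opp_win_pct

# Two teams with identical 0.800 records but different schedules
# (all numbers invented): the tougher slate earns the higher RPI.
weak_schedule = rpi(0.800, 0.450, 0.500)
tough_schedule = rpi(0.800, 0.600, 0.550)
```

Note what the formula never sees: margin of victory, location-adjusted efficiency, anything a predictive system lives on. It only knows who you played and whether you won, which is exactly why it answers a different question than Sagarin or KenPom.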
Norlander’s article should have been about his preference for selecting and comparing teams based on their true strength instead of their resumes, not about the quality of the RPI, which has little to do with that debate. Even if the RPI did its job perfectly (measuring how much to reward teams for their performance on the season), it would have failed the test in this article. Let’s take a deeper look. Continue reading »
Comment » | College Basketball, descriptive, March Madness, predictive, review, team evaluation
February 23rd, 2012 — 7:34pm
This post was a 2-part guest post at TeamRankings.com. Here are Part 1 and Part 2.
With a month left in the season, most of college basketball is focused on who’s in and out of the tournament. Those teams near the cut line are on the Bubble, while teams that are securely in the tournament are Locks with little worry of falling out of the bracket and seemingly little left to gain with their dance cards punched.
Turns out, there’s still plenty to play for, especially at the top. As every fan knows, the NCAA Tournament is seeded from 1 to 16 in four separate regions. The top seeds are rewarded by being placed at locations close to home, protected from a home-crowd disadvantage, and–most importantly–pitted against easier opponents. That last point is even more pronounced than one might expect. Obviously every team wants to move up a seed line, but the importance of climbing each rung of the seeding ladder might surprise. Continue reading »
Comment » | College Basketball, March Madness, predictive, talent distribution, team evaluation
February 21st, 2012 — 9:40pm
The Philadelphia Eagles finished the season 8-8, but outscored their opponents by 68 points, the 5th-best mark in the NFC. Seven of their 8 losses were by one score or less, and they finished the season hot on a 4-game winning streak. Most rankings that try to determine how strong a team truly is had the Eagles as the 4th-, 7th-, or 10th-best team in the entire NFL. The team was filled with talented players like Michael Vick, LeSean McCoy, and Nnamdi Asomugha, among others, and easily passed the “eye” test as a good team capable of beating anyone at its best. In addition, two of the team’s losses came with their star QB sidelined, and a third came when star WR DeSean Jackson was benched. When it came time to select the NFC’s playoff teams, the committee decided that Philadelphia was definitely one of the 6 best teams, leaving out the Atlanta Falcons despite their 10-6 record and seeding the Eagles ahead of the 9-7 Giants, the winners of the Eagles’ division.
I get the feeling that if this were to happen, fans would be outraged. However, this is exactly the type of thing that happens every year in the NCAA Tournament selection process. Continue reading »
Comment » | College Basketball, descriptive, Football, March Madness, team evaluation
February 18th, 2012 — 7:05pm
One cool thing we can do with the rest-of-season simulation is look at the effect that the outcome of a specific game can have. As an example, take today’s headline BracketBusters game between Murray State and St. Mary’s that just finished. Entering today, the Racers had a 92.9% chance to get an at-large bid should they fail to win their conference tournament. With a loss today, that would have dropped to 88.6%, but Murray State was able to pull out the big victory at home and–at least according to the Achievement S-Curve–punch their ticket to the Big Dance.
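A toy version of that conditional calculation can show the mechanics. The win probabilities, record, and bid cutoff below are all invented; this is a simple wins-based threshold, not the ASC’s actual at-large model:

```python
import random

def at_large_prob(won_today, wins_so_far, remaining_win_probs,
                  bid_cutoff, trials=20000, seed=42):
    """Monte Carlo estimate of P(final wins >= bid_cutoff),
    conditioned on the result of today's game."""
    rng = random.Random(seed)
    base = wins_so_far + (1 if won_today else 0)
    hits = 0
    for _ in range(trials):
        # Play out the remaining schedule once.
        wins = base + sum(rng.random() < p for p in remaining_win_probs)
        hits += wins >= bid_cutoff
    return hits / trials

remaining = [0.85, 0.70, 0.90, 0.60, 0.80]   # assumed rest of schedule
p_if_win = at_large_prob(True, 23, remaining, bid_cutoff=26)
p_if_loss = at_large_prob(False, 23, remaining, bid_cutoff=26)
swing = p_if_win - p_if_loss   # how much today's game mattered
```

Running the same simulation twice, once under each outcome of a single game, is what produces numbers like the 92.9% versus 88.6% above: the gap between the two conditional probabilities is the leverage of that one game.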
Comment » | College Basketball, March Madness, Quick Slant, simulation, team evaluation
February 14th, 2012 — 12:08am
Quick update on the Achievement S-Curve.
First, the bracket and the full ASC data here:
The ASC is converging with Bracketology. Besides differences in doling out the automatic bids, just two of Lunardi’s at-large teams were not in my bracket–BYU and Arizona–and they were my very first two teams below the cut line. One of the two spots went to Nevada since I give the WAC auto bid to New Mexico St. The other went to Northwestern, who I have all the way up at a 9-seed. I think the Wildcats are not being given due credit for their tough schedule, which I have ranked 10th toughest. Since I touched on Northwestern last week, I’ll use another Big Ten team that I believe is underseeded as this week’s example: Illinois.
Bracketology has the Illini as a 12-seed, while the ASC sees their resume as worthy of a 6-seed. For comparison’s sake, since I have already picked on Florida in the past, I’ll spare them and go after their in-state rival Florida St. (#11 in the ASC, #6 in Bracketology). I’m going to debut a new tool to help display a team’s schedule. These graphs show a team’s schedule from toughest game to easiest. Green bars show wins, while red bars indicate losses. The gray bars represent the score that game would have produced had the outcome been flipped. This lets us see exactly how a team arrives at its score.
Here are the graphs (no cool name yet, but I should come up with one) for Illinois (on top) and Florida St. on the bottom (try clicking twice to view them larger). Continue reading »
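For a concrete sense of what those green, red, and gray bars encode, here is a hypothetical sketch of the underlying data. The reward function is invented for illustration; it is not the ASC’s real scoring:

```python
def game_values(opp_strength, won):
    """Return (earned, flipped) credit for one game against an
    opponent of the given strength (0 = weakest, 1 = strongest)."""
    win_value = opp_strength               # beating good teams earns more
    loss_value = -(1.0 - opp_strength)     # losing to bad teams costs more
    return (win_value, loss_value) if won else (loss_value, win_value)

# (opponent strength, did we win?) -- all invented
schedule = [(0.40, True), (0.90, True), (0.75, False), (0.20, True)]

# Sort toughest game first, then pair each game's actual value
# (the green or red bar) with its counterfactual (the gray bar).
bars = [game_values(s, w) for s, w in sorted(schedule, reverse=True)]
```

The useful property of plotting the gray counterfactual next to each result is that it makes near-misses visible: a narrow red bar against a tough opponent sits beside a tall gray bar showing how much that one flipped outcome would have been worth.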
Comment » | College Basketball, descriptive, March Madness, predictive, team evaluation
February 10th, 2012 — 12:08am
Around this time of year, there’s lots of talk about who’s in and who’s out and who’s on the bubble. Plenty of chatter about what may or may not get your team into the Big Dance. Tons of discussion of big wins and bad losses.
I’ve spent the past few weeks posting my Achievement S-Curve, an objective, reward-based system for deciding who should be in the tournament if the season ended today. But the season is not going to end today, despite how much Murray State may have wanted it to before they took their first loss, to Tennessee State, tonight. It’s interesting and fun to play committee member and decide the fates of 345 college basketball teams more than a month before the actual brackets are released. But what we should really be interested in is what is going to happen over the rest of the season.
It’s cool to see that overachievers like Murray State and San Diego State have climbed into the top half of the bracket, but if we know they’re likely to come down to earth a little bit, that’s much more insightful. Conversely, underachievers like Alabama or Saint Louis might be on the bubble right now, but if they’re going to work their way off of it and into the bracket, we shouldn’t really care too much.
The Solution: Simulation Continue reading »
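The core idea is simple enough to sketch in a few lines: assign each remaining game a win probability, play the schedule out thousands of times, and look at the distribution of final records. All the probabilities below are invented for illustration:

```python
import random

def simulate_finishes(wins_now, remaining_win_probs, trials=10000, seed=1):
    """Play out the remaining schedule many times;
    return the list of simulated final win totals."""
    rng = random.Random(seed)
    return [wins_now + sum(rng.random() < p for p in remaining_win_probs)
            for _ in range(trials)]

# A hypothetical bubble team: 20 wins now, six games left.
finishes = simulate_finishes(20, [0.60, 0.55, 0.80, 0.45, 0.70, 0.65])
expected_wins = sum(finishes) / len(finishes)
```

The distribution, not just the average, is the payoff: it tells us how often the overachiever actually holds its seed and how often the underachiever plays its way into the field, which is exactly the question the season-ended-today exercise can’t answer.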
Comment » | College Basketball, descriptive, March Madness, predictive, simulation, team evaluation