March 6th, 2013 — 10:16am
Earlier this season, I looked at the teams that could potentially shrink the at-large pool by getting upset in their conference tournament. These potential “Bid Stealers” are generally teams from mid-major conferences in which they are the only viable at-large candidate. When such a team doesn’t win its conference tournament, the automatic bid goes to a team that otherwise would have no chance of going dancing, and the Bid Stealer, by claiming an at-large spot, effectively steals a bid from another at-large candidate.
As we enter Conference Tournament season, it’s time to refresh that look at this year’s potential Bid Stealers. My process for determining auto and at-large bids relies on a simulation of the remainder of the season followed by an application of my Achievement S-Curve to determine NCAA Tournament bids. My Achievement S-Curve (ASC) is based on what I think the criteria for selection should be, and is not trying to mimic the selection committee.
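My actual simulation is more involved, but the conference-tournament piece can be sketched as a small Monte Carlo loop. The four-team bracket shape and the win probabilities below are placeholders for illustration, not my real model:

```python
import random

def simulate_conference_tournament(seeds, p_beat, trials=10000):
    """Estimate each team's title odds in a four-team single-elimination
    bracket (1 vs 4, 2 vs 3), given p_beat(a, b) = P(a beats b).
    A stand-in for the full rest-of-season simulation."""
    titles = dict.fromkeys(seeds, 0)
    for _ in range(trials):
        semi1 = seeds[0] if random.random() < p_beat(seeds[0], seeds[3]) else seeds[3]
        semi2 = seeds[1] if random.random() < p_beat(seeds[1], seeds[2]) else seeds[2]
        champ = semi1 if random.random() < p_beat(semi1, semi2) else semi2
        titles[champ] += 1
    return {team: count / trials for team, count in titles.items()}
```

If the top seed is the league’s lone at-large candidate, one minus its title share approximates the chance that conference steals a bid.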
Here are this year’s potential Bid Stealers: Continue reading »
2 comments » | College Basketball, descriptive, March Madness, simulation, team evaluation
February 22nd, 2013 — 12:24am
It’s time to re-introduce the Achievement S-Curve for the 2013 season. For those of you that are new, I’ll give a quick recap in this post but check out previous posts that go into more detail about the system (try this and this and this for starters).
The Achievement S-Curve is a descriptive rating system that attempts to rate teams based on what they have accomplished. It is a subtle yet important difference from a predictive rating system. While a predictive system attempts to answer the question “who would win if these two teams played today?” a descriptive system answers “who has accomplished the most in the games they’ve already played?”.
An example is probably the best way to demonstrate the difference between the two systems, so let’s take a real-life one. My predictive rating system says that New Mexico is the 33rd best team in the country. That is, there are 32 teams I’d favor over the Lobos, but I’d pick them to beat every other team. Pitt, meanwhile, is the 7th best team. Only six teams in the nation would be favored over the Panthers today. However, New Mexico is 22-4 against the 29th-hardest schedule thus far, while Pitt hasn’t fared as well with a 20-7 record against a very similar schedule (24th-most difficult). It is clear that New Mexico has “achieved” more thus far this season than Pitt has. The Lobos have earned a higher seed than Pitt, despite the fact that Pitt would beat them more often than not. Continue reading »
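To make the “achievement” idea concrete, here is a toy descriptive score that credits each result by the strength of the opponent. This is only an illustration of the principle, not the actual ASC formula:

```python
def achievement_score(results, opp_strength):
    """Toy descriptive rating. Each game is scored by who it came against:
    a win over a strong opponent earns close to a full point, while a loss
    to a weak opponent costs close to a full point. `opp_strength` is a
    0-to-1 rating of each opponent (assumed here for illustration)."""
    score = 0.0
    for won, opponent in results:
        if won:
            score += opp_strength[opponent]        # reward scales with difficulty
        else:
            score -= 1.0 - opp_strength[opponent]  # penalty scales with ease
    return score
```

Under a score like this, a 22-4 team and a 20-7 team facing near-identical schedules separate cleanly, regardless of which one would win a head-to-head game today.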
Comment » | College Basketball, descriptive, March Madness, simulation, team evaluation
February 12th, 2013 — 12:21am
With the Super Bowl behind us, it’s time for me to turn my attention to college hoops for a couple months.
As we approach March, it’s all about teams trying to claw their way into the tournament. As you surely know, there are two ways to get into the dance: win your conference or get a coveted at-large berth.
Most of the time, the winner of the conference’s automatic bid has little bearing on other teams. In the Big Ten, if Indiana doesn’t win the bid, Michigan might. Or Ohio State. Or Michigan State. Regardless, those teams were getting in anyway. Conversely, in the traditional one-bid leagues, like the SWAC, it doesn’t matter who wins. The champion is going dancing and the rest of the conference is going home.
But there are the select few who can really ruin a bubble team’s Selection Sunday: the Bid Stealers. These are teams with a chance to earn an at-large bid, but, unlike in the power conferences where the alternatives for the auto bid are themselves at-large locks, when a Bid Stealer loses its conference tournament, the auto bid goes to a team that otherwise had no chance to make the tournament. Should the Bid Stealer then receive an at-large bid, it is essentially stealing a bid from the at-large pool. (Seth Greenberg, above, is not happy that a bid stealer took a bid from his Virginia Tech Hokies.) Continue reading »
2 comments » | College Basketball, March Madness
March 10th, 2012 — 4:41pm
About half of the automatic bids are still up for grabs this weekend, but the NCAA Tournament picture is starting to take shape. It’s time for one last Achievement S-Curve update. As always, the full ratings can be found here. All data updated through Friday, March 9th. Click to view bigger.
Let’s take a look at some of the biggest discrepancies and see what we can learn. Continue reading »
Comment » | College Basketball, descriptive, March Madness, team evaluation
March 10th, 2012 — 2:38pm
I love the spirit of the blind resume. I hate the execution.
With Selection Sunday just hours away, you will undoubtedly be inundated with blind resumes comparing multiple teams and asked to decide which team is in and which is out, or which team should be seeded higher. I like the sentiment behind these: strip away the name of the team, their history, their media coverage, their conference affiliation and focus solely on what they’ve accomplished this season. The problem is that the blind resumes focus on the wrong information, making the comparisons flawed.
Why the Blind Resumes are Flawed
A typical blind resume looks something like this: Continue reading »
Comment » | College Basketball, descriptive, March Madness, team evaluation
February 23rd, 2012 — 9:42pm
Earlier today on CBSsports.com, Matt Norlander wrote an article about the much-maligned RPI. He comes to this conclusion:
If anything else, this chart proves there are far too frequent communication breakdowns with teams across the board, enough so that the RPI goes beyond outlier status and continues to prove what many have known for years: If the RPI was introduced in 2012, it’s hard to reason that it would be adopted as conventional by the NCAA or in mainstream discussion.
Norlander then provides the heart of his argument, a table comparing the RPI to various other basketball ratings: Sagarin (overall), KenPom, LRMC, Massey and BPI. He points out that “Texas, Belmont, Arizona and Southern Miss all have big disparity as well. The largest gaps are UCLA (62 points lower in the RPI) and Colorado State (65 points higher in the RPI).”
The RPI is a rating created to measure what a team has accomplished so far this season based on its record and its strength of schedule. It is a descriptive rating. LRMC, Massey, BPI, and Sagarin are predictive ratings at their core (though some are worse yet: an arbitrary blend of descriptive and predictive). Comparing the RPI to these ratings and concluding that it is flawed because it doesn’t match them is itself a terribly flawed argument. Of course it doesn’t match; it is trying to measure a completely different thing. I agree that the RPI is flawed, but not because of this.
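For reference, the basic RPI (ignoring the home/road weighting later added to the formula) can be computed directly from game results. A sketch:

```python
def winning_pct(games, team, exclude=None):
    """Winning percentage of `team` from a list of (winner, loser) results,
    optionally ignoring games against `exclude` (the RPI computes opponents'
    winning percentage with games against the rated team removed)."""
    wins = losses = 0
    for winner, loser in games:
        if exclude in (winner, loser):
            continue
        if winner == team:
            wins += 1
        elif loser == team:
            losses += 1
    played = wins + losses
    return wins / played if played else 0.0

def rpi(games, team):
    """Basic RPI: 25% own winning pct, 50% opponents' winning pct,
    25% opponents' opponents' winning pct."""
    def opponents_of(t):
        return [loser if winner == t else winner
                for winner, loser in games if t in (winner, loser)]

    def owp(t):
        opps = opponents_of(t)
        return sum(winning_pct(games, o, exclude=t) for o in opps) / len(opps)

    opps = opponents_of(team)
    oowp = sum(owp(o) for o in opps) / len(opps)
    return 0.25 * winning_pct(games, team) + 0.50 * owp(team) + 0.25 * oowp
```

Note that 75% of the rating is schedule strength, which is exactly why the RPI behaves like a descriptive resume measure rather than a power rating.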
Norlander’s article should have been about his preference for selecting and comparing teams based on their true strength instead of their resumes, not about the quality of the RPI, which has little to do with that debate. Even if the RPI did its job perfectly (measuring how much to reward teams for their performance on the season), it would have failed the test in this article. Let’s take a deeper look. Continue reading »
Comment » | College Basketball, descriptive, March Madness, predictive, review, team evaluation
February 23rd, 2012 — 7:34pm
This post was a 2-part guest post at TeamRankings.com. Here are Part 1 and Part 2.
With a month left in the season, most of college basketball is focused on who’s in and out of the tournament. Teams near the cut line are on the Bubble, while teams securely in the tournament are Locks, with little worry of falling out of the bracket and seemingly little left to gain once their dance cards are punched.
Turns out, there’s still plenty to play for, especially at the top. As every fan knows, the NCAA Tournament is seeded from 1 to 16 in four separate regions. The top seeds are rewarded by being placed at locations close to home, protected from a home-crowd disadvantage, and–most importantly–pitted against easier opponents. That last point is even more pronounced than one might expect. Obviously every team wants to move up a seed line, but the importance of climbing each rung of the seeding ladder might surprise you. Continue reading »
Comment » | College Basketball, March Madness, predictive, talent distribution, team evaluation
February 21st, 2012 — 9:40pm
The Philadelphia Eagles finished the season 8-8, but outscored their opponents by 68 points, the 5th-best mark in the NFC. Seven of their 8 losses were by one score or less, and they finished the season hot, on a 4-game winning streak. Rankings that try to determine how strong a team truly is had the Eagles as high as 4th, 7th, or 10th best in the entire NFL. The roster was filled with talented players like Michael Vick, LeSean McCoy, and Nnamdi Asomugha, among others, and the team easily passed the eye test as one capable of beating anyone at its best. In addition, two of the team’s losses came with their star QB sidelined, and a third came when star WR DeSean Jackson was benched. When it came time to select the NFC’s playoff teams, the committee decided that Philadelphia was definitely one of the 6 best teams, leaving out the Atlanta Falcons despite their 10-6 record and seeding the Eagles ahead of the 9-7 Giants, the winners of the Eagles’ division.
I get the feeling that if this were to happen, fans would be outraged. However, this is exactly the type of thing that happens every year in the NCAA Tournament selection process. Continue reading »
Comment » | College Basketball, descriptive, Football, March Madness, team evaluation
February 18th, 2012 — 7:05pm
One cool thing we can do with the rest-of-season simulation is look at the effect that the outcome of a specific game can have. As an example, take today’s headline BracketBusters game between Murray State and St. Mary’s that just finished. Entering today, the Racers had a 92.9% chance to get an at-large bid should they fail to win their conference tournament. With a loss today, that would have dropped to 88.6%, but Murray State was able to pull out the big victory at home and–at least according to the Achievement S-Curve–punch their ticket to the Big Dance.
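The swing from a single game can be estimated by partitioning simulated seasons on that game’s outcome and comparing at-large rates in each branch. A minimal sketch, where `simulate_season` and the toy probabilities in the usage below stand in for my actual rest-of-season model:

```python
import random

def bid_odds_by_outcome(simulate_season, trials=20000):
    """Run many season simulations and split them by the result of one key
    game. `simulate_season` should return a pair of booleans:
    (won_key_game, earned_at_large_bid)."""
    tally = {True: [0, 0], False: [0, 0]}  # outcome -> [bids earned, sims run]
    for _ in range(trials):
        won, bid = simulate_season()
        tally[won][0] += bid
        tally[won][1] += 1
    return {won: bids / sims for won, (bids, sims) in tally.items() if sims}
```

The gap between the two conditional rates is exactly the kind of win-versus-loss swing quoted above (92.9% versus 88.6% for Murray State).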
Comment » | College Basketball, March Madness, Quick Slant, simulation, team evaluation
February 14th, 2012 — 12:08am
Quick update on the Achievement S-Curve.
First, the bracket and the full ASC data here:
The ASC is converging with Bracketology. Besides differences in doling out the automatic bids, just two of Lunardi’s at-large teams were not in my bracket–BYU and Arizona–and they were my very first two teams below the cut line. One of the two spots went to Nevada since I give the WAC auto bid to New Mexico St. The other went to Northwestern, who I have all the way up at a 9-seed. I think the Wildcats are not being given due credit for their tough schedule, which I have ranked 10th toughest. Since I touched on Northwestern last week, I’ll use another Big Ten team that I believe is underseeded as this week’s example: Illinois.
Bracketology has the Illini as a 12-seed while the ASC sees their resume as worthy of a 6-seed. For comparison’s sake, since I have already picked on Florida in the past, I’ll spare them and go after their in-state rival Florida St. (#11 in the ASC, #6 in Bracketology). I’m going to debut a new tool to help display a team’s schedule. These graphs show a team’s schedule from toughest game to easiest. Green bars show wins while red bars indicate losses. The gray bars represent the opposite score for that game should the outcome have been flipped. This allows us to see exactly how a team is arriving at its score.
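A crude text version of the same idea can be sketched in a few lines; the per-game fields (opponent name, a 0-to-1 difficulty, win/loss) are assumptions for illustration, and the gray “flipped outcome” bars are omitted:

```python
def schedule_bars(games, width=10):
    """Render a text version of the schedule graph: games sorted from
    toughest to easiest, with 'W'/'L' marking the result and bar length
    scaled by opponent difficulty (a 0-to-1 number, assumed here)."""
    lines = []
    for opponent, difficulty, won in sorted(games, key=lambda g: g[1], reverse=True):
        bar = "#" * round(difficulty * width)
        lines.append(f"{'W' if won else 'L'} {bar:<{width}} {opponent}")
    return "\n".join(lines)
```

Reading down such a chart shows at a glance whether a team banked its wins against the hard or the easy end of its schedule.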
Here are the graphs (no cool name yet, but I should come up with one) for Illinois (on top) and Florida St. on the bottom (try clicking twice to view them larger). Continue reading »
Comment » | College Basketball, descriptive, March Madness, predictive, team evaluation