The Blindness of the Blind Resume

I love the spirit of the blind resume. I hate the execution.

With Selection Sunday just hours away, you will undoubtedly be inundated with blind resumes comparing multiple teams and asked to decide which team is in and which is out, or which team should be seeded higher. I like the sentiment behind these: strip away the name of the team, their history, their media coverage, their conference affiliation and focus solely on what they’ve accomplished this season. The problem is that the blind resumes focus on the wrong information, making the comparisons flawed.

Why the Blind Resumes are Flawed

A typical blind resume looks something like this:

                 Team A    Team B
vs. Top 50       1-2       3-8
Last 12 Games    11-1      7-5

This graphic will pop up on the screen while you’re asked to decide between the unnamed teams. After a few seconds, the teams are revealed and you’re shocked that you chose some mid-major team over a major conference power. The problem is, the information provided in the blind resumes is biased. Here’s why:

Problem #1) Showing RPI and SOS together: In the table above, you see that Team A has the better RPI but the worse strength of schedule compared to Team B. Often commentators will treat these equally, saying something along the lines of: “Team A has the better RPI, but Team B has played the tougher schedule.” The problem with this thinking is that SOS is a component of RPI. RPI is a combination of a team’s record and its SOS. Let’s assume that RPI is a perfect composite of record and SOS (it’s not, but it’s also not as flawed as you might think). If that were the case, then the SOS would be completely useless in the comparison above, since it is already factored into RPI. Even if you don’t accept that SOS is perfectly accounted for in RPI, it should be weighted far less than RPI since at least part of SOS is already in there. And even then, you have to decide in which direction the RPI is biased with respect to SOS. Are teams with tough schedules rated too high or too low by RPI? It could very well be that Team B above is actually given too much credit in the RPI for its SOS, in which case you would want to adjust Team B down, not up, for having the better SOS.

So SOS is nearly useless. If anything, you need to at least include W-L as well, but then you might as well leave RPI off and let everyone weight those two things (W-L record and SOS) however they see fit.
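The redundancy is easy to see in the RPI’s own published weights. Here is a minimal sketch using the NCAA’s standard 25/50/25 formula; the example winning percentages are invented for illustration:

```python
def rpi(wp, owp, oowp):
    """NCAA RPI: 25% winning pct (WP), 50% opponents' winning pct (OWP),
    25% opponents' opponents' winning pct (OOWP)."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

def sos(owp, oowp):
    """Strength of schedule as defined for the RPI: 2/3 OWP + 1/3 OOWP."""
    return (2 * owp + oowp) / 3

# Algebraically, RPI = 0.25*WP + 0.75*SOS, so a team's SOS already
# makes up three quarters of its RPI. Example (made-up) inputs:
wp, owp, oowp = 0.80, 0.55, 0.52
assert abs(rpi(wp, owp, oowp) - (0.25 * wp + 0.75 * sos(owp, oowp))) < 1e-12
print(round(rpi(wp, owp, oowp), 3))  # 0.605
```

Since SOS enters the RPI at a fixed 75% weight, listing the two side by side on a resume double-counts the schedule component.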

Problem #2) Record vs. RPI Top 50: Many of the same arguments from SOS hold here as well. Basically, record vs. RPI Top 50, similarly to SOS, is a subset of a team’s RPI and already accounted for. If anything, it should only be used to slightly bump up or down a team’s resume and weighted very little accordingly.

However, there are a couple other problems. First, the difficulty of the game isn’t captured correctly. Mainly this is because there’s no accounting for the location of these games. As I’ve discussed in the past, location is a huge determinant in the difficulty of a game. A home win versus the #40 team is about as impressive as a road win against the #80 team. However, despite those wins being equally impressive, only one counts as a Top 50 win. A smaller consideration that is more a matter of preference is that I prefer to measure a team’s opponents by their true team strength (something like KenPom or Sagarin’s predictor rating) as opposed to a descriptive rating like RPI.
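The location point can be made concrete with a toy adjustment. This is a hypothetical sketch, not the author’s model; the 3.5-point home edge is a commonly cited ballpark figure, and the ratings are on a generic point-spread scale:

```python
HOME_EDGE = 3.5  # points of home-court advantage; an assumed ballpark value

def game_difficulty(opp_rating, venue):
    """Effective difficulty of winning this game, on a point-spread scale
    (higher = harder win), adjusting the opponent's rating for venue."""
    venue_adj = {"home": -HOME_EDGE, "neutral": 0.0, "away": +HOME_EDGE}
    return opp_rating + venue_adj[venue]

# A home win over a stronger opponent can be less impressive than a road
# win over a weaker one (made-up ratings):
assert game_difficulty(12.0, "home") < game_difficulty(8.0, "away")
```

A binary “Top 50 win” column ignores this entirely: both games above count the same, or not at all, depending only on the opponent’s rank.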

Second, and this is a bigger problem: not all Top 50 games are created equal. The difference between a win or loss against #5 (think North Carolina) and against #50 (think BYU or West Virginia) is enormous. Northwestern is often cited for their 1-10 record against the RPI Top 50, but 7 of those losses are against the Top 25, and 5 of those 7 are against the top 10. Miami (FL), by contrast, is 2-10 against the Top 50, but 5 of those losses are to NC State, Ole Miss, or Florida State, and only the two losses to UNC are top 10. So whose Top 50 performance is really better?

In addition, teams just outside the top 50 don’t count at all. ESPN’s Bubble Watch had this comment in NC State’s profile: “Things are looking up, and not just because of Friday’s win. Texas and Miami also managed to sneak into the RPI top 50 as of Friday morning, which means the Wolfpack — who entered the weekend 0-8 against top-50 teams — now have four such wins (including a sweep of Miami) to their credit.” So Texas and Miami move from just outside to just inside the top 50. This should make a minuscule difference, but how much better does 3-8 look than 0-8 on the blind resume?
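The cliff at rank 50 can be sketched by contrasting the binary Top 50 counter with a hypothetical smooth credit that decays with opponent rank; the decay curve and game lists below are made up purely for illustration:

```python
def top50_wins(results):
    """results: list of (opponent_rank, won) tuples."""
    return sum(1 for rank, won in results if won and rank <= 50)

def smooth_credit(results):
    """Credit each win in proportion to opponent quality, so a win over
    rank 49 is worth barely more than a win over rank 51."""
    return sum(1.0 / rank for rank, won in results if won)

before = [(52, True), (52, True), (51, True)]  # opponents just outside the Top 50
after  = [(49, True), (49, True), (48, True)]  # same wins, opponents now just inside

print(top50_wins(before), top50_wins(after))  # 0 3  <- a huge apparent jump
print(round(smooth_credit(before), 3), round(smooth_credit(after), 3))  # 0.058 0.062
```

Under the binary counter, three tiny rank changes turn 0-8 into 3-8; under any smooth measure, the resume barely moves.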

Problem #3) Record in Last 12 Games: The NCAA finally removed this from the official criteria for selecting teams, and rightfully so. Drexel’s 25-1 stretch before their conference championship game loss to VCU has been much discussed. Of course, they went 2-4 before that. Does it really matter when their 6 losses came? I don’t think so, and neither does the committee anymore.

How to fix it

When I see the blind resumes, I almost always simply choose the team with the better RPI. It’s not perfect, but the rest of the information provided is of no added benefit and without any better information given to me, it’s the best decision to make.

I am on record as saying that the selection decision should be very simple. My Achievement Ratings simply determine how many losses we should expect a team to have against its schedule and then compare that to the team’s actual record. To me, that’s all you need: two pieces of information. If you’d like to add in some bonuses for big wins or conference championships, that’s fine as well, as long as we’re consistent about it.
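The idea behind comparing expected losses to actual losses can be sketched as follows. The per-game win probabilities here are invented placeholders, not the actual Achievement Ratings inputs:

```python
def expected_losses(win_probs):
    """win_probs: for each game, the probability that a reference-quality
    team wins it, given the opponent and venue. Expected losses is just
    the sum of the per-game loss probabilities."""
    return sum(1 - p for p in win_probs)

def achievement(win_probs, actual_losses):
    """Positive = fewer losses than expected (over-achieved against the
    schedule); negative = under-achieved."""
    return expected_losses(win_probs) - actual_losses

schedule = [0.9, 0.8, 0.8, 0.6, 0.5, 0.4]  # hypothetical win probabilities
print(round(expected_losses(schedule), 1))      # 2.0 losses expected
print(achievement(schedule, actual_losses=1))   # over-achieved by 1.0
```

Two numbers, expected losses and actual losses, capture both the schedule and the record in one comparison, which is exactly what the blind resume’s pile of partial statistics fails to do.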

For now, I’d be content if they just showed RPI, or if they left RPI off completely and just showed the components of a team’s resume. In the long run, I hope we can shed a little bit of light on these blind resumes because they can be a powerful tool if used correctly.

