Category: descriptive


Grading the Selection Committee’s In-Season Preview

February 11th, 2017 — 11:03pm

Today, the NCAA Selection Committee put out its first-ever in-season preview, releasing the top 16 seeds as if the season were to end today. Let's see how they did.

First, here is their s-curve alongside my own Achievement Rankings, ESPN’s Strength of Record rankings, and ESPN’s current bracketology seeds.

Continue reading »

Comment » | College Basketball, descriptive, March Madness, review, team evaluation

The Silliness of Bracketology

February 23rd, 2016 — 1:05am

We’re less than one month from Selection Sunday, which means the burgeoning field often called Bracketology is in full swing. Bracketology has taken on some broader meanings over the years, but it most often refers to predicting the selection and seeding of teams in the NCAA Tournament bracket. ESPN’s Joe Lunardi (aka “Joey Brackets”) has made a name and a living on his projections and there are now so many bracketologists that there is a site called The Bracket Matrix that collects all of them (dozens and dozens), displays them in a matrix, and grades them when the final bracket is released.

As a March Madness lover, I am a fan of most things involving the tournament and endorse almost anything that brings interest and discussion to the event. While predicting the NCAA Tournament field certainly falls into that category–and I myself have dabbled in my version of it–there are some aspects of the current state of Bracketology that range from misguided to downright silly.

Continue reading »

Comment » | College Basketball, descriptive, March Madness, review

The Achievement S-Curve – 2013 Final

March 18th, 2013 — 9:51pm

Selection Sunday 2013 is in the books. Time to release the final Achievement S-Curve of 2013 and see how it compares to the actual bracket.

The 2013 Achievement S-Curve (click twice to embiggen):

[Achievement S-Curve chart, 3/18/2013]

Continue reading »

2 comments » | College Basketball, descriptive, March Madness, predictive, review, team evaluation

What’s Wrong with the Hawkeyes?

March 7th, 2013 — 12:26am

Amazingly, the Achievement S-Curve matches up well with the traditional Bracketology projections out there, such as the one at ESPN. The only current differences between my ASC and ESPN's Bracketology occur at the very end of the bracket. All of Lunardi's tournament teams are, at worst, among my first six teams out of the bracket, and all of my tournament teams are, at worst, among his first four out. Except one.

All year, the biggest discrepancy between the Achievement S-Curve and traditional s-curves has been Iowa. Until recently, the Hawkeyes weren't even among those considered for the bracket. They have now snuck into consideration, not as part of the First Four Out or the Next Four Out, but as the Ninth Team Out, the last team considered for ESPN's bracket. Now, the Hawkeyes are no perfect team, and what I love about the NCAA Tournament as opposed to the BCS is that there are no real "snubs". If you're not one of the top 34 non-automatic qualifiers, you don't have much of a gripe.

That said, we can still try to pick the 34 most deserving at-large teams, and Iowa certainly appears to be in the heart of that discussion. The Hawkeyes are 19-11 against what I measure as the 10th-toughest schedule in the country. However, teams that appear much more flawed are listed ahead of them. Let's take a look at a few of the issues that are feeding this misperception. Continue reading »

Comment » | College Basketball, descriptive, March Madness, team evaluation

Bid Stealers – 2013 Conference Tournament Edition

March 6th, 2013 — 10:16am

Earlier this season, I looked at the teams that could potentially shrink the at-large pool by getting upset in their conference tournaments. These potential "Bid Stealers" are generally teams from mid-major conferences where they are the only viable at-large candidate. When such a team doesn't win its conference tournament, the automatic bid goes to a team that otherwise would have had no chance of going dancing, thereby stealing a bid from another at-large candidate.

As we enter Conference Tournament season, it’s time to refresh that look at this year’s potential Bid Stealers. My process for determining auto and at-large bids relies on a simulation of the remainder of the season followed by an application of my Achievement S-Curve to determine NCAA Tournament bids. My Achievement S-Curve (ASC) is based on what I think the criteria for selection should be, and is not trying to mimic the selection committee.
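The simulation loop behind that process can be sketched in a few lines. This is a hypothetical illustration only, not the actual model: the `win_prob` function, the bracket structure, and all names are placeholders, and the real system feeds the simulated results into the Achievement S-Curve rather than a simple comparison.

```python
import random

def simulate_bracket(field, win_prob):
    """Play out a single-elimination conference tournament; return the champion.
    `field` is listed in bracket order; `win_prob(a, b)` is P(a beats b)."""
    while len(field) > 1:
        field = [a if random.random() < win_prob(a, b) else b
                 for a, b in zip(field[::2], field[1::2])]
    return field[0]

def bid_steal_rate(field, at_large_lock, win_prob, n_sims=10_000):
    """Fraction of simulations in which someone other than the conference's
    lone at-large candidate grabs the automatic bid -- i.e., a stolen bid."""
    steals = sum(simulate_bracket(list(field), win_prob) != at_large_lock
                 for _ in range(n_sims))
    return steals / n_sims
```

A league where the favorite is only modestly better than the rest will show a high steal rate, which is exactly what makes these mid-major tournaments worth sweating over.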

Here are this year’s potential Bid Stealers: Continue reading »

2 comments » | College Basketball, descriptive, March Madness, simulation, team evaluation

The Achievement S-Curve: 2/21/2013

February 22nd, 2013 — 12:24am

It’s time to re-introduce the Achievement S-Curve for the 2013 season. For those of you who are new, I’ll give a quick recap in this post, but check out previous posts that go into more detail about the system (try this and this and this for starters).

The Achievement S-Curve is a descriptive rating system that attempts to rate teams based on what they have accomplished. It is a subtle yet important difference from a predictive rating system. While a predictive system attempts to answer the question “who would win if these two teams played today?” a descriptive system answers “who has accomplished the most in the games they’ve already played?”.

An example is probably the best way to demonstrate the difference between the two systems, so let’s take a real-life one. My predictive rating system says that New Mexico is the 33rd-best team in the country. That is, there are 32 teams I’d favor over the Lobos, but I’d pick them to beat every other team. Pitt, meanwhile, is the 7th-best team; only six teams in the nation would be favored over the Panthers today. However, New Mexico is 22-4 against the 29th-hardest schedule thus far, while Pitt hasn’t fared as well, with a 20-7 record against a very similar schedule (24th-most difficult). It is clear that New Mexico has “achieved” more thus far this season than Pitt has. The Lobos have earned a higher seed than Pitt, despite the fact that Pitt would beat them more often than not. Continue reading »
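As a toy illustration of the descriptive idea (and only that: the formula below is a made-up placeholder, not the actual Achievement S-Curve math), one could score a résumé by weighting winning percentage by schedule strength:

```python
def achievement_score(wins, losses, sos_rank, n_teams=347):
    """Toy descriptive score: winning percentage scaled by schedule rank.
    Illustrative only -- not the real Achievement S-Curve formula."""
    win_pct = wins / (wins + losses)
    sos_weight = 1 - (sos_rank - 1) / n_teams  # ~1.0 for the hardest schedule
    return win_pct * (0.5 + 0.5 * sos_weight)

# New Mexico: 22-4 vs. the 29th-hardest schedule
# Pitt: 20-7 vs. the 24th-hardest schedule
new_mexico = achievement_score(22, 4, 29)
pitt = achievement_score(20, 7, 24)
```

Even this crude version agrees with the point above: New Mexico's score comes out higher than Pitt's, because the two records differ far more than the two schedules do.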

Comment » | College Basketball, descriptive, March Madness, simulation, team evaluation

Evaluating QBs: Peyton Manning is a Better Playoff Quarterback than Tom Brady

February 3rd, 2013 — 2:27pm

Based on their respective records, this might sound crazy. Brady has three rings, five total Super Bowl appearances, and a record 17 playoff victories. Manning, on the other hand, has a below-.500 playoff record, just one Super Bowl ring, and a record eight “one-and-outs”. How could anybody in their right mind choose the latter over the former? It’s amazing what a little perspective can do. Let’s start from the beginning.

(If you haven’t read the first three parts of this series, they introduce and explain all of the concepts used here: Part I, Part II, and Part III.) Continue reading »

27 comments » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, talent distribution, team evaluation

Evaluating QBs: The Truth Behind QB Records

February 1st, 2013 — 1:08am

*UPDATE: I’ve temporarily changed the blog theme so that the tables in this post will be sortable and searchable.*

With the tedious, boring stuff out of the way (if you missed the boring parts, here are boring part 1 and part 2), it’s time for the payoff. I’ll post some results and comment on some of the more interesting findings.

First, the caveats, the fine print. All games from 2000-2012 are included; regular season is assumed unless otherwise noted. In the last post, we defined the “QB of record” for each game: instead of the starting QB, we’ll use the QB who had the most dropbacks for his team in each game (dropbacks = pass attempts + sacks). Also from the previous posts, we defined different phases of the game, which we’ll measure by Expected Points Added (EPA); despite having my own expected points model, I decided to borrow Brian Burke’s more well-known EP model for this series. Those phases are defense, special teams, and offense; most of the time here we’ll be dividing offense into two parts: QB EPA, which covers plays where the QB is the passer or rusher, and Non-QB EPA, which is all other offensive plays. While part 1 showed that QBs have control over QB EPA but little to no influence over Non-QB EPA, Defensive EPA, or Special Teams EPA, that should not be confused with QBs having complete control over QB EPA. The quarterback heavily influences those plays, but receivers, linemen, running backs, the opposing defense, etc. all have some impact on them as well.
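Here’s a rough sketch of how those definitions might be applied to play-by-play data. The play-record fields (`team`, `unit`, `type`, `player`, `epa`) are hypothetical stand-ins for whatever the real dataset provides, not the actual schema used in this series:

```python
from collections import defaultdict

def qb_of_record(plays, team):
    """The QB with the most dropbacks (pass attempts + sacks) for `team`."""
    dropbacks = defaultdict(int)
    for p in plays:
        if p["team"] == team and p["type"] in ("pass", "sack"):
            dropbacks[p["player"]] += 1
    return max(dropbacks, key=dropbacks.get)

def split_epa(plays, team, qb):
    """Partition one game's EPA into the four phases, from `team`'s view."""
    buckets = {"qb_epa": 0.0, "non_qb_epa": 0.0, "def_epa": 0.0, "st_epa": 0.0}
    for p in plays:
        if p["unit"] == "special":
            buckets["st_epa"] += p["epa"] if p["team"] == team else -p["epa"]
        elif p["team"] == team:  # our offensive snaps
            key = "qb_epa" if p["player"] == qb else "non_qb_epa"
            buckets[key] += p["epa"]
        else:  # opponent's offensive snaps, negated and credited to our defense
            buckets["def_epa"] -= p["epa"]
    return buckets
```

The four buckets sum (with signs flipped appropriately) to the team's overall EPA margin, which is what makes the decomposition useful.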

With the disclaimers out of the way, let’s dive right in. Continue reading »

Comment » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, talent distribution, team evaluation

Evaluating QBs: It’s All About Context

January 31st, 2013 — 12:13am

In part 1 of my Evaluating QBs series, we looked at what makes teams win and which of those things quarterbacks have control over. While wins can be useful to separate quarterbacks, that is only because they are correlated with the underlying factors that explain wins. Once we separate out and control for those factors, QB wins provide no further information.

We have shown that QBs have some control over the plays they are directly involved in but no influence over other facets of the game: defense, special teams, and other offensive plays. Now we can look at how many wins we’d expect each player to have based only on what he controls.

We can get at this in two ways: directly and indirectly. The direct way is to look at how often quarterbacks win based on their own EPA (again, using Brian Burke’s Expected Points from Advanced NFL Stats). The indirect way is to look at how often quarterbacks win based on the EPA of everything else, what I’ll call “support”: the sum of the EPA of the quarterback’s team defense, special teams, and non-QB offense. Continue reading »
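The direct approach amounts to mapping a quarterback's per-game EPA to a win probability and summing. A minimal sketch, assuming a logistic link whose slope `k` is an invented placeholder rather than a fitted coefficient:

```python
import math

def win_prob(epa_margin, k=0.15):
    """Logistic link from an EPA margin to a win probability.
    k is a made-up slope for illustration, not a fitted value."""
    return 1 / (1 + math.exp(-k * epa_margin))

def expected_wins(per_game_epa, k=0.15):
    """Direct method: sum the win probabilities implied by QB EPA alone.
    The indirect ('support') version would instead pass in each game's
    defense + special teams + non-QB offensive EPA."""
    return sum(win_prob(epa, k) for epa in per_game_epa)
```

Comparing the two totals is the point of the exercise: a QB whose actual wins trail his EPA-implied expected wins has likely been let down by his support, and vice versa.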

2 comments » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, talent distribution, team evaluation

Evaluating QBs: Why Not Wins?

January 27th, 2013 — 12:35am

Full disclosure: I’m a Peyton Manning fan. If you can’t get past that, stop reading now. Still there? Good, welcome.

Following the Broncos’ recent loss to the Ravens (and the subsequent Patriots loss), there has been a new wave of the old Manning vs. Brady argument. Clutch vs. choke. Winner vs. can’t-win-the-big-one. Add in another playoff loss for Matt Ryan and a couple of big wins for Joe Flacco, and the debate is raging like never before.

If you’re reading this, you’ve probably at least touched on the subject this January. I have. The debate always seems to deteriorate into emotional arguments filled with snarky retorts and anecdotal “evidence”. The Tuck Rule game is countered with the Helmet Catch. The Flacco Prayer is answered with the Tracy Porter pick-six. And on and on. And on. Every quarterback has been lucky, and every quarterback has been unlucky. Everyone can bring up some argument to support their claim. Without looking at the entire picture, we’ll never reach a valid conclusion. There has to be a better way.

A Clean Slate Continue reading »

Comment » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, predictive, talent distribution, team evaluation
