February 22nd, 2013 — 12:24am
It’s time to re-introduce the Achievement S-Curve for the 2013 season. For those of you who are new, I’ll give a quick recap in this post, but check out previous posts that go into more detail about the system (try this and this and this for starters).
The Achievement S-Curve is a descriptive rating system that attempts to rate teams based on what they have accomplished. That is a subtle yet important distinction from a predictive rating system. While a predictive system attempts to answer the question “who would win if these two teams played today?” a descriptive system answers “who has accomplished the most in the games they’ve already played?”
A real-life example is probably the best way to demonstrate the difference between the two systems. My predictive rating system says that New Mexico is the 33rd-best team in the country. That is, there are 32 teams I’d favor over the Lobos, but I’d pick them to beat every other team. Pitt, meanwhile, is the 7th-best team; only six teams in the nation would be favored over the Panthers today. However, New Mexico is 22-4 against the 29th-hardest schedule thus far, while Pitt hasn’t fared as well, with a 20-7 record against a very similar schedule (24th-most difficult). It is clear that New Mexico has “achieved” more thus far this season than Pitt has. The Lobos have earned a higher seed than Pitt, despite the fact that Pitt would beat them more often than not. Continue reading »
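To make the descriptive idea concrete, here is a toy sketch of an “achievement” score that combines a team’s record with its schedule strength. This is purely illustrative; the actual Achievement S-Curve formula is not given in this excerpt, and the weighting and the team count are my own assumptions.

```python
def achievement_score(wins, losses, sos_rank, n_teams=347):
    """Toy achievement metric: winning percentage scaled by schedule
    difficulty (sos_rank 1 = hardest schedule). Illustrative only;
    this is NOT the actual S-Curve formula."""
    win_pct = wins / (wins + losses)
    # Hardest schedule gets a factor of 1.0, easiest approaches 1/n_teams.
    sos_factor = 1 - (sos_rank - 1) / n_teams
    return win_pct * sos_factor

# Figures from the post: New Mexico 22-4 vs. the 29th-hardest schedule,
# Pitt 20-7 vs. the 24th-hardest.
new_mexico = achievement_score(22, 4, 29)
pitt = achievement_score(20, 7, 24)
assert new_mexico > pitt  # the toy metric agrees: the Lobos have achieved more
```

Even this crude version captures the point: a better record against a comparable schedule means more accomplishment, regardless of who would be favored head-to-head.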
Comment » | College Basketball, descriptive, March Madness, simulation, team evaluation
February 12th, 2013 — 12:21am
With the Super Bowl behind us, it’s time for me to turn my attention to college hoops for a couple months.
As we approach March, it’s all about teams trying to claw their way into the tournament. As you surely know, there are two ways to get into the dance: win your conference or get a coveted at-large berth.
Most of the time, which team claims a conference’s automatic bid has little bearing on other teams. In the Big Ten, if Indiana doesn’t win the bid, Michigan might. Or Ohio State. Or Michigan State. Regardless, those teams were getting in anyway. Conversely, in the traditional one-bid leagues, like the SWAC, it doesn’t matter who wins. The champion is going dancing and the rest of the conference is going home.
But there are a select few who can really ruin a bubble team’s Selection Sunday: the Bid Stealers. These are teams that have a chance to win an at-large bid, but unlike in the power conferences, where the alternatives for the auto bid are themselves at-large locks, when a bid stealer loses, the conference’s auto bid goes to a team that otherwise had no chance to make the tournament. Should the bid stealer then get an at-large bid, it is essentially stealing a bid from the at-large pool. (Seth Greenberg, above, is not happy that a bid-stealer took a bid from his Virginia Tech Hokies.) Continue reading »
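The bid-stealing condition described above can be sketched as a simple predicate. The function name, team names, and set-based representation are my own illustrative assumptions, not anything from the post.

```python
def consumes_extra_bid(at_large_caliber, auto_bid_winner):
    """True when a conference eats an extra bid: its auto-bid winner was
    not at-large caliber, yet the conference has an at-large-caliber team
    that will still claim a spot from the at-large pool."""
    return auto_bid_winner not in at_large_caliber and len(at_large_caliber) > 0

# Power-conference case: the champ was getting in anyway, no bid stolen.
assert not consumes_extra_bid({"Indiana", "Michigan", "Ohio State"}, "Indiana")
# One-bid-league case: nobody merits an at-large, so no bid stolen either.
assert not consumes_extra_bid(set(), "Southern")
# Bid-stealer case: an at-large lock loses its conference tournament to a
# surprise winner, and one at-large spot vanishes for the bubble teams.
assert consumes_extra_bid({"Gonzaga"}, "Surprise Team")
```

The asymmetry is the whole story: only the middle tier of conferences, where exactly one or two teams are at-large caliber, can shrink the pool this way.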
2 comments » | College Basketball, March Madness
February 3rd, 2013 — 2:27pm
Based on their respective records, this might sound crazy. Brady has three rings, five total Super Bowl appearances, and a record 17 playoff victories. Manning, on the other hand, has a below-.500 playoff record, just one Super Bowl ring, and a record eight “one-and-outs.” How could anybody in their right mind choose the latter over the former? It’s amazing what a little perspective can do. Let’s start from the beginning.
(If you haven’t read the first three parts of this series, they introduce and explain all of the concepts used here: Part I, Part II, and Part III.) Continue reading »
28 comments » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, talent distribution, team evaluation
February 1st, 2013 — 1:08am
*UPDATE: I’ve temporarily changed the blog theme so that the tables in this post will be sortable and searchable.*
With the tedious, boring stuff out of the way (if you missed the boring parts, here are boring part 1 and part 2), it’s time for the payoff. I’ll post some results and comment on some of the more interesting findings.
First, the caveats, the fine print. All games from 2000-2012 are included, and regular season is assumed unless otherwise noted. In the last post we defined the “QB of record” for each game: instead of the starting QB, we use the QB who had the most dropbacks for his team in that game (dropbacks = pass attempts + sacks). Also from the previous posts, we defined the different phases of the game, which we measure by Expected Points Added (EPA). Despite having my own expected points model, I decided to borrow Brian Burke’s more well-known EP model for this series. Those phases are defense, special teams, and offense; most of the time here we’ll divide offense into two parts: QB EPA, the plays where the QB is the passer or rusher, and Non-QB EPA, all other offensive plays. Part 1 showed that QBs have control over QB EPA but little to no influence over Non-QB EPA, Defensive EPA, or Special Teams EPA. That should not be confused with QBs having total control over QB EPA: while those plays are heavily influenced by the quarterback, the receivers, linemen, running backs, opposing defense, and so on all have some impact as well.
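The bookkeeping above (QB of record, QB vs. Non-QB EPA) can be sketched in a few lines. The play records and field names below are made up for illustration; the real analysis pulls EPA values from Brian Burke’s expected points model.

```python
from collections import Counter

# Hypothetical play-by-play rows for one team's offensive plays in one game.
# "player" is the passer on pass/sack plays and the ball carrier on rushes.
plays = [
    {"type": "pass", "player": "Brady",  "epa":  0.8},
    {"type": "sack", "player": "Brady",  "epa": -1.2},
    {"type": "rush", "player": "Ridley", "epa":  0.3},
    {"type": "rush", "player": "Brady",  "epa":  0.1},
]

def qb_of_record(plays):
    """The QB with the most dropbacks (pass attempts + sacks) in the game."""
    dropbacks = Counter(p["player"] for p in plays if p["type"] in ("pass", "sack"))
    return dropbacks.most_common(1)[0][0]

def split_offense(plays, qb):
    """Split offensive EPA into QB EPA (QB is passer or rusher) and the rest."""
    qb_epa = sum(p["epa"] for p in plays if p["player"] == qb)
    non_qb_epa = sum(p["epa"] for p in plays if p["player"] != qb)
    return qb_epa, non_qb_epa

qb = qb_of_record(plays)          # "Brady" (2 dropbacks)
qb_epa, non_qb_epa = split_offense(plays, qb)
```

Note that the sack is charged to the QB’s column even though, as the disclaimer says, the line and the opposing defense share responsibility for it; that is exactly why QB EPA shouldn’t be read as the QB’s sole doing.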
With the disclaimers out of the way, let’s dive right in. Continue reading »
Comment » | descriptive, Evaluating QBs Series, Football, offense versus defense, player evaluation, talent distribution, team evaluation