The Four-Step Guide to (Almost) Perfect College Football Predictions

We think we know. We act like we know. But do we really know what makes a team good?

In 2014, Florida State lost four starters from its defensive front seven and regressed dramatically in run defense. Oklahoma State returned just eight total starters and went from the top 20 to barely bowl eligible.

Ohio State went from very good to great in recruiting and did the same on the field. Texas’ recruiting regressed, and the Longhorns obtained just middling success under a new coach.

Alabama was awesome in 2013 and remained awesome in 2014.

We think we know how to figure out who’s going to be good or bad from year to year, who’s going to surge or collapse — and we’re mostly correct. You start with how good a team was last year, then you look at returning starters (and stars), then you look at recruiting, and voila! We love that you’re an Athlon Sports reader, but virtually any preview of any kind is going to take that approximate approach.

But how much of a difference do these factors make? Are we ignoring other key indicators when we look at whether a team will improve or regress? Are we overvaluing the starters who left or those who return? And are we interpreting recruiting rankings the right way?

To begin to answer these questions, we’re going to run some correlations. Remember those from math class? How a correlation of zero means there’s no relationship between variables, but a correlation approaching 1 or minus-1 means the relationship is strong?

Let’s look at the strength of the correlations between a given indicator — returning starters, last year’s output, et cetera — and two numbers: a team’s percentage of points scored in a given year (it’s more detailed and descriptive than a simple win-loss record) and a team’s advanced stats.

We’ll look at percentage of points scored instead of win percentage because it is a more accurate descriptor. Florida State finished both 2013 and 2014 with a 13–0 regular-season record, but the Seminoles entered the 2013 postseason having scored 83.2 percent of the points in their games. In 2014, they had scored only 60.2 percent. One FSU team was demonstrably better than the other despite identical records.
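If you want to compute that percentage yourself, it’s simple arithmetic. Here’s a minimal sketch; the function name and the example totals are mine, purely for illustration:

```python
def points_pct(points_for, points_against):
    """Share of all points in a team's games that the team itself scored."""
    return points_for / (points_for + points_against)

# Illustrative totals only: a team that scored 550 points and allowed 250
# comes out at 0.6875, i.e. 68.75 percent of the points in its games.
# The 83.2 and 60.2 percent Florida State figures above were computed the
# same way from the Seminoles' actual season totals.
print(points_pct(550, 250))
```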

Meanwhile, for advanced stats, we’re going to lean on the work of Football Outsiders (a site for which I have played a role since 2008), namely the F/+ ratings, the official FO college football rating. F/+ compares a team’s per-play and per-drive output to a baseline expectation (based on the opponent) and tells you how far above or below average that team performed. For instance, Ohio State finished 2014 ranked first in the F/+ ratings at plus-69.6 percent, Eastern Michigan finished 128th at minus-65.9 percent, and 87 of 128 teams finished between plus- and minus-30 percent.

F/+ is a healthy, robust, and (most important) opponent-adjusted number, and it is good for these purposes. But you can use your computer rating of choice, and it is likely to tell you a similar story to the one you find here.

This feature from Bill Connelly of SB Nation can be found in ACC, Big 12, Big Ten, Pac-12 and SEC issues of the Athlon Sports college football preview annual.

Recent Performance

It is an undying, somewhat boring truth in college football: How you played last year is the best indicator of how you will play this year. Some teams change, but only so many do, and it is difficult to find a sport as rigid as college football, despite parity measures like the current 85-man scholarship limit.

Correlation between your F/+ rating from last year and your F/+ rating from this year: 0.742. Correlation between last year’s percentage of points scored and this year’s: 0.466.

In a given season, about 54 percent of FBS teams’ F/+ ratings are within 15 percent of what they were the year before. Things change, and things stay the same.
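For readers who want to reproduce these kinds of numbers, here is a rough sketch in Python, assuming a table with one row per team-season; the file and column names are mine, not an actual dataset:

```python
import pandas as pd

# Assumed layout: one row per team-season, with F/+ in percentage points
# (e.g. 69.6 for plus-69.6 percent). File and column names are illustrative.
df = pd.read_csv("fplus_by_season.csv")   # columns: team, year, fplus, points_pct
df = df.sort_values(["team", "year"])

# Pair each season with the same team's previous season.
df["fplus_prev"] = df.groupby("team")["fplus"].shift(1)
df["points_pct_prev"] = df.groupby("team")["points_pct"].shift(1)
df = df.dropna(subset=["fplus_prev", "points_pct_prev"])

print(df["fplus_prev"].corr(df["fplus"]))            # reported above as 0.742
print(df["points_pct_prev"].corr(df["points_pct"]))  # reported above as 0.466

# Share of teams within 15 points of last year's F/+, interpreting the
# "about 54 percent" claim above as 15 points of the rating.
print(((df["fplus"] - df["fplus_prev"]).abs() <= 15).mean())
```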

But in some cases, using just last year’s data can give us a blurry picture if a team suffered from injuries, suspensions, drastic turnover or any other maladies that affect teams. If we use a weighted five-year history, in which seasons from two to five years ago are given about eight to 10 percent weight each, we can raise the above F/+ correlation to about 0.747. That’s not much of an improvement, but it’s something.
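As a sketch, that weighting might look something like the function below. The exact weights aren’t spelled out here, so these are an assumption: roughly eight to 10 percent each for the seasons two through five years back, with the remainder on last season.

```python
import numpy as np

# Assumed weights, most recent season first; not the exact FO weighting.
WEIGHTS = np.array([0.64, 0.10, 0.10, 0.08, 0.08])

def weighted_history(fplus_last_five):
    """fplus_last_five: up to five F/+ ratings, most recent season first."""
    ratings = np.array(fplus_last_five, dtype=float)
    w = WEIGHTS[: len(ratings)]
    return float((ratings * w).sum() / w.sum())   # renormalize for short histories

# A team coming off a spike gets pulled slightly back toward its history.
print(weighted_history([20.0, 5.0, -3.0, 10.0, 2.0]))   # about 13.96
```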

So what does this mean for 2015? The chart below shows last year’s top 15 teams according to F/+. A good portion of them will be in or near the top 15 again this fall.

A good system of opponent-adjusted ratings can start the conversation in the right place. When thinking about how good a team was or wasn’t the year before — the starting point of any sort of projection or prediction — something like this gives you a clearer picture than “they went 11–2.”

One thing to keep in mind regarding advanced stats: Wins and losses don’t mean a lot. The numbers are designed to look at every non-garbage time play and drive and project how teams may have performed over a much longer period of time, not just 12 games. Yes, Ole Miss finished ahead of TCU; that’s because Ole Miss was much better than TCU for the first two months of the year before fading rather dramatically.

Returning Starters

You work with the tools you’ve got. Most of us understand that boiling an offense’s or defense’s roster turnover down to a number between 0 and 11 is an over-simplification. The quality of the backups matters, and besides, if two players each started six games at a given position and one was a senior, is the other a “returning starter”?

These are flaws, but in a “don’t let perfect be the enemy of good” kind of way: in the absence of perfect tools, we use decent, readily available ones. If it were possible to standardize a higher level of data — percentage of rushing yards returning, percentage of career starts on the offensive line, etc. — that would be fantastic, but even that would tell us only so much about quality. We could also apply extra weight to the quarterback position, or to lost starters who were drafted or earned All-America or all-conference honors, if we wanted to.
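If you wanted to experiment with that richer data yourself, a blended “returning production” score might look something like the sketch below. Every weight and field here is hypothetical; it is meant to show the shape of the idea, not anyone’s actual formula.

```python
# Purely hypothetical weights for a richer "returning production" score.
# Nothing here is an actual Athlon or Football Outsiders formula.
def returning_production(returning_qb,
                         pct_rush_yards_back,
                         pct_rec_yards_back,
                         pct_ol_starts_back):
    """Blend a few returning-production rates into a single 0-to-1 score."""
    return (0.35 * (1.0 if returning_qb else 0.0) +
            0.20 * pct_rush_yards_back +
            0.20 * pct_rec_yards_back +
            0.25 * pct_ol_starts_back)

# A team returning its quarterback, most of its receiving yards, and half of
# its offensive line starts:
print(returning_production(True, 0.40, 0.85, 0.50))   # 0.725, illustrative only
```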

For now, though, we’ll stick to the basics. While the standard returning starter data is flawed, it’s still pretty useful:

Correlation between returning offensive starters and your advanced offensive ratings: 0.290. Correlation between returning offensive starters and your percentage of points scored: 0.254.

Correlation between returning defensive starters and your advanced defensive ratings: 0.271. Correlation between returning defensive starters and your percentage of points scored: 0.215.

These aren’t strong correlations, but they’re solid. And looking at year-to-year averages, you can see a pretty clear trend. If we convert a team’s FO efficiency ratings (offensive and defensive) into a per-game point total, we can start to see the impact starter experience has on average.

There is some blurriness on the edges — teams with four returning starters regressing more than teams with one to three, teams with 10-11 returning starters improving only a marginal amount — but that’s a sample size issue. There’s a potential range of six to 10 points per game between those returning almost no starters and those returning almost everybody. And if you return six or seven starters, you’re basically breaking even.

The lines are similar on the defensive side of the ball.
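Using the same kind of assumed team-season table as before (column names are mine), the bucketed averages behind that trend can be sketched like so; the identical approach applies to the defensive side:

```python
import pandas as pd

# Assumes a team-season table with returning offensive starters and an
# offensive rating already converted to points per game. Names are illustrative.
df = pd.read_csv("offense_by_season.csv")   # team, year, ret_off_starters, off_ppg
df = df.sort_values(["team", "year"])
df["off_ppg_change"] = df.groupby("team")["off_ppg"].diff()

# Average year-to-year change in offensive output by returning-starter count.
trend = (df.dropna(subset=["off_ppg_change"])
           .groupby("ret_off_starters")["off_ppg_change"]
           .mean())
print(trend)   # negative at the low end, positive at the high end, near zero at 6-7
```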

There will always be plenty of exceptions. Just last year, Wisconsin returned three defensive starters and still fielded a high-caliber unit, while TCU returned three offensive starters and improved dramatically. Those exceptions are why the correlations exist but aren’t incredibly strong. But these starter figures still tell us quite a bit about where to set the bar. If the above averages held true, the chart to the right shows what kind of shifts we might see from last year’s top-ranked teams.

Red alert, Mississippi State and Clemson fans. You better hope Dak Prescott and Deshaun Watson are even better (and healthier) than they were last year.

Recruiting

Recruiting rankings are worthless! Recruiting rankings are everything! Arguing about the potential and usefulness of the work Rivals, 247Sports, ESPN, Scout and others do has become a permanent part of the college football calendar each January and early February. And to be sure, these assessments are tricky.

If you’re a brand-new recruiting service, and you’re looking to use every piece of information available to you to craft the strongest possible prospect ratings, what’s one piece of information you’d be incredibly smart to use? Offer lists. If Alabama (or Ohio State, or USC, or Florida State, or any other national power) offers a player, there are strong odds that this player is pretty good. To say the least, the Tide and others like them have track records.

One problem with this: If you use offer lists to make your ratings more accurate, you’re also introducing a bit of circularity. If an Alabama offer gets a player ranked more highly, then Alabama is always assured of a high team ranking. Successful teams will then always end up with good recruiting rankings, both because they’re landing the best prospects (and they are) and because prospects they land get a boost, or as angry fans have long called it, a Bama Bump.

Recruiting services certainly don’t admit to changing or rethinking ratings based on offers, but if such circularity does exist, it doesn’t change one simple fact: Recruiting rankings are awfully predictive.

Correlation between your five-year recruiting averages and your F/+ rating: 0.666. Correlation between your five-year recruiting averages and your percentage of points scored: 0.428.

If you want to be suspicious about recruiting rankings, know this: Correlations with two-year rankings are even higher.

Correlation between your two-year recruiting averages and your F/+ rating: 0.680. Correlation between your two-year recruiting averages and your percentage of points scored: 0.454.
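Those rolling averages are easy to build. Here is a sketch under the same assumptions as the earlier snippets; the file and column names are mine, and the recruiting score is whatever numeric class rating you prefer:

```python
import pandas as pd

# Assumes one row per team-year with a numeric recruiting score (your service
# of choice) and that season's F/+ rating. Names are illustrative.
df = pd.read_csv("recruiting_by_year.csv")   # team, year, recruit_score, fplus
df = df.sort_values(["team", "year"])

grp = df.groupby("team")["recruit_score"]
df["recruit_2yr"] = grp.transform(lambda s: s.rolling(2, min_periods=2).mean())
df["recruit_5yr"] = grp.transform(lambda s: s.rolling(5, min_periods=5).mean())

print(df["recruit_5yr"].corr(df["fplus"]))   # reported above as 0.666
print(df["recruit_2yr"].corr(df["fplus"]))   # reported above as 0.680
```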

Since most of your two-deep is going to consist of players who were signed more than two years ago, this suggests that the relationship between recruiting rankings and performance is tied mostly to recent team success, not to the actual quality of the players on the field.

Either success leads to better recruiting, which leads to more success, or success leads to more benefit of the doubt in recruiting, which leads to better ratings.

Regardless, here’s a look at Athlon’s preseason top 20 teams and their recent recruiting averages:

Luck and Randomness

The game of football, played with a pointy ball, brings to the table quite a bit of randomness. There’s no way around it. But we don’t necessarily take that into account when we set expectations for a given team, and we probably should.

In 2013, Oklahoma and Houston were insanely lucky teams. The Sooners recovered all nine fumbles that occurred in late-season wins against Kansas State, Oklahoma State and Alabama, and turnovers played heavy roles, especially in each of the last two wins. Recover only five of those nine, and the Sooners probably don’t beat either Oklahoma State or (if they still made the Sugar Bowl) Alabama. And if they don’t beat those teams, they don’t head into 2014 with what turned out to be unreasonably high expectations.

Houston, meanwhile, nearly broke the turnovers luck scale in 2013. The Cougars seemingly overachieved, improving from 5–7 to 8–5 and threatening for a while to steal the AAC title from UCF and Louisville despite playing a freshman quarterback. Their turnover margin was a nearly incomprehensible plus-25, but according to national averages for fumble recovery rates (which always trend toward 50 percent over time) and the ratio of interceptions to passes broken up (on average, a team intercepts one pass for every three to four breakups), it should have been closer to about plus-4. They recovered more than 60 percent of all fumbles, they intercepted an unsustainably high number of passes, and their opponents dropped an unsustainably high number of potential interceptions.
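A back-of-the-envelope version of that expected turnover margin looks something like the sketch below. It leans only on the two national averages just mentioned (a roughly 50 percent fumble recovery rate and about one interception per 3.5 passes broken up); the exact Football Outsiders calculation may differ.

```python
# Rough expected turnover margin from the national averages cited above:
# ~50 percent of all fumbles are recovered by the defense, and a defense
# intercepts roughly one pass for every 3.5 it breaks up. A sketch, not the
# exact Football Outsiders formula.
INT_PER_BREAKUP = 1 / 3.5

def expected_turnover_margin(opp_fumbles, own_fumbles,
                             def_pass_breakups, opp_pass_breakups):
    expected_takeaways = 0.5 * opp_fumbles + INT_PER_BREAKUP * def_pass_breakups
    expected_giveaways = 0.5 * own_fumbles + INT_PER_BREAKUP * opp_pass_breakups
    return expected_takeaways - expected_giveaways

def turnover_luck(actual_margin, expected_margin):
    """Positive means luckier than expected; negative means unluckier."""
    return actual_margin - expected_margin

# Houston's 2013 numbers from above: a plus-25 actual margin against an
# expected margin of roughly plus-4 works out to about plus-21 of luck.
print(turnover_luck(25, 4))
```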

On paper, Houston improved in 2014, but the Cougars’ luck regressed drastically toward the mean (expected turnover margin: plus-6; actual: plus-8), they lost badly to UTSA, finished 7–5 (before a miraculous bowl win), and saw their head coach fired.

Is there a correlation between your turnovers luck (i.e., the difference between your actual and expected turnover margins) and your year-to-year improvement or regression? A bit.

Correlation between your turnovers luck and next year’s F/+ rating: 0.130. Correlation between your turnovers luck and next year’s percentage of points scored: 0.186.

Since a rating system like F/+ is normalized to ignore a lot of luck factors, you would assume it would be less affected by turnovers luck than actual points scored, and the correlations above bear that out. Turnovers strike randomly, and their effects are pretty selective. Still, this is a factor with correlations only slightly weaker than those for returning starters. That makes it worth noting.

And as you would expect, the correlations get stronger for those who were particularly lucky or unlucky. Much stronger.

Correlation between your turnovers luck and next year’s percentage of points scored (for only teams in the top and bottom 10 percent of turnovers luck): 0.357.
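Restricting the correlation to the extremes is a one-line filter; here is a sketch, again with made-up file and column names:

```python
import pandas as pd

# Assumes a team-season table with turnover luck and the following season's
# percentage of points scored. Names are illustrative.
df = pd.read_csv("turnover_luck.csv")   # team, year, to_luck, next_points_pct

lo, hi = df["to_luck"].quantile([0.10, 0.90])
extremes = df[(df["to_luck"] <= lo) | (df["to_luck"] >= hi)]
print(extremes["to_luck"].corr(extremes["next_points_pct"]))   # reported above as 0.357
```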

So if you were particularly lucky or unlucky last season, that luck is probably going to change this fall, and it could make a pretty significant difference in the amount of points you score and allow. Who needs to be on the lookout in this regard?

Luck is part of the game of football, so it probably isn’t a surprise that two of last year’s top six teams in the pre-bowl College Football Playoff rankings (No. 2 Oregon and No. 6 TCU) are on this list. No. 5 Baylor (plus-6.4) barely missed inclusion, too. Still, it might be difficult for those teams (not to mention other poll darlings like Michigan State) to repeat last year’s success. The inclusion of Georgia and Ole Miss is also noteworthy.

Meanwhile, some pretty interesting names appear on the unlucky list, too.

After the luck of late-2013, Oklahoma’s karma was pretty awful in 2014, and this doesn’t even include “good kickers missing untimely kicks” luck like what hurt the Sooners against Oklahoma State and, particularly, Kansas State.

Some other teams that fared far worse than expectations show up here: Oklahoma State, Texas Tech, Miami. Colorado and Washington State were also expected to do better than they did, and luck played a role in that disappointment.

But the two most interesting names on this list are two teams that enjoyed plenty of success: Alabama and Marshall. These teams went a combined 25–3 in 2014, with 16 wins coming by a margin of at least 19 points. Yet they were probably even more dominant than the scores would attest, and in their three losses, these teams had a combined minus-4 turnover margin.

How good will your team be this year? Ask yourself these questions in this order: How good were we last year? And how good have we been for the last five years? How are our recruiting rankings — getting better or worse? Are we returning more or fewer than about six or seven starters on offense and defense? And how lucky were we last year?

Not everybody actually wants to set realistic expectations for their team, but asking those five questions is the best roadmap for doing just that.