I agree with all of that. I guess my point was that perception and reputation are playing a bigger part than ever before because of smaller sample sizes.
And I agree with that.
What I’m saying is that in years past the ratio of results to perception was weighted more heavily toward results. This year, especially in Ohio St’s case, perception is playing more of a part.
I think in previous years the committee has had the luxury of grading teams largely on their performance. This year it doesn’t. Ohio St’s results have been good, but you could give those same results to many other teams in the nation and they would not garner a #4 ranking. That ranking is as much about what the committee expected Ohio St to be as it is about what they have been.
I think that is a product of necessity.
If, after one game each year, you ranked teams solely on that one game, all you would have is an ordered list of margins of victory. In almost all cases, that list would be ridiculously wrong. For example, if Ohio State started a hypothetical year with a 1-point loss to Bama while USC started with a 1-point loss to SJSU, those are not equal. In all probability, tOSU's hypothetical 1-point loss to Bama suggests that tOSU is VERY good, while USC's hypothetical 1-point loss to SJSU suggests that USC is terrible.
To get around this issue, most voters and computer models use their preseason expectations to evaluate results, and in this hypothetical they would mostly come to the conclusion that:
- Ohio State had a close loss to possibly the best team in the country; they are probably at least a top-10 team despite the loss.
- USC had a close loss to a bottom-feeder; they are probably not worthy of anything close to the top 25.
However, as this hypothetical season progresses, it *COULD* turn out that Bama sucks and tOSU's 1-point loss to them is a REALLY bad reflection on the Buckeyes. It also *COULD* turn out that SJSU is REALLY good and USC's 1-point loss to them reflects pretty well on the Trojans.
Most computer models have a built-in algorithm to deal with this issue. A typical example would be the schedule below (a rough code sketch of this kind of blend follows the list):
- 100% preseason expectation for the preseason rankings (obviously).
- 90% preseason expectation and 10% season results for the week #1 rankings.
- 80% preseason expectation and 20% season results for the week #2 rankings.
- 70% preseason expectation and 30% season results for the week #3 rankings.
- 60% preseason expectation and 40% season results for the week #4 rankings.
- 50% preseason expectation and 50% season results for the week #5 rankings.
- 40% preseason expectation and 60% season results for the week #6 rankings.
- 30% preseason expectation and 70% season results for the week #7 rankings.
- 20% preseason expectation and 80% season results for the week #8 rankings.
- 10% preseason expectation and 90% season results for the week #9 rankings.
- 100% season results for the week #10 rankings and beyond.
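To put that schedule in code, here's a minimal Python sketch of that kind of decaying blend. The function name, the 10%-per-game decay rate, and the assumption that both ratings live on the same numeric scale are all illustrative, not pulled from any actual model:

```python
def blended_rating(preseason_rating: float, results_rating: float,
                   games_played: int) -> float:
    """Blend a preseason expectation with a season-results-only rating.

    Illustrative schedule: the preseason weight starts at 100% and drops
    10 percentage points per game played, reaching 0% once a team has
    played 10 or more games (matching the list above, where one ranked
    week corresponds to one game played).
    """
    preseason_weight = max(0.0, 1.0 - 0.10 * games_played)
    return (preseason_weight * preseason_rating
            + (1.0 - preseason_weight) * results_rating)
```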
In a normal season this works out fine. Fans of a team like Indiana, which substantially exceeded expectations this year, would have a complaint for a while, but eventually the rankings are based solely on season results, so it all works out. In a normal season most teams play 12 games over 13 weeks before the conference championship games, so by this point in the season the rankings are already 100% based on season results and have been for a few weeks.
The problem this year is that teams like tOSU and USC (with only 5 games played so far) are still at a 50/50 balance, and even after the championship games (assuming they are played) 40% of their ranking would still be based on preseason expectation.
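Plugging this year's situation into the sketch above, with completely made-up rating numbers on a 0-to-1 scale, shows how much preseason expectation still lingers:

```python
# Hypothetical: preseason rating 0.90, results-only rating 0.70.
# Five games played, as with tOSU/USC right now: 50/50 blend.
blended_rating(0.90, 0.70, games_played=5)  # 0.5*0.90 + 0.5*0.70 = 0.80

# After a championship game (6 games): still 40% preseason expectation.
blended_rating(0.90, 0.70, games_played=6)  # 0.4*0.90 + 0.6*0.70 = 0.78
```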