There are seven computer rankings that make up a fifth of the
Bowl Championship Series’ rating system. We have broken each one
down so you can see what makes each ranking distinctive and why
some systems can produce large discrepancies between one another
in how they rank the same teams.

1. The Anderson & Hester Rankings: These rankings are
based on four concepts. The first is that no team is given
extra points for running up the score, meaning a one-point win
counts the same as a 60-point trouncing. Second, there is no
prejudging of the teams, and rankings do not appear until week
five. The third and fourth concepts go hand in hand: each team is
judged by its opponents’ record and by its opponents’ opponents’
record (both of which are also based on their own conference’s
record, so that a true strength of schedule may be determined).
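
To make those last two concepts concrete, here is a minimal sketch of an
opponents’ and opponents’-opponents’ winning-percentage calculation. The
2-to-1 weighting, the omission of the conference adjustment and the example
records are our own assumptions for illustration, not Anderson & Hester’s
published formula.

```python
# Illustrative strength-of-schedule sketch based on opponents' records and
# opponents'-opponents' records. Weighting is an assumption, not the
# Anderson & Hester formula.

def win_pct(record):
    """record is a (wins, losses) tuple."""
    wins, losses = record
    games = wins + losses
    return wins / games if games else 0.0

def strength_of_schedule(team, schedules, records):
    """schedules maps a team to the opponents it has played;
    records maps a team to its (wins, losses) record."""
    opponents = schedules[team]
    # Average winning percentage of the team's opponents.
    opp_pct = sum(win_pct(records[o]) for o in opponents) / len(opponents)
    # Average winning percentage of the opponents' opponents.
    opp_opp = [win_pct(records[o2]) for o in opponents for o2 in schedules[o]]
    opp_opp_pct = sum(opp_opp) / len(opp_opp)
    # Weight opponents' record twice as heavily as opponents' opponents'
    # record (an assumed convention for this sketch).
    return (2 * opp_pct + opp_opp_pct) / 3

# Hypothetical three-team example:
records = {"A": (5, 1), "B": (4, 2), "C": (2, 4)}
schedules = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(round(strength_of_schedule("A", schedules, records), 3))
```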

2. The Richard Billingsley Report: This rating is
built around win-loss records, opponent strength – as shown
by record, rating and rank – and the most recent performances.
Billingsley claims that his system also takes a cue from the U.S.
Constitution because of its “checks and balances.” He takes
each team’s ranking and determines his own point spread by
subtracting the two rankings, dividing by two and giving the
home team an extra three points. He does this to gauge
how big an upset it would be if the underdog won, and to
determine how much a team should move up or down when it wins or
loses the game. That way a No. 1 team beating the No. 117 team isn’t
jumped by the No. 55 team defeating the No. 56 team. A new rating
is calculated each week; only the rank carries over.
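
Taking the description above literally, the spread rule works out to
something like the function below. It is only a sketch of that description,
not Billingsley’s actual published formula.

```python
# Literal reading of the spread rule described above: half the gap between
# the two teams' rankings, plus three points for the home team.

def billingsley_spread(home_rank, away_rank):
    """Positive result means the home team is favored by that many points."""
    # Lower rank number = better team, so a positive gap favors the home team.
    spread = (away_rank - home_rank) / 2
    # Home-field advantage is worth three points in the description above.
    return spread + 3

# No. 10 hosting No. 24: home team favored by (24 - 10) / 2 + 3 = 10 points.
print(billingsley_spread(10, 24))
```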

3. The Colley Matrix: Princeton PhD Wes Colley goes to
town on his ranking. We don’t know the best way to explain it, so
we suggest going to www.colleyrankings.com and reading his
report.
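
For readers who want a flavor of it without the full write-up: as Colley
describes it, the method boils down to solving one linear system in which
only wins, losses and who-played-whom appear, never the margin of victory.
The sketch below follows that standard formulation from his paper.

```python
# Compact sketch of the Colley Matrix method: ratings come from solving
# C r = b, where
#   C[i][i] = 2 + games played by team i
#   C[i][j] = -(number of games between teams i and j)
#   b[i]    = 1 + (wins_i - losses_i) / 2
# Margin of victory never enters the system.

import numpy as np

def colley_ratings(teams, games):
    """games is a list of (winner, loser) pairs."""
    idx = {t: k for k, t in enumerate(teams)}
    n = len(teams)
    C = 2.0 * np.eye(n)
    b = np.ones(n)
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w, w] += 1      # each team played one more game
        C[l, l] += 1
        C[w, l] -= 1      # head-to-head coupling
        C[l, w] -= 1
        b[w] += 0.5       # winner's right-hand side goes up
        b[l] -= 0.5       # loser's goes down
    return dict(zip(teams, np.linalg.solve(C, b)))

# Tiny example: A beats B, B beats C, A beats C.
print(colley_ratings(["A", "B", "C"], [("A", "B"), ("B", "C"), ("A", "C")]))
```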

4. Kenneth Massey: What makes Massey unique is his Game
Outcome Function and the fact that he uses just the score, the venue
and the date as his inputs. He divides each score into offensive and
defensive points, with home-field advantage adjusted for as well, so
that strength of schedule rewards teams that play on the road. His
Game Outcome Function estimates the likelihood that the outcome of
the game would be the same if it were replayed (i.e. a team that wins
by 50 is more likely to win a rematch than a team that wins by
three). It also takes into account how many points each team scored:
given two three-point games – 27-24 and 10-7 – the team that won 10-7
is considered more likely to win a rematch than the team that won
27-24, because a three-point deficit is easier to overcome in a
high-scoring game than in a low-scoring one. The G.O.F. is then
combined with the fact that every team eventually becomes connected
to every other team through common opponents. That makes it possible
to estimate which team would beat which, if given the chance, based
on the probabilities drawn from past results.
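
The function below is only an illustrative stand-in for the behavior just
described, not Massey’s actual Game Outcome Function. It reproduces the two
properties in the text: a bigger margin means a more decisive result, and the
same margin means more in a low-scoring game than in a shootout.

```python
# Illustrative "rematch probability" as a function of a final score.
# The logistic shape and the square-root scaling are assumptions made for
# this sketch, not Massey's published formula.

import math

def rematch_probability(winner_pts, loser_pts):
    margin = winner_pts - loser_pts
    total = winner_pts + loser_pts
    # Scale the margin by the square root of the total score: a 3-point edge
    # in a 10-7 game says more than a 3-point edge in a 27-24 shootout.
    z = margin / math.sqrt(total) if total else 0.0
    return 1.0 / (1.0 + math.exp(-z))

print(round(rematch_probability(10, 7), 3))    # low-scoring 3-point win
print(round(rematch_probability(27, 24), 3))   # high-scoring 3-point win
print(round(rematch_probability(57, 7), 3))    # 50-point blowout
```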

5. The New York Times: To put it in the words of The New
York Times, the ranking “is based on an analysis of each team’s
record with emphasis on who won and against what quality of
opposition. The quality of an opponent is determined by the
opponent’s record against other teams.” The Times doesn’t make
anything else of its methodology public.

6. Jeff Sagarin’s NCAA Football Rankings: Of the
individual computer rankings, Sagarin’s has been around the longest.
His setup is very much like Massey’s, aside from the fact that
Sagarin developed his formula long before Massey did. Generally,
until all teams are connected (meaning team A and team B – which
have never played – are linked through team C, which played A, and
team D, which played B, once teams C and D play each other), there
is no actual No. 1 team. Everything is weighted mathematically by
preseason rankings until then. Once all teams are connected, the
ELO-CHESS rating is used, so that only winning and losing matter and
not the score margin, factored together with strength of schedule.
The BCS then uses just the ELO-CHESS and not Sagarin’s personal
rating (which is why some of you were probably wondering why
Northern Illinois was No. 27 in Sagarin’s own rankings and No. 8 in
the BCS).
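
For a rough idea of what a win-only, Elo-style rating looks like, here is a
generic chess-Elo update. The K-factor and 400-point scale are standard chess
conventions assumed for this sketch, not Sagarin’s ELO-CHESS parameters.

```python
# Generic Elo-style update: only the result moves the ratings, never the
# margin, and beating a highly rated opponent moves you more than beating a
# weak one.

def expected_score(rating_a, rating_b):
    """Win probability implied by the rating gap (standard Elo logistic)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_winner, rating_loser, k=32):
    """Return new (winner, loser) ratings after a win; margin is irrelevant."""
    exp_win = expected_score(rating_winner, rating_loser)
    delta = k * (1.0 - exp_win)   # bigger move the more surprising the result
    return rating_winner + delta, rating_loser - delta

# An upset (1400 beats 1600) moves the ratings far more than the expected result.
print(update(1400, 1600))   # winner gains roughly 24 points
print(update(1600, 1400))   # winner gains roughly 8 points
```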

7. Peter Wolfe: Wolfe is concerned entirely with the
likelihood that one team would beat another. Each team is assigned a
rating, and the probability that it beats a given opponent is its
rating divided by the sum of its rating and its opponent’s rating.
Those probabilities are multiplied together across all of a team’s
games to determine its overall rating. All wins are viewed as
equals, which is why Texas Christian (No. 6) and Northern Illinois
(No. 7) are not punished for their weaker schedules.
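
A small sketch of that likelihood idea: under a Bradley-Terry-style model, a
team’s chance of beating an opponent is its rating over the sum of the two
ratings, and the ratings are chosen so that the product of those chances over
all the games actually played is as large as possible. The fitting iteration
below is a textbook approach, assumed for illustration rather than taken from
Wolfe’s own implementation.

```python
# Bradley-Terry-style rating fit: P(A beats B) = r_A / (r_A + r_B),
# with ratings chosen to maximize the likelihood of the observed results.

from collections import defaultdict

def fit_ratings(games, iterations=200):
    """games is a list of (winner, loser) pairs; returns team -> rating."""
    teams = {t for g in games for t in g}
    ratings = {t: 1.0 for t in teams}
    for _ in range(iterations):
        wins = defaultdict(float)
        denom = defaultdict(float)
        for winner, loser in games:
            wins[winner] += 1.0
            pair = ratings[winner] + ratings[loser]
            denom[winner] += 1.0 / pair
            denom[loser] += 1.0 / pair
        # Standard fixed-point update: wins divided by expected exposure.
        ratings = {t: wins[t] / denom[t] if denom[t] else ratings[t]
                   for t in teams}
        # Keep the ratings on a fixed overall scale.
        total = sum(ratings.values())
        ratings = {t: r * len(teams) / total for t, r in ratings.items()}
    return ratings

# Hypothetical results: A beats B twice, B beats C, C beats A.
games = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "B")]
ratings = fit_ratings(games)
# Probability that A beats C under the fitted ratings:
print(ratings["A"] / (ratings["A"] + ratings["C"]))
```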
