The Massey ratings have been around since December of 1995, according to his site. The explanation he provides there actually describes his rankings that include scoring margin, not the version used in the BCS (which is not allowed to use scoring margin).
However, perhaps we can derive some understanding of Massey’s BCS ratings if they are calculated similarly to his other ratings.
In essence, each game “connects” two teams via an equation. As more games are played, eventually each team is connected to every other team through some chain of games. When this happens, the system of equations is coupled and a computer is necessary to solve them simultaneously.
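The "chain of games" idea above is just graph connectivity: treat teams as nodes and games as edges, and the system of equations is fully coupled once the graph is connected. As a rough sketch (the function name and game format here are my own, not Massey's), a breadth-first search can check when that point has been reached:

```python
from collections import deque

def all_connected(teams, games):
    """Return True once every team is reachable from every other
    through some chain of games.  games: list of (team_a, team_b)."""
    adj = {t: set() for t in teams}
    for a, b in games:
        adj[a].add(b)
        adj[b].add(a)
    # Breadth-first search from an arbitrary starting team.
    seen = {teams[0]}
    queue = deque([teams[0]])
    while queue:
        t = queue.popleft()
        for u in adj[t] - seen:
            seen.add(u)
            queue.append(u)
    return len(seen) == len(teams)
```

Early in a season this returns False and the ratings cannot implicitly compare every pair of teams; after a few weeks of inter-conference play it returns True.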
The ratings are totally interdependent, so that a team’s rating is affected by games in which it didn’t even play. The solution therefore effectively depends on an infinite chain of opponents, opponents’ opponents, opponents’ opponents’ opponents, etc. The final ratings represent a state of equilibrium in which each team’s rating is exactly balanced by its good and bad performances.
Overall, Massey treats all games as connected in one big set of equations. Solving for the team ratings that best satisfy these equations gives his final ratings. As he mentions, this implicitly accounts for schedule strength as long as all teams are connected (which they will be after a few weeks of the season).
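To make the "one big set of equations" concrete, here is a minimal sketch of the classic least-squares Massey method for the scoring-margin version he describes: each game contributes an equation "winner's rating minus loser's rating equals the margin," and the normal equations are solved with one row replaced by the constraint that ratings sum to zero (otherwise the system is singular, since ratings are only defined up to a constant). This is the publicly described margin-based formulation, not necessarily what his BCS variant does.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented copy
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def massey_ratings(teams, games):
    """Least-squares Massey ratings.  games: list of (winner, loser, margin)."""
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    M = [[0.0] * n for _ in range(n)]
    p = [0.0] * n
    for w, l, margin in games:
        i, j = idx[w], idx[l]
        M[i][i] += 1   # each team's diagonal counts its games played
        M[j][j] += 1
        M[i][j] -= 1   # off-diagonal counts games between the pair
        M[j][i] -= 1
        p[i] += margin
        p[j] -= margin
    # Replace the last equation with "ratings sum to zero" so the
    # otherwise singular system has a unique solution.
    M[n - 1] = [1.0] * n
    p[n - 1] = 0.0
    return dict(zip(teams, solve(M, p)))
```

For example, with A beating B by 3, B beating C by 5, and A beating C by 8, the system is exactly consistent and yields ratings of 11/3, 2/3, and -13/3 for A, B, and C. Note that C's rating is influenced by the A-B game, which C did not play in; this is the interdependence described above.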
In his scoring margin ratings, Massey uses a Game Outcome Function (GOF) that essentially gives the probability of a team winning a rematch based on the score. Games decided by a close score have GOFs near 50%, while increasingly comfortable wins push the GOF closer and closer to 100%. However, since scoring margin is not allowed as a parameter in the BCS, it is unclear how Massey handles this in his BCS ratings.
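Massey does not publish the exact form of his GOF, but the behavior described above (50% for close games, approaching 100% for blowouts) is what a logistic curve on the margin produces. The function below is a hypothetical stand-in, and the scale constant of 7 points is my own illustrative choice, not a value from Massey:

```python
import math

def game_outcome(winner_pts, loser_pts, scale=7.0):
    """Hypothetical GOF: probability the winner takes a rematch,
    as a logistic function of the scoring margin.  The scale
    (7 points here) is an assumed value for illustration."""
    margin = winner_pts - loser_pts
    return 1.0 / (1.0 + math.exp(-margin / scale))
```

A one-point win (say 21-20) gives a GOF just above 50%, while a four-touchdown win (35-7) gives a GOF above 95%, matching the qualitative description of close games versus comfortable ones.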
Massey does not give many details, but overall the theory behind his BCS ratings (assuming they are similar to his other ratings) seems solid.