Evaluating Jones' SoccerRatings and the RPI
Evaluating Jones' SoccerRatings and the RPI
I've been pretty quiet (for me) on PilotNation, but I haven't disappeared. Instead, I've been very busy on a project evaluating both the RPI and Albyn Jones' SoccerRatings. Specifically, I've been figuring out how well the rating systems correlate with the actual results of games during the 2008 regular season (including conference tournaments). Although I've been posting lots of details on BigSoccer, you are the first ones to get the most interesting (and brilliantly derived) information in this post. (FANatic, this involves lots of number crunching and should be right down your line!)
Although people tend to think that one should measure the worth of the RPI and systems such as SoccerRatings by how they predict the results of upcoming games, and in particular the NCAA Tournament games, that really isn't right, at least from an NCAA Tournament perspective. From a Tournament perspective, the important questions are (1) what teams, based on their performance during the season, have earned the right to be one of the 34 at large teams participating in the Tournament; (2) what teams, based on their performance during the season, have earned seeds; and (3) of the teams that have earned seeds, where should they be seeded, based on their performance during the season?
If I am right that those are the important questions, then the only criterion for evaluating a rating system to be used for NCAA Tournament at large selections and seeding should be how it correlates with the results of games already played (rather than how it correlates with games yet to be played). I hope that's clear, because it's right. (Being an attorney, I typically use all sorts of qualifiers, so it's out of character for me to make such an absolute statement. But, it's true.)
With that intro, here's what I've done. I've created a program (Computer geeks, is that correct, or have I merely developed formulas within the Excel program?) to evaluate how well different rating systems correlate with the actual results of the 2008 regular season. Although the program can produce more detail, it has two fundamental products (so far). The first product tells us this:
(1) During the season, how often did teams with higher ratings in the particular rating system win their games? In other words, how well do the ratings correlate with actual game results?
The second product tells us this:
(2) During the season, in inter-regional games, how often did teams with higher ratings in the particular rating system win their games? In other words, how well do the ratings correlate with actual inter-regional game results? This question is particularly important to us, since it gets at the extent, if any, to which the rating systems unfairly under-rate the West Region's teams.
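For the computer-minded: here's a rough Python sketch of what the first product computes. This is an illustration of the logic only, not my actual Excel workbook, and the teams, scores, and ratings in the example are made up.

```python
# Sketch of the "first product": how often does the team with the higher
# rating win, tie, or lose? Games and ratings below are hypothetical.

def correlation_with_results(games, ratings):
    """games: list of (team_a, team_b, goals_a, goals_b).
    ratings: dict of team -> rating (higher = better)."""
    wins = ties = losses = 0
    for team_a, team_b, goals_a, goals_b in games:
        # Orient each game so we track the higher-rated team's result.
        if ratings[team_a] >= ratings[team_b]:
            fav_goals, dog_goals = goals_a, goals_b
        else:
            fav_goals, dog_goals = goals_b, goals_a
        if fav_goals > dog_goals:
            wins += 1
        elif fav_goals == dog_goals:
            ties += 1
        else:
            losses += 1
    total = wins + ties + losses
    return wins / total, ties / total, losses / total

games = [("Portland", "Gonzaga", 3, 0), ("UCLA", "Portland", 1, 1),
         ("Gonzaga", "UCLA", 2, 1)]
ratings = {"Portland": 0.65, "UCLA": 0.62, "Gonzaga": 0.50}
print(correlation_with_results(games, ratings))  # -> (win%, tie%, loss%)
```

The second product runs the same tally but restricted to inter-regional games, broken out by region.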
Here are the results. For fans of Albyn Jones, there's bad news and good news:
For the first product, correlation with regular season results as a whole, I evaluated the following: (a) the unadjusted RPI (i.e., not adjusted for bonus or penalty points based on good wins/ties and bad losses/ties); (b) the adjusted RPI; (c) the "non-conference" RPI, which is a subset of the RPI the NCAA used this year to deny seeds to RPI-ranked #9 Penn State and #15 Washington; and (d) Albyn Jones' SoccerRatings.
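For anyone unfamiliar with the RPI itself: as I understand the NCAA's standard formula, the unadjusted RPI weights a team's own winning percentage at 25%, its opponents' winning percentage at 50%, and its opponents' opponents' winning percentage at 25%, with a tie counted as half a win. A simplified sketch (it glosses over details such as excluding games against the rated team when computing opponents' winning percentage):

```python
# Simplified unadjusted RPI: 0.25*WP + 0.50*OWP + 0.25*OOWP.
# Records are (wins, ties, losses); a tie counts as half a win.

def wp(wins, ties, losses):
    games = wins + ties + losses
    return (wins + 0.5 * ties) / games if games else 0.0

def unadjusted_rpi(own_record, opp_records, opp_opp_records):
    owp = sum(wp(*r) for r in opp_records) / len(opp_records)
    oowp = sum(wp(*r) for r in opp_opp_records) / len(opp_opp_records)
    return 0.25 * wp(*own_record) + 0.50 * owp + 0.25 * oowp

# Hypothetical records:
print(unadjusted_rpi((12, 3, 4), [(10, 2, 6), (14, 1, 3)], [(9, 4, 5)]))
```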
I haven't even attempted to master the PilotNation chart system, so here are the first product's results in "longhand":
Unadjusted RPI:
The team with the higher unadjusted RPI won games 74.0% of the time.
It tied games 10.7% of the time.
It lost games 15.3% of the time.
Adjusted RPI:
The team with the higher adjusted RPI won games 73.9% of the time.
It tied games 10.7% of the time.
It lost games 15.4% of the time.
(In other words, from an overall perspective, the bonus/penalty point adjustment adds nothing to the reliability of the RPI.)
Non-Conference RPI:
The team with the higher NCRPI won games 69.4% of the time.
It tied games 10.7% of the time.
It lost games 19.9% of the time.
(In other words, from an overall perspective, the non-conference RPI is less reliable than the unadjusted or adjusted RPI.)
Jones' SoccerRatings:
The team with the higher SoccerRating won games 73.3% of the time.
It tied games 10.7% of the time.
It lost games 15.9% of the time.
In other words, this is the bad news for SoccerRatings fans. From an overall perspective, Jones' SoccerRatings are about the same reliability as the unadjusted and adjusted RPI.
BUT, it's more complicated than that! The second question is how the different systems treat the six NCAA regions, which are the Central, Great Lakes, Mid-Atlantic, Northeast, Southeast, and West (us) regions. In order to do this analysis, my program looked only at inter-regional games, in other words games of teams from one region against teams from the other regions. In addition, it looked at (1) inter-regional games that teams from a region won and should have won according to their ratings, and (2) inter-regional games that teams from the region won but should have lost according to their ratings. The program then combined those two sets of numbers to evaluate how each rating system treated each region's teams.
To establish a benchmark, let's use the Jones system. As indicated above, teams won 73.3% of the games they "should have" won according to their ratings. (This completely disregards home field advantage, since the RPI -- except for the bonus/penalty adjustments -- disregards home field advantage.) That means, conversely, that teams won 26.7% of the games they "should have" lost. So, if you're looking at a region that performed exactly at the "average," if you add together the % of games it won that it should have won and the % of games it won that it should have lost, you get 100%. So, that's the norm. If you add the same actual numbers for a region together and get more than 100%, then the region's teams are "out-performing" their rankings, which means that the system is under-ranking them. If you get less than 100%, then the region's teams are "under-performing" their rankings, which means that the system is over-ranking them.
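Here's a sketch of that region calculation in Python (again an illustration, not my spreadsheet). One reading I had to settle on: for the 100% norm to hold, a tie has to be credited as an "upset" for the lower-rated team, since 73.3% + 26.7% = 100% only if everything that isn't a favorite's win counts on the upset side.

```python
# Sketch of the region metric over inter-regional games only. For each
# region: A = win% when its team was the higher-rated side, B = "win"%
# when its team was the lower-rated side (a tie is credited to the
# underdog, per the 100% norm above). Metric = A + B, in percent.

from collections import defaultdict

def region_metric(games, ratings, region_of):
    """games: (team_a, team_b, goals_a, goals_b); region_of: team -> region."""
    fav = defaultdict(lambda: [0, 0])  # region -> [wins as favorite, games]
    dog = defaultdict(lambda: [0, 0])  # region -> [upsets as underdog, games]
    for a, b, ga, gb in games:
        if region_of[a] == region_of[b]:
            continue                   # skip intra-regional games
        for team, opp, gf, gvs in ((a, b, ga, gb), (b, a, gb, ga)):
            r = region_of[team]
            if ratings[team] >= ratings[opp]:
                fav[r][1] += 1
                fav[r][0] += gf > gvs        # won a game it should have won
            else:
                dog[r][1] += 1
                dog[r][0] += gf >= gvs       # won (or tied) an expected loss
    return {r: 100.0 * (fav[r][0] / fav[r][1] + dog[r][0] / dog[r][1])
            for r in set(fav) | set(dog) if fav[r][1] and dog[r][1]}
```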
[Insert here, being a wine-bibber rather than a beer-swigger: Pour more wine into the glass and take a big sip!]
The results, in descending order:
Unadjusted RPI:
West Region: 121.8%
Great Lakes Region: 106.7%
Southeast Region: 101.6%
Northeast Region: 98.1%
Central Region: 91.7%
Mid-Atlantic Region: 91.6%
In other words, the unadjusted RPI big time under-rates the West Region; also under-rates the Great Lakes and Southeast Regions; over-rates the Northeast Region; and significantly over-rates the Central and Mid-Atlantic Regions.
[Notwithstanding consumption of wine, just remembered to save material so far so I don't accidentally erase it and have to start over.]
Adjusted RPI:
West Region: 122.1%
Great Lakes Region: 105.8%
Southeast Region: 101.1%
Northeast Region: 98.4%
Mid-Atlantic Region: 92.3%
Central Region: 91.7%
In other words, if you look at these numbers closely, the adjustments actually slightly increase the difference in how the system treats the regions: the distance between how the system treats the West and Central Regions is greater than for the unadjusted RPI. On this count, the adjusted RPI is worse than the unadjusted RPI.
Non-Conference RPI:
West Region: 122.9%
Great Lakes Region: 109.2%
Southeast Region: 106.8%
Northeast Region: 98.2%
Mid-Atlantic Region: 93.8%
Central Region: 81.8%. WOW!
In other words, the Non-Conference RPI big time increases the distance between the West and Central Regions and also increases the discrimination against the Great Lakes and Southeast Regions. That's in addition to its being less reliable than either the unadjusted or adjusted RPI.
Jones SoccerRatings:
Southeast Region: 104.3%
Great Lakes Region: 103.1%
West Region: 102.4%
Northeast Region: 102.1%
Central Region: 95.9%
Mid-Atlantic Region: 93.8%
It is crystal clear that Jones' system is much fairer at rating teams from the different regions in relation to each other than any of the RPI-based systems. A spread of 10.5 points from top to bottom, compared with spreads of 30.2 to 41.1 points for the RPI variants. HOLY COW!
THE EVIDENCE IS IN: THE NCAA IS S*****ING THE WEST REGION, WHETHER INTENTIONALLY OR UNINTENTIONALLY.
Am I right? Am I wrong? Pitch in with your thoughts.
Re: Evaluating Jones' SoccerRatings and the RPI
I was wondering when we would get another "masterpiece." Thanks for your hard work. BTW, while using Excel cell functions is not technically computer programming, which entails writing code in a computer language (C++, Java, etc.), Excel is so versatile that you can obviously obtain "program like" results. I used to use a spreadsheet for acquiring property that was pretty complex (our CFO put it together over the period of a year). The only thing you have to watch out for is that it won't debug, and you can get a "bust" in the SH if you don't watch out.
But enough -- you have been more than a wealth of information. Thank you for quantifying our instincts about how the NCAA views the West.
Auto Pilot- Starter
Re: Evaluating Jones' SoccerRatings and the RPI
UPSF - Great stuff. I read it once and decided I better do breakfast first. That was important. Also important is that I am learning how to use these little guys.
I understand the part about the proper evaluation of a rating system being its correlation to past games rather than as a predictor. So, help me get my mind around something that's probably very basic - At the point of the season that RPI is being used to set up the seedings, that is, when all regular season games have been played, why isn't the RPI (and Jones) 100% in compliance with the results of past games? All it has to do is look back at results and adjust itself, thereby giving an accurate picture of what has happened.
fan from afar- First man off the Bench
Re: Evaluating Jones' SoccerRatings and the RPI
Oh my gosh... UPSF... WOW. Thanks so much for this... your argument is crystal clear. How could anyone disagree that the NCAA is underrating the West? It's just so obvious from your research.
And how unsurprising is it that the Central Region is way over-rated? The Big 12 has been awarded undeserved seeds for years now, and those teams always end up losing early - this year was no exception. How many more Big 12 seeds need to lose in the first two rounds before the NCAA realizes its seeding system is completely off?
Now... UPSF... I hate to even ask this after all your hard work, but have you considered messing with home/away, day/night, etc. to come up with your own prediction system that is more accurate than the RPI or Albyn Jones? You seem to be so close and you've obviously got a gift for the numbers...
Stonehouse- Draft Pick
Re: Evaluating Jones' SoccerRatings and the RPI
fan from afar wrote: I understand the part about the proper evaluation of a rating system being its correlation to past games rather than as a predictor. So, help me get my mind around something that's probably very basic - At the point of the season that RPI is being used to set up the seedings, that is, when all regular season games have been played, why isn't the RPI (and Jones) 100% in compliance with the results of past games? All it has to do is look back at results and adjust itself, thereby giving an accurate picture of what has happened.
Any numeric rating system, or poll for that matter, has a basic problem. The better team is not always going to win. So, given enough games, no matter how good the system or poll, there always will be some upsets. Think of it this way: When I was in college, our tiny Division 3 school played mighty Navy, which ended up being Division 1 champion the year of our game. Now, suppose we had played them 10 times in advance of our actual game. I assure you, they would have beaten us every time. But, on that one day, they outshot us 35-2, with 8 shots off the cross bar or posts, and still lost to us 2-1. Notwithstanding that one game, however, if you're looking at all 11 games, Navy gets a higher rating. Thus there was not a 100% correlation between the rating and the actual results.
Part of the question for Division I Women's Soccer is: how well would the best possible rating system correlate with actual results? I suspect that Albyn Jones' system is set up to come almost as close as possible. In a while, I'll do a similar analysis of the Massey system, and if its correlation rate is about the same as Jones' (and the unadjusted and adjusted RPIs'), then I'm going to be fairly well satisfied that a 73-74% correlation is about as good as you can get. If that's the case, then the RPI does pretty well on that count.
The second part of the question, however, is this: Given that there's going to be an error rate (likely in the 26-27% range), are those errors distributed randomly as one would expect (taking into account that you would expect fewer errors as the rating gaps between opponents increase), or is there a pattern to the error distribution? The problem I've identified is that for all three RPI iterations, but also for Jones, there appears to be a pattern to the error distribution that is regionally based. (There may be one that's conference based, too, but I haven't analyzed that yet.) The pattern is significantly weaker with Jones, though still significant, and is very pronounced for the RPI.
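One way to put numbers on the "random or patterned" question (my suggestion, not something I've run yet) is a simple binomial test per region: treat each inter-regional game in which a region's team was the higher-rated side as a coin flip with the overall favorite success rate, and ask how surprising the region's actual win count is. The counts below are hypothetical.

```python
# Binomial sanity check for one region (counts are hypothetical): under the
# null hypothesis, the region's favorites win at the overall rate p, and any
# deviation is chance. A small p-value argues for a real regional pattern.

from scipy.stats import binomtest

p_overall = 0.733          # favorites' overall success rate (Jones, per above)
games_as_favorite = 120    # hypothetical: region's games as higher-rated side
wins_as_favorite = 103     # hypothetical: how many of those it actually won

result = binomtest(wins_as_favorite, games_as_favorite, p_overall)
print(f"observed rate {wins_as_favorite / games_as_favorite:.3f}, "
      f"p-value {result.pvalue:.4f}")
```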
I'm going to ask Jones about this, but its appearance in his system confirms what theory told me would be true. If you have teams playing games primarily in their own regions, and if the average strengths of the regions vary, then unless you can figure out a way to build in an adjustment based on regional strength differences, the system inevitably is going to overrate teams from weak regions and underrate teams from strong regions. I'll be working to see if there is a way to build in an adjustment based on regional strength that will reduce (or eliminate?) this problem, but the test will be whether the adjustment can accomplish that while not significantly reducing the overall correlation of the system. This apparently is what the NCAA has done in developing the non-conference RPI, I assume because of concerns about discrimination among conferences -- the problem being that the non-conference RPI has a significantly lesser correlation with overall results than the other RPI iterations (and Jones).
[Note: I edited this because I had incorrectly transposed "overrate" and "underrate" in the previous paragraph.]
Re: Evaluating Jones' SoccerRatings and the RPI
Thanks again, UPSF. I understand a little more now.
So, the U. of Podunk upset mighty Navy, maybe lost the rest of their games while Navy won the rest of theirs, so that Navy ends up with a much higher RPI rating. No matter how ratings might be adjusted by season's end, it was simply an upset, and it will keep the RPI's correlation with results from being a perfect 100%. If teams in the Jones system with the higher rating won 73.3% of the time, to use one example, doesn't that just mean that there were upsets 26.7% of the time? Or, put another way, there would have to be no upsets for a rating to be correct 100% of the time. Am I correct? I think I'm missing something, probably something obvious. Maybe the goal isn't a 100% score, but the best score?
fan from afar- First man off the Bench
Re: Evaluating Jones' SoccerRatings and the RPI
Fan from afar... I think the ultimate goal is a system in which upsets happen at equal rates throughout all the regions. Upsets are to be expected and are part of the game. But what UPSF has shown is that certain regions have significantly higher/lower rates of upsets in inter-regional games, which means that the RPI is overvaluing/undervaluing teams from particular regions. And, no surprise, the West region gets the short end of the stick (i.e. lower RPI scores than they deserve).
Stonehouse- Draft Pick
Re: Evaluating Jones' SoccerRatings and the RPI
I understand that part, Stoney, and I was pretty sure it was true anyway.
I wonder if any of the "powers" will do something to correct some of this. I realize that money has to be a part of the equation in seeding, home games, etc., but they really don't seem to be as concerned as they should be about presenting teams with a fair playing field for the tournament. It really seems to slant towards my end of the country.
New avatar?
fan from afar- First man off the Bench
Re: Evaluating Jones' SoccerRatings and the RPI
Stonehouse wrote: Fan from afar... I think the ultimate goal is a system in which upsets happen at equal rates throughout all the regions. Upsets are to be expected and are part of the game. But what UPSF has shown is that certain regions have significantly higher/lower rates of upsets in inter-regional games, which means that the RPI is overvaluing/undervaluing teams from particular regions. And, no surprise, the West region gets the short end of the stick (i.e. lower RPI scores than they deserve).
Absolutely correct. In the information I provided in my first post on this thread, 100% would represent the "norm," meaning a team winning the same percentage of games (73.3% for Jones), in which it has a higher rating than its opponent, as the "average" team wins; and being "upset" in the same percentage of games (26.7% for Jones), in which it has a higher rating than its opponent, as the "average" team is upset. (No matter what a rating system's average "should win/does win" and "should win/doesn't win" percentages are, their total always will be 100%.) If all the regions deviated only slightly from 100% in their actual percentages, it might be attributable to pure randomness. The very large deviations in the RPI data, however, seem far too large to be random. Further, to me the deviations in Jones' data also appear too large to be random. He may be able to shed some light on that. (In addition, based on work I did last year, I'm confident there would be similar results if I were to run last year's games through the same process.)
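To show what "attributable to pure randomness" would look like, here's a quick simulation. The game counts are made up; with roughly 150 inter-regional games on each side of the ledger, chance alone rarely moves a region's combined percentage more than about ten points from 100%.

```python
# Monte Carlo of the null hypothesis: every favorite wins with the same
# probability p regardless of region. How far does a region's combined
# percentage stray from 100% by chance alone? Game counts are made up.

import random

p = 0.733                 # favorites' overall success rate
n_fav = n_dog = 150       # hypothetical inter-regional games for one region
random.seed(1)

deviations = []
for _ in range(10_000):
    wins_fav = sum(random.random() < p for _ in range(n_fav))        # expected wins
    wins_dog = sum(random.random() < (1 - p) for _ in range(n_dog))  # upsets
    metric = 100 * (wins_fav / n_fav + wins_dog / n_dog)
    deviations.append(abs(metric - 100))

deviations.sort()
print("median deviation:", round(deviations[5000], 1),
      "| 95th percentile:", round(deviations[9500], 1))
# If the RPI's observed deviations (21.8 points for the West) sit far beyond
# the simulated 95th percentile, pure randomness is a poor explanation.
```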
Re: Evaluating Jones' SoccerRatings and the RPI
The obvious picks are, well, obvious. These folks are the decision makers for the not-so-obvious. I was quite surprised to see two members from Oregon and one at our own beloved UP. If the West is getting dunked, it is these folks that are executing the power jam. If I read this right, the majority of this committee are FBS (pointy ball) people.
Division I Women's Soccer Committee
Legislation: Ten members, including six FBS representatives and four Division I or FCS representatives. One member shall be selected from each of the six women's soccer regions (Northeast, Mid-Atlantic, Southeast, Great Lakes, Central, West) and four members shall be selected at-large. No more than two may be from any one region. Quota of 50 percent administrators: 5.
Liaisons: Keshia Campbell, Teresa Smith
Chair: Barry Clements (Sep 2008 - Sep 2009)
FBS: Matt Wolfert, Associate A.D. / Community Relations, Ball State University, Mid-American Conference (term ends Sep 2012)
FBS, Central Region: Paul S. Bradshaw, Associate Athletics Director, Baylor University, Big 12 Conference (Sep 2010)
FBS, Great Lakes Region: Robert L. Klatte, Head Women's Soccer Coach, Purdue University, Big Ten Conference (Sep 2010)
FBS, Southeast Region: Barry Clements, Peer Reviewer, Associate Athletic Director, University of South Florida, Big East Conference (Sep 2009)
FBS: Lisa Campos, SWA, Associate A.D./SWA, University of Texas at El Paso, Conference USA (Sep 2011)
FBS, West Region: Marianne Vydra, SWA, Associate AD, Oregon State University, Pacific-10 Conference (Sep 2011)
FCS: Victoria Chun, SWA, Senior Associate AD, Colgate University, Patriot League (Sep 2012)
DI, Mid-Atlantic Region: Melissa Conti, Assistant Commissioner for Championships, Colonial Athletic Association (Sep 2011)
DI, West Region: Buzz Stroud, Associate Athletic Director, University of Portland, West Coast Conference (Sep 2009)
DI, Mid-Atlantic Region: Tanya Vogel, Head Women's Soccer Coach, George Washington University, Atlantic 10 Conference (Sep 2010)
Auto Pilot- Starter
Re: Evaluating Jones' SoccerRatings and the RPI
Buzz Stroud and Marianne Vydra (OSU) being on the Women's Soccer Committee actually may be important in terms of rule changes and getting the Committee to modify its selection factors, including modifying the RPI, if that can be proved feasible, to take into account the regional problem. However, once the Committee actually is making decisions on at large selections and seeds, there are conflict of interest rules the Committee members must observe. Basically, they have to leave when something related to their team is at issue; and I believe there also may be a rule related to a team from their conference being at issue. Geez, do you have the details on that? If not, I may be able to find them somewhere. In addition, Larry Williams is on the Division I Cabinet that oversees all of the sports committees, which is where the real power is.
I think personal emails from fans to Buzz complaining about the discrimination against the West Region, which a PilotNation poster has proved to exist and be significant after three years' study, would be great. He's finishing up his term next September and will have nothing to lose if he makes a big stink between now and then. If we can get Buzz and Marianne to take the issue on, and maybe recruit some Great Lakes Region help, maybe we can make some headway.
Re: Evaluating Jones' SoccerRatings and the RPI
UPSoccerFanatic wrote:
I think personal emails from fans to Buzz complaining about the discrimination against the West Region, which a PilotNation poster has proved to exist and be significant after three years' study, would be great. He's finishing up his term next September and will have nothing to lose if he makes a big stink between now and then. If we can get Buzz and Marianne to take the issue on, and maybe recruit some Great Lakes Region help, maybe we can make some headway.
I think that would be a waste of time and effort. First off, I don't think the UP administration are rabble-rousers by nature (they are into that hierarchy thing, you know; the Catholic Church is one of the best at that), and though Buzz's leadership of the committee will be done, I don't think Buzz wants to finish his career at UP just yet.
More significantly, the committee Buzz sits on has relatively little decision making power. They are given the guidelines and are basically the grunts that carry them out. Buzz can't even be in the room when any selection or seeding decisions about UP are discussed.
The committee that oversees the one Buzz is on, the Division I Championships/Sports Management Cabinet, does have some power, but most of the big decisions come from even higher.
This September, there was the now-famous Travel memo, which gives a pretty good insight into what the chain of command is. Read who is telling what to whom in that memo.
You'll see the NCAA staff guides the Championships/Sports Management Cabinet under the direction of the NCAA Executive Committee, which is pulling the real strings.
Among the decisions was to push through this piece of work.
What's significant about that? Well, it solidifies the decision currently in place to deem geography paramount to all the other criteria that the NCAA bylaws say should be used. Specifically, it includes a requirement that teams should be placed in brackets geographically per NCAA Bylaw 31.1.3.2.5, even though Bylaw 31.1.3.2.1 states that other selection criteria must be used as well:
(a) Quality and availability of the facility and other necessary accommodations;
(b) Revenue potential (e.g., a financial guarantee or guideline that ensures fiscal responsibility and is appropriate for the particular event, as recommended by the governing sports committee and approved by the Championships/Sports Management Cabinet); (Revised: 11/1/07)
(c) Attendance history and potential;
(d) Geographical location; and
(e) Championships operating costs. (Revised: 11/1/01)
All the good stuff we've been screaming about for years. You'll see the rules we like are already there, just ignored. Getting rid of them by membership vote would be too much trouble.
More importantly for this discussion, it shows clearly the order of hierarchy in the NCAA. The Selection Committee isn't even mentioned in the travel memo, and they are the ones that will implement the policy.
The RPI decisions work the same way. The Selection Committee is just following the guidelines set by those above. For soccer, those committees would be the Competition Games Committee for soccer rules and, in the middle of the hierarchy, the Division I Championships/Sports Management Cabinet for RPI and tournament decisions. It also appears the NCAA staff is heavily involved, as the travel memo shows.
The short of it is that the people to lobby are way above Buzz's level. I think Larry Williams is on the Division I Championships/Sports Management Cabinet, I don't know who to lobby on the NCAA staff, and I believe Fr. Beauchamp is on the Executive Committee.
Just to be clear, I'm NOT saying that's who you should email complaints to. I don't think that's how changes in the NCAA happen politically. (fiscal decisions, maybe, but real competition policy is political, and that would entail membership voting.)
The way to really get things done, I believe, is to go after the voting interests in the NCAA membership. Lobby the members from regions and conferences that are getting hosed, specifically the BCS conferences. It would be easier that way. The BCS schools have 60% of the voting but only perhaps 20% of the schools, so lobbying them would be the most effective. Perhaps the easiest step would be to lay out the case to conference executives.
That and lobby whatever staff oversees the folks who manage the RPI. They could be very influential in guiding the Cabinet.
Geezaldinho- Pilot Nation Legend
Re: Evaluating Jones' SoccerRatings and the RPI
Geez, although for some things you're right -- the travel rules are a good example, as they involve millions of dollars -- I've followed the NCAA process in relation to how the different sports use the RPI. The different sports do use it differently: for example, the different sports have different bonus/penalty award amounts in the adjustment process; men's ice hockey just eliminated use of the adjustments; men's soccer just adopted more refined adjustments; and some sports (most notably basketball but also, as of this year, men's soccer) include an adaptation in the RPI formula to value home wins/losses differently than away wins/losses (a home win counts only as .6 of a win and an away win counts as 1.4 wins). These variants of the RPI typically originate in the specific sports committees, in our case the Division I Women's Soccer Committee, of which Buzz presently is the chair and will be until September. The sports committee interested in a change makes a recommendation to the Competitions Cabinet, which has to approve the change. Such changes in how the RPI is used, if I recall correctly, do not have to get a higher level of approval. Or, if they do have to go to the next level, these kinds of changes typically get rubber stamp approval at that level.
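To illustrate the home/away adaptation mentioned above: in the basketball version of the formula, as I understand it, the weighting applies inside the winning-percentage element. A sketch, with the mirror-image loss weights (home loss 1.4, away loss 0.6) being my assumption based on the basketball convention, and ties omitted for brevity:

```python
# Sketch of home/away weighting in the winning-percentage element:
# home win = 0.6, neutral = 1.0, away win = 1.4. The mirror-image loss
# weights are my assumption, following the basketball convention.

WIN_WEIGHT = {"home": 0.6, "neutral": 1.0, "away": 1.4}
LOSS_WEIGHT = {"home": 1.4, "neutral": 1.0, "away": 0.6}

def weighted_wp(results):
    """results: list of ('W' or 'L', venue) pairs for one team."""
    wins = sum(WIN_WEIGHT[v] for r, v in results if r == "W")
    losses = sum(LOSS_WEIGHT[v] for r, v in results if r == "L")
    return wins / (wins + losses) if wins + losses else 0.0

# Two 2-1 teams: one did its winning at home, the other on the road.
print(weighted_wp([("W", "home"), ("W", "home"), ("L", "home")]))  # ~0.462
print(weighted_wp([("W", "away"), ("W", "away"), ("L", "away")]))  # ~0.824
```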
I would love it if the specific sports' committees' minutes were published, but they are not. However, the Competitions Cabinet's minutes are published, and when a specific sports committee has requested a tweaking of the RPI as used for their sport, the minutes reflect that as well as the ultimate action of the Cabinet. I do check the NCAA website periodically to see when the Cabinet is meeting and whether any issues I'm interested in are on the agenda. That's how I learned about ice hockey's getting rid of the adjustment process and the changes for men's soccer.
Now, if I were going to try to get the NCAA to stop using the RPI and use Jones' or Massey's systems instead, I think that would be revolutionary from the NCAA's perspective and your information on where to go to achieve that would be right. Given that the NCAA uses the RPI for every DI sport, and allows the use of other rating systems and polls only for basketball (I think, but don't know for sure, that that's the only sport where they deviate from pure RPI), I have felt that trying to get the NCAA away from the RPI as the only measuring device for DI Women's Soccer is not worth spending my time on. I think using Jones and Massey to reinforce conclusions I've reached from studying the RPI directly is ok, but I've not even tried to argue to the NCAA that it should use either of those systems in the DI Women's Soccer decision-making process. My effort, instead, is to show the shortcomings in how the RPI actually works and to try to get the NCAA to recognize those shortcomings and, if possible, make adjustments for them.
I think with this year's work, I have proved beyond doubt that the RPI has an inherent problem with inequality among regions. Now that I understand what the non-conference RPI is and have seen the NCAA use it in the seeding process, I'm certain that the NCAA knows that the RPI has an inherent problem with inequality among conferences. That being the case, I find it pretty hard to believe that long before I came along the NCAA didn't know there also were regional problems. Whether or not that's right, however, we now know that the NCAA has been willing to try to address the inequality among conferences issue at the specific sport level. Thus there's precedent, at a sports-specific level, to try to get them to address the inequality among regions issue. The real question is whether there's a way to tweak the RPI to do that. The non-conference RPI has done that to get at the conference problem, but at the cost of a sacrifice in the overall accuracy of the ratings. I'd like to develop a tweak to get at the regional problem, but without sacrificing overall accuracy.
The technical problem that separates how the NCAA has dealt with the conference problem from how it would have to deal with the regional problem is that teams play 50% or more of their games out-of-conference, whereas with one exception (the Mid Atlantic Region), teams play well less than 50% of their games out-of-region, with the West Region being the most extreme at only 20% or so out-of-region games. What this means is that you can't do a non-regional RPI using the same approach the NCAA uses to do the non-conference RPI and then simply look at the ratings of all the teams to see where they come in. There are too many weird anomalies (for example, when a team plays one game out of the region and comes away with what obviously is an upset win). What this means is we have to look not at how specific teams have done non-regionally but instead at how a region's teams have done non-regionally on average and then make an overall adjustment for all teams in the region based on that average. I can tell you that I've already developed one way of doing this that produces overall results just as accurate overall as the adjusted and unadjusted RPIs (and Jones) and more accurate than the non-conference RPI. I want to refine that method some to see if I can do even better, so I'm not ready to publish it yet. But, if there's a tweak such as what I'm working on that is as reliable as the RPI overall and that deals with the regional problem, that is something that would fit within the sports-specific RPI variant process for which there's plenty of NCAA precedent and that typically would start with a request from the Women's Soccer Committee. This also is something that Buzz could get on the table before he leaves the committee without necessarily having to make a big deal about it. Right now, it's not an issue that is even on the table so far as I know, so just getting it on the table would be a big step forward.
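Since I'm not ready to publish my method, here is only the general shape of the idea, sketched in Python. The scale factor is a made-up knob, not my actual calibration.

```python
# Hedged sketch of a region-level adjustment: convert each region's average
# over/under-performance in inter-regional games into a uniform rating bump
# applied to every team in the region. The scale constant is hypothetical.

def regional_adjustment(region_metrics, scale=0.0005):
    """region_metrics: region -> combined % from the earlier analysis
    (100.0 = performing exactly to rating)."""
    return {region: (metric - 100.0) * scale
            for region, metric in region_metrics.items()}

def adjusted_ratings(ratings, region_of, region_metrics):
    bump = regional_adjustment(region_metrics)
    return {team: r + bump[region_of[team]] for team, r in ratings.items()}

# With the unadjusted-RPI metrics from this thread, the West's teams all
# move up and the Mid-Atlantic's all move down:
metrics = {"West": 121.8, "Great Lakes": 106.7, "Southeast": 101.6,
           "Northeast": 98.1, "Central": 91.7, "Mid-Atlantic": 91.6}
print(regional_adjustment(metrics))
```

The test, as I said, is whether an adjustment like this can close the regional gaps without reducing the overall correlation with results.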
The other place a change such as this would need support, or at least not intransigent opposition, is from the NCAA staff that is responsible for the RPI. There's a guy who handles the RPI as used for DI Women's Soccer, with whom I correspond periodically, and there's also a guy who's overall responsible for the RPI, with whom I've also corresponded. I'll be approaching both of them on this issue.
Other institutions from the negatively affected regions also are going to need to pitch in, so I'll have to find a way to communicate with them and encourage them to speak up.
It all may be tilting at windmills. But, hey, better to try than do nothing!
Re: Evaluating Jones' SoccerRatings and the RPI
UPSF, you have done amazing work and I encourage you to continue your investigation. You are definitely on to something. I would add that I believe that the "East Coast Bias" is systemic throughout all NCAA-sanctioned sports. It seems to me that if you could prove bias in the BCS and in the RPI that Men's Basketball uses and offer your proof to the media, the media would have a field day with it. To me, that's where the real pressure lies. If you could do that, you might just trigger other official investigations into the NCAA. Now that would be something!
-SciFi
Re: Evaluating Jones' SoccerRatings and the RPI
SciFi, one thing I should emphasize, which I do when I'm discussing this with people not from the West Region, is that the RPI's discrimination is not necessarily a West Coast/East Coast thing. It is a strong region/weak region thing. In essence, the RPI always will discriminate against strong regions and in favor of weak regions. If I were going to really characterize the RPI, I would say that in rating teams for participation in the NCAA tournaments and getting seeds, it favors distributing them on a geographically balanced basis in proportion to the number of teams in each region. This is directly inconsistent with an explicit NCAA policy that says the best teams should participate and get seeds, regardless of conference (substitute "region"), which I take to mean that it's irrelevant if a bunch of at large teams (or seeds) come from one conference or, conversely, no at large teams (or seeds) come from a conference. The reason the West Region gets the shaft these days in Women's Soccer is that the West Region, by a lot, is the strongest region. If that ever changes, then it will be some other region getting the shaft.
The only way there would be blanket discrimination would be if the West Region were the strongest in all sports. I'm sure that isn't true right now in lacrosse and ice hockey.
Re: Evaluating Jones' SoccerRatings and the RPI
SciFi, I am not sure investigations would produce much more than we already know about bias in women's soccer. Now, if money were changing hands, that would be a horse of a different color.
Also, as you state, with anything to do with basketball or the fairly recent BCS, you would have coaches and sports commentators clamoring across the nation.
Auto Pilot- Starter
Re: Evaluating Jones' SoccerRatings and the RPI
UPSoccerFanatic wrote: SciFi, one thing I should emphasize, which I do when I'm discussing this with people not from the West Region, is that the RPI's discrimination is not necessarily a West Coast/East Coast thing. It is a strong region/weak region thing. In essence, the RPI always will discriminate against strong regions and in favor of weak regions. If I were going to really characterize the RPI, I would say that in rating teams for participation in the NCAA tournaments and getting seeds, it favors distributing them on a geographically balanced basis in proportion to the number of teams in each region. This is directly inconsistent with an explicit NCAA policy that says the best teams should participate and get seeds, regardless of conference (substitute "region"), which I take to mean that it's irrelevant if a bunch of at large teams (or seeds) come from one conference or, conversely, no at large teams (or seeds) come from a conference. The reason the West Region gets the shaft these days in Women's Soccer is that the West Region, by a lot, is the strongest region. If that ever changes, then it will be some other region getting the shaft.
The only way there would be blanket discrimination would be if the West Region were the strongest in all sports. I'm sure that isn't true right now in lacrosse and ice hockey.
This is why I put "East Coast Bias" in quotes. There just is no solid proof of this. However, many fans do carp about "East Coast Bias" in basketball and in football, which is why I mention it at all.
That there is bias in college sports is a reasonable conclusion to make. You've proven it in Women's Soccer. I'm just making the suggestion that the best way to put pressure on the NCAA to address said bias is to involve the press. The only way to involve the press, it seems to me, is to prove bias in Men's Basketball and in Men's Football (BCS). The press (and the NCAA) simply won't care about bias in Women's Soccer; but, by showing bias across the board in all sports (or in a significant number of sports), you can pressure the NCAA to change THE WHOLE system, thereby getting the system changed in Women's Soccer. I, again, applaud your efforts and encourage you to continue and to broaden your investigation. I really wish I could help, but I am just not at that level when it comes to math.
Auto Pilot wrote: SciFi, I am not sure investigations would produce much more than we already know about bias in women's soccer. Now, if money were changing hands that would be a horse of a different color.
Also, as you state, anything to do with basketball or the fairly recent BCS, you would have coaches and sports commentators clammoring across the nation.
Others may disagree with me, but I have, for a long time, suspected that there are and have been shenanigans going on in the NCAA. I think investigations could turn up A LOT of shady activity.
-SciFi
Re: Evaluating Jones' SoccerRatings and the RPI
Today, I worked on analyzing the Massey rating system. Massey's system is important because it is one of the systems included in the "computer" rating used for football BCS purposes. The result is extremely interesting, at least to me. Massey's and Jones' ratings' correlation with actual 2008 regular season game results is exactly the same. In fact, although their team rankings do not match exactly, in every game (of the more than 3,000 total games) they had the same team with a higher ranking.
I also should note that although I reported Jones as having a 73.3% correlation with actual game results, that is based on comparing his ratings covering games through the first weekend of the NCAA Tournament to results through the end of the regular season. Since the ratings were based on a slightly different set of games than the games I applied them to, I would expect that Jones' correlation would be slightly better if I'd been able to apply ratings through the end of the regular season to the regular season's games. (Jones did not publish ratings through the end of the regular season.) I had the same problem with Massey.
So, Jones, Massey, and the unadjusted (and adjusted) RPI have equally good correlations to regular season results as a whole. However, although Jones and Massey both are like the RPI in that they overrate some regions and underrate others, their overrating and underrating are significantly less than those of the RPI.
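For anyone curious about the mechanics, the correlation number is conceptually simple: of the decided games, in what fraction did the higher-rated team win? Here's a minimal Python sketch (hypothetical; my actual work is Excel formulas, and the file name and column layout here are invented):

```python
import csv

def rating_accuracy(games_file, ratings):
    """Fraction of decided games in which the higher-rated team won.

    games_file: CSV with columns home, away, home_goals, away_goals.
    ratings:    dict mapping team name -> rating (higher is better).
    Ties are skipped, since neither team beat the other.
    """
    correct = decided = 0
    with open(games_file, newline="") as f:
        for g in csv.DictReader(f):
            hg, ag = int(g["home_goals"]), int(g["away_goals"])
            if hg == ag:
                continue  # tie: no winner to check against the ratings
            winner, loser = (g["home"], g["away"]) if hg > ag else (g["away"], g["home"])
            if ratings[winner] == ratings[loser]:
                continue  # equal ratings make no prediction
            decided += 1
            if ratings[winner] > ratings[loser]:
                correct += 1
    return correct / decided if decided else None

# e.g. rating_accuracy("games_2008.csv", jones_ratings) would give ~0.733
```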
Re: Evaluating Jones' SoccerRatings and the RPI
Add my thanks for the great work UPSF.
One question with respect to regional bias: what if regions vary in the quality of the teams that undertake inter-regional play? Suppose, hypothetically, that in the West region only the best teams play Eastern teams and the weaker teams do not. It's probable, for example, that Portland, UCLA, Stanford, etc. play a more "national" schedule than Gonzaga, Oregon, St. Mary's, etc. But further suppose that this is less true in other regions, perhaps because the distances are smaller; in the East, many schools can get to an inter-regional game on a bus.
This would seem to lead to the West teams having a better inter-regional record than teams in the other regions simply because only the best teams in the West are playing inter-regionally.
So I have two questions that I could probably answer with enough head scratching over your formulas but that you can surely answer directly:
1. Do these differences exist?
2. Do they matter for your regional RPI rankings or for the Jones and Massey ratings?
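Here's roughly the check I have in mind for question 1, as a Python sketch (the data layout and names are all invented, since I don't know how your spreadsheet is organized):

```python
from statistics import median

def interregional_selection(games, region, rank):
    """Compare each region's teams overall against the subset that
    actually played inter-regional games.

    games:  list of (team_a, team_b) pairs for games played
    region: dict mapping team -> region name
    rank:   dict mapping team -> national rank (1 = best)
    """
    travelers = {}  # region name -> teams that played inter-regionally
    for a, b in games:
        if region[a] != region[b]:
            travelers.setdefault(region[a], set()).add(a)
            travelers.setdefault(region[b], set()).add(b)
    for reg, teams in sorted(travelers.items()):
        everyone = [t for t in region if region[t] == reg]
        print(f"{reg}: median rank {median(rank[t] for t in everyone)} overall, "
              f"{median(rank[t] for t in teams)} among inter-regional teams")
```

If a region's inter-regional teams are ranked much better than the region as a whole, the selection effect I'm describing exists there.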
SoreKnees- First man off the Bench
- Number of posts : 685
Age : 71
Location : Portland
Registration date : 2008-02-05
Re: Evaluating Jones' SoccerRatings and the RPI
Forget what I said about Massey's ratings last night. I made an error, and it turns out his rating strength comparisons do not match exactly with Jones'. In addition, Massey's ratings correlate very slightly better with actual results than Jones' do. That means I have to do separate computations for how Massey treats the regions.
Soreknees, your questions are interesting. You're absolutely correct that the Mid Atlantic region, as an example, abuts three other regions, which makes scheduling inter-regional games relatively easy. And, as you point out, the travel distances are much smaller, so teams with low budgets (presumably including some lower-ranked teams) are more able to travel. However, as I think about it, why would that explain one region's teams out-performing the RPI and other regions' teams under-performing? At least as far as I've gotten, I don't think it would.
Re: Evaluating Jones' SoccerRatings and the RPI
The reason I thought it might is that inter-regional comparisons by necessity rely on the outcomes of inter-regional games. If one region only sends its best teams to inter-regional games, then it will have a better record than regions that send a balanced mix of teams to play other regions. This wouldn't be a factor if only the top teams from the West played Eastern teams as long as they only played the top teams in the East. But if the top teams in the West were feasting on weaker Eastern teams and that was the only inter-regional play for the West Region, then it seems like it could give an overly optimistic picture of the West's quality.
Does this make sense or am I worried about nothing?
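A toy simulation of the effect I'm worried about (all numbers invented, just to show the mechanism):

```python
import random

random.seed(1)

# Two regions drawn from identical strength distributions.
west = [random.uniform(0, 100) for _ in range(50)]
east = [random.uniform(0, 100) for _ in range(50)]

# But only the ten best West teams play inter-regionally,
# against a random mix of East teams.
west_travelers = sorted(west, reverse=True)[:10]
wins = games = 0
for _ in range(200):
    w = random.choice(west_travelers)
    e = random.choice(east)
    games += 1
    if w > e:  # for simplicity, the stronger team always wins
        wins += 1
print(f"West inter-regional win rate: {wins / games:.0%}")
# Prints roughly 90%, even though the two regions are equally
# strong overall: a selection effect, not real superiority.
```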
SoreKnees- First man off the Bench
- Number of posts : 685
Age : 71
Location : Portland
Registration date : 2008-02-05
Re: Evaluating Jones' SoccerRatings and the RPI
It seems to me that what UPSF is talking about is the difference between which team the rating system says should win and which team actually does win.
If a strong West team plays a patsy, the rating system should reflect a win for the West team. If it plays a supposedly stronger team, it should lose.
Apparently UPSF has shown that many more "upsets" happen than should when the West plays out of region. That would indicate a skewed rating system.
And a rating system that takes regional differences into account should show the same percentage of upsets whether a team is playing inter-regional games or intra-regional games.
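Something like this sketch would test it; if the rating system is region-neutral, the two upset rates should come out about the same (the data layout is invented):

```python
def upset_rates(games, ratings, region):
    """Upset rate (the higher-rated team losing) for intra- versus
    inter-regional games.

    games:   list of (winner, loser) pairs for decided games
    ratings: dict team -> rating; region: dict team -> region name
    """
    buckets = {"intra": [0, 0], "inter": [0, 0]}  # [upsets, games]
    for winner, loser in games:
        kind = "intra" if region[winner] == region[loser] else "inter"
        buckets[kind][1] += 1
        if ratings[winner] < ratings[loser]:
            buckets[kind][0] += 1  # the lower-rated team won
    return {kind: u / n for kind, (u, n) in buckets.items() if n}

# A clearly higher inter-regional upset rate for one region's teams
# would point to a regionally skewed rating system.
```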
Geezaldinho- Pilot Nation Legend
- Number of posts : 11852
Location : Hopefully, having a Malbec on the square in Cafayate, AR
Registration date : 2007-04-28
Re: Evaluating Jones' SoccerRatings and the RPI
Thinking about this some more, it seems to me a comparison of accuracy should only include perhaps the top 100 teams or so.
(Maybe 100 isn't the right number, but it would tell us more than ranking the whole field does.)
The stated purpose of the RPI is to select the best teams for competition in the tournament, so it makes no difference if the RPI is correctly ranking teams that have no chance to make the tournament. Whether a team is ranked correctly at 320 or 267 is of no concern as far as the Championship is concerned.
That would presumably be the top 50 or so which currently get in because of their ranking, and you could double that to be sure you are counting all possible candidates.
I suppose that might exclude some automatic qualifiers, but they'd get in no matter what their RPI was.
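In code terms, the restriction is just a filter applied before computing the accuracy (a sketch; "rank" means whatever ranking the system under test produces):

```python
def top_n_accuracy(games, ratings, rank, n=100):
    """Rating accuracy counting only decided games in which both
    teams are ranked in the top n by the system under test.
    (Loosen the filter to 'either team' if that seems fairer.)
    """
    correct = total = 0
    for winner, loser in games:  # decided games only
        if rank[winner] > n or rank[loser] > n:
            continue  # outside the candidate pool; irrelevant to selection
        if ratings[winner] == ratings[loser]:
            continue  # equal ratings make no prediction
        total += 1
        if ratings[winner] > ratings[loser]:
            correct += 1
    return correct / total if total else None
```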
Geezaldinho- Pilot Nation Legend
- Number of posts : 11852
Location : Hopefully, having a Malbec on the square in Cafayate, AR
Registration date : 2007-04-28
Re: Evaluating Jones' SoccerRatings and the RPI
SoreKnees, the Geez is right. The RPI is supposed to take into consideration both winning record and strength of schedule, and to balance them correctly so as to rank teams properly based on both. So, if top West Region teams are feasting on weak Mid Atlantic Region teams, the West Region teams' excellent winning records will be balanced out by their poor strengths of schedule. That is the fundamental theory of the RPI, and it is what the NCAA claims the RPI does in as fair and accurate a way as is reasonably possible. So, as Geez says, if the RPI is working correctly, West Region teams on average should win the same percentage of games relative to their RPI rankings as other regions' teams do, regardless of who they have been playing in their inter-regional games, since their rankings already are supposed to have taken their opponents into account.
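For reference, the unadjusted RPI is a fixed blend of those two things: 25% the team's own winning percentage, 50% its opponents' winning percentage, and 25% its opponents' opponents' winning percentage. Here's a sketch (simplified; the NCAA's actual calculation also removes each opponent's games against the rated team from that opponent's record):

```python
def win_pct(record):
    """record = (wins, losses, ties); a tie counts as half a win."""
    w, l, t = record
    games = w + l + t
    return (w + 0.5 * t) / games if games else 0.0

def unadjusted_rpi(team, records, opponents):
    """Unadjusted RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP.

    records:   dict team -> (wins, losses, ties)
    opponents: dict team -> list of opponents played (with repeats)
    """
    wp = win_pct(records[team])
    opps = opponents[team]
    owp = sum(win_pct(records[o]) for o in opps) / len(opps)
    oowp = sum(
        sum(win_pct(records[o2]) for o2 in opponents[o]) / len(opponents[o])
        for o in opps
    ) / len(opps)
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp
```

(As I see it, the regional problem comes in because an opponent's winning percentage is earned mostly against its own region, so the RPI's strength of schedule measure never fully escapes the region.)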
Geez, I agree with you that the testing process should look at the top 100 teams, too, probably broken down into some discrete groups to allow an analysis based on likely seeded teams and likely at large teams. That's on my "to do" list to see how the various rating systems treat those teams on a regional basis. I've already done it overall (as distinguished from regionally) for the unadjusted, adjusted, and non-conference RPIs as well as for my first run at a non-regional RPI (but not for Jones or Massey). I can tell you that looking only at the top 100 as ranked by each system, the rankings' correlations with game results are slightly better than the correlations when looking at all 318 teams, but higher by only a couple of percentage points. Further, when looking at the correlations for groups of ten (teams ranked 1-10, 11-20, 21-30, etc.), the correlations are best for the higher ranked teams and poorest for the lower-ranked teams.
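For the groups-of-ten breakdown, one way to slice it (I won't claim this is exactly how my spreadsheet does it) is to bucket each decided game under each involved team's rank group:

```python
def accuracy_by_rank_group(games, ratings, rank, top=100, width=10):
    """Accuracy for games involving teams ranked 1-10, 11-20, ...

    games: list of (winner, loser) pairs for decided games.
    A game is counted once under each involved top-ranked team.
    """
    buckets = {}  # group start rank -> [correct, total]
    for winner, loser in games:
        if ratings[winner] == ratings[loser]:
            continue  # equal ratings make no prediction
        for team in (winner, loser):
            if rank[team] > top:
                continue
            start = (rank[team] - 1) // width * width + 1
            tally = buckets.setdefault(start, [0, 0])
            tally[1] += 1
            if ratings[winner] > ratings[loser]:
                tally[0] += 1
    return {f"{s}-{s + width - 1}": c / t
            for s, (c, t) in sorted(buckets.items())}
```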