Teams' Trends

Discussion in 'Women's College' started by cpthomas, Nov 22, 2018.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    Over on the Hot Seat thread, Collegewhispers asked some good questions. I'm setting up a new thread here for discussion in order to avoid hijacking the Hot Seat thread. Here are the questions:

    "Out of curiosity do you have any information on what newer programs (last 7-9 years perhaps) look like in comparison to each other? Do some have significantly better or worse trends than others?

    Also, over the last 3-4 seasons have there been many programs that have significantly changed their trajectories for better or worse? I’m assuming there are a fair few just curious though how many there are. I suppose ‘significant’ is subjective but maybe just as a general common sense guideline any programs that have moved significantly up or down. May see a pattern of coaches on the hot seat and those doing well enough to move onwards and upwards."

    I'll do this in a couple of parts. The question came from a discussion about Colorado State, which is a "new program," as distinguished from a "transitioning program," so I'll give the info I have on that question in this post.

    For each team, I have a chart that shows its ARPIs from year to year and a statistical straight line trend line that represents the team's progress, for good or for ill. The trend line shows the team's trended rate of improvement or decline. I also have the year the team started and the team's rank for its first year and for its most recent year so you can see the rank change. I also have the team's conference and the conference's 2018 ARPI rank, since I think that's important context for the numbers, as I'll explain after I show the numbers.
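
    For anyone who wants to reproduce this kind of trend line, here's a minimal Python sketch of the idea -- not my actual spreadsheet, and the ARPI numbers below are made up purely for illustration:

        import numpy as np

        # Hypothetical yearly ARPI ratings for one team (made-up numbers).
        years = np.array([1, 2, 3, 4, 5, 6])   # Year 1 = first season
        arpi = np.array([0.4500, 0.4620, 0.4580, 0.4710, 0.4790, 0.4850])

        # Fit a straight line, arpi ~ slope*year + intercept, by least squares.
        slope, intercept = np.polyfit(years, arpi, 1)

        # slope is the trended rating change per year -- the same kind of
        # number as the +0.0175 per year for Colorado State below.
        print(f"trend rate: {slope:+.4f} ARPI per year")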

    This is for teams that have new soccer programs, not for teams that had soccer and re-classified to Division 1.

    Chicago State (2013), Western Athletic Conference, which is ranked #29 among conferences. Trend rate of -0.0110 per year (meaning its trended ARPI rating is declining by 0.0110 per year). 2013 ARPI rank of 329, 2018 rank of 334.

    Colorado State (2013), Mountain West #16. +0.0175 per year. Rank improvement from 297 to 179.

    Hampton (2015), Big South #23. +0.0350 per year. Rank improvement from 333 to 325. (Note: Although this is a big improvement per year, teams are spaced quite far apart at the bottom of the rankings where Hampton is -- at the top, too -- so the big improvement rate doesn't result in a big rank improvement.)

    Illinois Chicago (2014), Horizon #28. +0.0014 per year. Rank improvement from 301 to 299. Basically, the program has stayed about the same.

    Kansas State (2016), Big 12 #1 (Independent for first year). +0.0125 per year. Rank improvement from 205 to 158.

    New Mexico State (2009), Western Athletic Conference #29. -0.0089 per year. Rank loss from 197 to 321.

    Texas Corpus Christi (2013), Southland #18. +0.0139 per year. Rank improvement from 307 to 230.

    Texas RGV (2014), Western Athletic Conference #29. +0.0312 per year. Rank improvement from 300 to 134.

    UMKC (2009), Western Athletic Conference #29. +0.0164 per year. Rank improvement from 315 to 130.

    Comment: Of these teams, I think that the rank improvements for the WAC teams are suspect. WAC is one of the poorest conferences. One of the RPI's biggest problems is that it overrates teams from the weaker conferences and underrates teams from the stronger conferences. So, I suspect that the WAC teams are overrated and thus over-ranked. It's possible that Kansas State, conversely, is underrated and under-ranked.

    Additional Comment: The current rank difference between Kansas State's #158 and Colorado State's #179 is relatively insignificant in that area of the rankings. They're roughly in the middle, where teams' ratings are very compressed, so a difference of ~20 ranking positions represents a small rating difference. (I see that Kansas State at home beat Colorado State 3-2 in 2017.)
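
    To illustrate the compression point with a toy example (made-up numbers, not the real ratings): if ratings are dense in the middle of the table and sparse at the ends, the same rating gain buys very different rank moves depending on where a team starts.

        import numpy as np

        # Synthetic ratings for 335 teams: dense in the middle, sparse at
        # the ends (a normal distribution has that shape). Illustration only.
        rng = np.random.default_rng(0)
        ratings = np.sort(rng.normal(0.50, 0.05, 335))[::-1]  # best first

        def rank_after_gain(old_rank, gain):
            # Rank a team would hold after adding `gain` to its rating.
            new_rating = ratings[old_rank - 1] + gain
            return int(np.sum(ratings > new_rating)) + 1

        print(rank_after_gain(170, 0.01))  # mid-table: jumps many places
        print(rank_after_gain(10, 0.01))   # near the top: moves only a few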

    Of course, this is only one slice of evidence on how these teams are doing. I think another factor, especially for how a team does in its first year, is how much advance time the coach had to put the first year's team together. I assume the team's conference also makes a difference -- it seems likely it's easier to attract good players to a Power 5 conference new team than it is to a lower level conference new team. There are lots of variables.
     
  2. cmonyougulls

    cmonyougulls Member

    Nov 24, 2011
    Club:
    Corinthians Sao Paulo
    Massey Ratings for CSU - trend is definitely improving year on year.
     


  3. cpthomas

    Good post cmonyougulls.

    After a lot of thinking and doing some comparisons of RPI ranks to Massey's and my 5 Iteration RPI's ranks, I'm going back to the drawing board on team trends. I've concluded that the RPI's problems (difficulties ranking teams from different conferences in relation to each other) are causing problems with the trends that Massey and my 5 Iteration RPI (which are reasonably similar) don't have. So, I'm going to redo all my trend tables and charts (that's for 335 teams) using one of them, probably the 5 Iteration RPI. And, I'm going to use teams' rank trends rather than their rating trends since ranks do a better job of communicating where a team stands than their ratings do.

    I'll be back in a while, after I've redone everything. Ugh!
     
  4. cpthomas

    I'm trying to figure out how to post pictures of Excel charts here without going through a complicated process I don't like. So, there's a chart below that I want to use to introduce the kind of information I'm going to post here. I can see the chart exactly as I want it, but please let me know if you can't see it, so I can tell whether this works. Below the chart, I have some explanation and comments.

    [Chart: Abilene Christian rank history, Year 1 = 2014]

    EXPLANATION

    In this chart, I used Abilene Christian mainly because it's first in the alphabet for Division I teams.

    The chart shows the team's rank history since 2014 because that's the first year it had a team in Division I. This was not a "from scratch" team, but rather was a program reclassifying to Division I.

    The vertical axis is teams' ranks. There currently are 335 teams (increasing to 337 next year), and for ease of reading I've set the line markers in 20 rank position increments, thus showing lines from rank 0 to rank 340. By doing this, if you compare the chart for one team to the chart for another, they'll both be using the same scale.

    The horizontal axis is calendar years. Year 1, for Abilene Christian, is the first year it had a Division I team, which was 2014. For most teams, Year 1 will be 2007, which is the year I began collecting data. Both at the top in the chart title, and at the bottom in the legend, I'll always indicate which year Year 1 refers to. Also, in the horizontal axis legend, I'll identify the coach as of the most recent year (in this case 2018) and the year the coach became the team's head coach. In the Abilene Christian chart, Casey Wilson has been the coach since it entered Division I in 2014. For many teams, the coach will not have been head coach for the entire period covered by the chart. You'll be able to look at the legend, see when the current coach became head coach, and see how the team was doing in the period before he/she became head coach and how it has done in the period after. I've also included the team's current conference in the legend.

    The chart shows the team's rank for each year covered by the chart, in the dot data points. It shows the variation in the ranks over the years in the solid line that connects the data points. It also shows the trend of the team's ranks in the dotted straight line. This is a computer generated line that statistically represents the data points using the assumption that over the time covered by the chart, the team's strength has been moving in a consistent direction notwithstanding the ups and downs from year to year. When a new coach comes in, that may not be a correct assumption, so it's important in that case to look to see the rank pattern before and after the coach came in. Sometimes, I may post two charts, one for the period before the new coach came in and one for after, so you can see if there's been a trend change accompanying the coaching change.

    There are three sets of solid/dotted lines on the chart, as indicated at the bottom of the chart. The blue set is for Massey rankings. I have done detailed analyses of different rating systems to see which are the best. In my opinion, Massey's are the best for Division I women's soccer. The orange set is for the current NCAA's Adjusted RPI formula used for Division I women's soccer. (I've applied this formula retroactively for prior years.) The grey set is for the 5 Iteration ARPI formula I've developed.
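
    By the way, for anyone who'd rather script these charts than fight with Excel, here's a rough sketch of how a chart under the same conventions could be built with Python's matplotlib. The team, coach, and rank numbers are placeholders, not real data:

        import numpy as np
        import matplotlib.pyplot as plt

        years = np.arange(1, 6)                       # Year 1 = 2014 here
        ranks = {"Massey": [320, 300, 310, 280, 270],          # made up
                 "NCAA ARPI": [300, 290, 295, 265, 250],
                 "5 Iteration ARPI": [310, 295, 300, 270, 260]}

        fig, ax = plt.subplots()
        for system, r in ranks.items():
            line, = ax.plot(years, r, marker="o", label=system)  # solid line, dot markers
            slope, intercept = np.polyfit(years, r, 1)           # dotted straight trend line
            ax.plot(years, slope * years + intercept,
                    linestyle=":", color=line.get_color())

        ax.set_ylim(0, 340)
        ax.set_yticks(range(0, 341, 20))  # same scale on every team's chart
        ax.set_xlabel("Year 1 = 2014, Coach: <name> (<start year>), Conference: <conf>")
        ax.set_ylabel("Rank")
        ax.set_title("Example Team, Rank History")
        ax.legend()
        plt.show()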

    At the right side ends of the blue and orange trend lines are formulas. In each formula, y is a team's trended rank for any year, and x is the year of the rank. For example, in the above chart, if you want to know the team's trended rank in year 1 on the chart, you would substitute 1 for x in the formula. Since the chart runs to 2018 (year 5 in the above chart), if you want to know the team's trended rank for 2019, you would substitute 6 for x in the formula. The last item in the formula is a number: what the trend line rank would be at year 0 on the left side of the chart, if there were a year 0. (I haven't included a formula for the grey line since it would clutter the chart up so much you wouldn't be able to read the other formulas.)

    In the trend line formula, a key number is the one that goes with the x. That number is the number of rank positions a team is improving by or declining by per year, according to its trend.
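
    As a made-up worked example: if the formula at the end of a trend line reads y = -12x + 250, the team's trend is an improvement of 12 rank positions per year (remember that a falling rank number is an improvement), and its trended rank in year 6 would be -12(6) + 250 = 178.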

    COMMENTS

    Comparing the blue and orange data and trend lines in the above chart, Massey consistently shows Abilene Christian with poorer rankings than the NCAA's ARPI. Looking at the grey line, the 5 Iteration ARPI has rankings somewhere between the two. This is somewhat typical but not universal. Here's why:

    1. The NCAA's ARPI has a structural problem, due to how it calculates strength of schedule. The problem causes it, on average, to underrate teams from strong conferences and overrate teams from weak conferences. Massey does not have this problem; in other words, it rates teams from the different conferences properly in relation to each other. My 5 Iteration ARPI also has mostly corrected this problem. Thus the NCAA's ARPI ranks for teams from weaker conferences often are better than they should be, Massey's ranks are just about right, and the 5 Iteration ARPI's ranks are in between. According to the NCAA's ARPI, Abilene Christian's Southland conference is the #18 conference; the 5 Iteration ARPI says it's #20; and Massey says it's #25. Thus the NCAA's ARPI has a good chance of giving Abilene Christian too high a rank.

    2. In order for teams from different conferences and regions of the country to be properly rated in relation to each other, there must be enough inter-conference and inter-regional games -- there must be enough "correspondence" among conferences and regions. If there aren't enough, rating formulas will tend to underrate teams from strong conferences and regions and to overrate teams from weak conferences and regions. In Division I women's soccer, there isn't enough correspondence among conferences and regions to overcome this tendency for most formulas. Massey is pretty much able to overcome this problem; I have no idea how. The NCAA's ARPI is not able to overcome the problem. And my 5 Iteration ARPI again appears to fit somewhere in the middle.

    The end result of this is that Massey's ranks for teams are the most reliable, as they don't have the NCAA ARPI's strength of schedule problem and they pretty much are able to overcome the insufficient correspondence problem. The 5 Iteration ARPI's ranks are the next most likely to be correct. The NCAA ARPI's ranks are the least reliable.
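
    For anyone not familiar with the RPI's internals, here's a bare-bones sketch of the basic RPI skeleton -- the usual 0.25/0.50/0.25 weighting of winning percentage, opponents' winning percentage, and opponents' opponents' winning percentage. It leaves out the NCAA's bonus/penalty adjustments, tie handling, and the rule excluding games against the team being rated, so treat it as an illustration rather than the NCAA's exact formula. The structural problem is visible right in the code: the strength of schedule elements are built from opponents' winning percentages rather than opponents' ratings, and strong-conference teams drag each other's winning percentages down by beating each other.

        from collections import defaultdict

        def rpi(games):                          # games: list of (winner, loser)
            record = defaultdict(lambda: [0, 0])      # team -> [wins, losses]
            opps = defaultdict(list)                  # team -> opponents faced
            for w, l in games:
                record[w][0] += 1
                record[l][1] += 1
                opps[w].append(l)
                opps[l].append(w)

            def wp(t):                           # element 1: winning pct
                w, l = record[t]
                return w / (w + l)

            def owp(t):                          # element 2: opponents' WP
                return sum(wp(o) for o in opps[t]) / len(opps[t])

            def oowp(t):                         # element 3: opponents' opponents' WP
                return sum(owp(o) for o in opps[t]) / len(opps[t])

            return {t: 0.25 * wp(t) + 0.50 * owp(t) + 0.25 * oowp(t)
                    for t in record}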

    I've included all three in my basic charts since Massey is the most reliable, the NCAA's ARPI is what the NCAA uses, and my 5 Iteration ARPI is an improvement over the NCAA's ARPI that the NCAA could use without giving up its basic RPI formula.
     


  5. cpthomas

    As I go through the process of re-doing my trend charts, in alphabetical order, I'll post occasional ones that seem interesting, either because of what they may communicate about teams' and coaches' success or lack of success or because they may show something significant about how to evaluate teams' and coaches' trends. Here's the chart for American, which I found interesting:

    [Chart: American rank history, 2007 through 2018]

    As you can see, Massey, the ARPI, and the 5 Iteration ARPI all rank American about the same. It's a Patriot League team. My system for evaluating how well a rating system's ratings correlate with actual game results says that all three of the rating systems rate the Patriot League teams, on average, almost exactly right. That being the case, I would expect their rankings for at least most Patriot League teams to be about the same, and that's the case with American.

    As the chart shows, American's rankings got worse pretty consistently over the first 11 years covered by the chart. As the chart indicates, Hering became the head coach in 2013. For her first five years, the trend shown in the chart continued, so as of the end of the 2017 season her Athletic Director could have concluded that she wasn't going to be able to change American's negative direction. But look at 2018. Massey has American with an improvement of 99 ranking positions, the NCAA's ARPI 81, and the 5 Iteration ARPI 84. So, after 6 years (2018), her tenure has a different look and an athletic director easily could think that maybe it's just taken this long to turn the program around and she should get a longer look to see if 2018 is just an anomaly or if it has significant meaning.

    What this gets at, in part, is the question of how long it takes to turn around a program in a long term decline. Some have asserted that a new coach should be able to do it right away. Others have said it takes a lot of time. In reality, it probably depends on the specifics of the program the coach inherited.

    This also gets at the question of how you know when a program has changed direction. One year's results in most cases aren't enough to show a turn around. But they may be an indicator that possibly a turn around has begun.

    What I take away from a chart like this is that if you pay close attention to and understand the numbers, there's not a clearly right decision for an AD to make. If it were me, I'd be inclined to wait another year. If 2018 is an anomaly, they won't be much worse off with one more year of decline; and if 2018 indicates a turn around, let's see if she can take it further.

    Of course, whether the American AD pays attention to stuff like this, I don't know.
     
  6. cpthomas

    I'm going to post my Appalachian State chart because it is a good one for making a point about the NCAA's ARPI as compared to Massey and the 5 Iteration ARPI.

    [Chart: Appalachian State rank history]

    As the chart shows, the NCAA's ARPI tends to overrate Appalachian State, and thus to give it better ranks, compared to Massey and the 5 Iteration ARPI -- the lower on these charts, the better the rank.

    My program for evaluating a rating system looks at each game played, of which I have about 36,000 in my database. For each game, it considers the two teams' ratings, as adjusted for home field advantage. It then looks to see whether the better rated team won, tied, or lost the game. After doing that, it looks at different aspects of how the ratings do:

    1. It looks overall (and in various breakdowns) at how well the ratings correlate with the game results:

    a. For the NCAA's ARPI, its ratings adjusted for home field advantage have the higher rated team winning 72.6% of the time, tying 10.7%, and losing 16.7%.

    b. For Massey, the numbers are 73.1%, 10.7%, and 16.2%.

    c. For the 5 Iteration ARPI, the numbers are 73.2%, 10.7%, and 16.1%.

    You might conclude that these numbers by themselves show that the NCAA's ARPI is a poorer rating system, but they really don't show much. The difference between the NCAA's ARPI and Massey is 0.5%. For one year's games, each 0.1% represents 3 games, so what the numbers show is that Massey gets about 15 more game results right than the NCAA's ARPI each year, out of about 3,000 games. That's not much. (For anyone who wants to see the mechanics of this tally, there's a rough sketch below, after point 4.)
    2. It looks at how well the ratings correlate with the game results, but limited to games involving at least one Top 60 team:

    a. For the NCAA's ARPI, the "higher rated team wins" number is 77.8%.

    b. For Massey it is 77.6%.

    c. For the 5 Iteration ARPI it is 77.8%.

    Here, each 0.1% represents 1 game, so the NCAA's ARPI gets about 2 more game results right than Massey each year, out of about 1,000 games. Again, that's not much.
    3. What points 1 and 2 show is that, in terms of how well ratings correlate overall with game results, there really isn't much difference among rating systems. This doesn't mean, however, that there aren't important differences among systems.

    Every system is going to miss results; it's unavoidable. In an ideal system, the missed results will be randomly distributed so that no team or group of teams is discriminated against, or favored, by the system's inability to get every game result right.

    4. The problem with the NCAA's ARPI is that its missed results are not randomly distributed. Rather, they are distributed in a way that produces ratings that discriminate, on average, against teams from stronger conferences and in favor of teams from weaker conferences. My evaluation program also measures this discrimination.

    For each conference's teams, it looks at all of their non-conference games. For each game, it considers the two teams' ratings, as adjusted for home field advantage, and looks to see whether the better rated team won, tied, or lost. From there, it looks to see, of the games the conference's teams were supposed to win, what percentage they actually won; and of the games they were supposed to lose, what percentage they actually tied or won. From these numbers, the program calculates a performance percentage. A performance percentage of 100% means the conference's teams, on average, performed exactly in accord with their ratings; above 100% means they had better results than their ratings say they should have; below 100% means they performed more poorly than their ratings say they should have.

    The program calculates the performance percentages of all the conferences and then compares them to the conferences' strength as determined by their average ratings. If conferences' performance percentages are about the same across the board, then the rating system is rating the conferences' teams fairly in relation to each other. This is the case for Massey and, to a lesser extent, for the 5 Iteration ARPI. For the NCAA's ARPI, on the other hand, the comparison shows something completely different: stronger conferences tend to have performance percentages above 100% and weaker conferences tend to have performance percentages below 100%. In other words, the NCAA's ARPI, on average, underrates teams from stronger conferences and overrates teams from weaker conferences.
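
    For anyone who wants to see the mechanics, here's a rough sketch of the win/tie/loss tally from points 1 and 2 above. This isn't my actual program; the data layout and the home field adjustment value are placeholders:

        HOME_BUMP = 0.01   # assumed rating credit for the home team

        def tally(games, ratings):
            # games: (home, away, home_goals, away_goals, neutral) tuples
            win = tie = loss = 0
            for home, away, hg, ag, neutral in games:
                rh = ratings[home] + (0 if neutral else HOME_BUMP)
                ra = ratings[away]
                # Which team do the adjusted ratings favor?
                fav_goals, other_goals = (hg, ag) if rh >= ra else (ag, hg)
                if fav_goals > other_goals:
                    win += 1
                elif fav_goals == other_goals:
                    tie += 1
                else:
                    loss += 1
            n = win + tie + loss
            return win / n, tie / n, loss / n   # e.g. 0.726, 0.107, 0.167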
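
    And here's a companion sketch of the performance percentage idea from point 4. I haven't spelled out above how the "supposed to win" and "supposed to lose" numbers combine into a single percentage, so this sketch just computes the two pieces:

        def conference_performance(games, ratings, members):
            # games: (team, opp, team_goals, opp_goals) tuples, with
            # ratings already adjusted for home field; members is the
            # set of teams in the conference being evaluated.
            fav_games = fav_wins = dog_games = dog_points = 0
            for team, opp, tg, og in games:
                if team not in members or opp in members:
                    continue                        # non-conference games only
                if ratings[team] > ratings[opp]:    # supposed to win
                    fav_games += 1
                    fav_wins += tg > og
                elif ratings[team] < ratings[opp]:  # supposed to lose
                    dog_games += 1
                    dog_points += tg >= og          # tied or won anyway
            return fav_wins / fav_games, dog_points / dog_games
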
    In the chart above for Appalachian State, you can see the effect of this. It is in the Sun Belt conference. For the NCAA's ARPI, the Sun Belt's performance percentage is 90.8% (NCAA's ARPI ranks the conference as #14). In other words, its teams, on average, perform more poorly than their ratings say they should perform. For Massey, its performance percentage is 102.4%, meaning its teams perform pretty closely to how Massey's ratings say they should perform (Massey ranks the conference as #16). For the 5 Iteration ARPI, its performance percentage is 98.3% (it ranks the conference as #16).

    These differences explain the above chart. In the chart, the NCAA's ARPI ranks assigned to Appalachian State in most cases are better than the ranks the other systems assign. And Appalachian State's trend line for the NCAA's ARPI makes them look like a stronger team than do the trend lines for the other systems. Which trend line is correct? Of the three, Massey's is closest to correct because Massey's system is relatively free of discrimination. Next comes the 5 Iteration ARPI, which is close to Massey but not quite as good. The NCAA's ARPI is the poorest, by a good margin.

    What this means, when evaluating teams' trends, is you need to be looking at the correct trend line -- which would be Massey's. Or, if you're looking only at NCAA ARPI trend lines, you need to take the teams' conference strength into account and adjust your thinking accordingly.
     
  7. cpthomas

    And, to give a picture of the other end of the spectrum from Appalachian State, here is the chart for Arizona:

    [Chart: Arizona rank history]

    As the chart shows, the NCAA's ARPI in most cases has assigned a poorer rank than it should. This is consistent with the Pac 12's performance percentage:

    For the NCAA's ARPI, performance percentage of 110.5%, so its teams are underrated;

    For Massey, performance percentage of 103.6%, so the Pac 12 teams' ratings, on average, are close to right although they're still slightly underrated; and

    For 5 Iteration ARPI, performance percentage of 104.5%, so significantly better than the NCAA's ARPI, but not quite as close to correct as Massey.
    Thus if you want the best read on where the Arizona program is going, you'll use Massey's ranks and trend line. (It's also possible to produce a chart that covers the period since Arizona made a coaching change, or perhaps better starting with the year before the change. That requires a bit of extra work, however, so at this point I'll wait for other cases to do that.) Or, if you're looking only at the NCAA's ARPI rankings, you'll need to take into account that Arizona's in the Pac 12, so the NCAA ARPI rankings are not going to be as good as they should be.
     
  8. Collegewhispers

    Collegewhispers Member+

    Oct 27, 2011
    Club:
    Columbus Crew

    This is very interesting. I wonder if a lot of losing programs (like really bad losing programs) take around 6 years to turn it around. It makes sense in some ways that it takes several years to recruit enough good kids to trend upwards. American was bad for many years, but this shows that maybe it just takes some patience and the direction of a team can be turned. Great stuff CP!
     
  9. Cantcoach

    Cantcoach Member

    Barcelona
    United States
    Dec 29, 2017

    I don’t think it necessarily takes that long to turn a program around. South Dakota has a new coach this year who immediately turned a program that was a traditional Division I struggler into a 9-win season, so it can be done. This is just one season, so it will be interesting to see over time whether they can sustain it, but it seems very possible.
     
  10. Carolina92

    Carolina92 Member

    Sep 26, 2008
    I feel like another good measure is to look at the performance of the women's soccer program in relation to the other sports programs at the school and the school's overall athletics performance. Massey keeps all this info and ranks athletics programs. You can see which sports/teams are dragging schools' rankings down or giving them a big boost. ADs eat that stuff up. It's interesting to see which programs are radically under/overperforming relative to the overall athletics environment at the university.
     
  11. cpthomas

    Here's a first post on a coach whose team trend suggests he might be a possibility to move up a level: Craig Roberts at Ball State.

    The first chart is for Ball State since 2007:

    [Chart: Ball State rank history, 2007 through 2018]

    This is the same format as the charts in earlier posts, except that I've added something to the horizontal axis title that begins "Year 1 = 2007." In that title, after the team's conference, in this case the Mid-American, I've added some numbers. The numbers are the three rating systems' performance percentages for the Mid-American conference. I explained the performance percentages in an earlier post. The PP for the NCAA's ARPI is 97.8%, meaning that rating system, on average, very slightly overrates teams from the Mid-American. For Massey it is 100.8%, which means Massey, on average, rates teams from the Mid-American almost exactly right. For my 5 Iteration ARPI it is 102.1%, which means that system, on average, very slightly underrates teams from the conference. The three rating systems are very similar in how they show Ball State's rank trend, which is what one would expect from the similarity of performance percentages.

    You can look at this chart and see that Roberts came in and turned a team that was tanking into a team that is improving.

    To better show how Ball State's direction has changed under Roberts, here's a chart that covers only 2007 through 2009, the three years before he arrived:

    [Chart: Ball State rank history, 2007 through 2009]

    As you can see from the blue (Massey) trend line formula on the right side of the page by the trend line, over this time period Ball State's ranking was deteriorating at the rate of 74.5 rank positions per year.

    And, here's a chart that starts with 2009, the year before Roberts arrived, and runs to 2018. I started with the year before he arrived to show how Ball State has done as compared to where they were right before he arrived.

    [Chart: Ball State rank history, 2009 through 2018]

    As you can see from this chart, since 2009 Ball State has been improving at the rate of 15.667 ranking positions per year. This is as compared to the previous chart's deterioration at the rate of 74.5 positions per year.
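
    For those curious, the before-and-after split is easy to reproduce: fit separate straight lines to the rank series on each side of the coach's arrival. A quick sketch with made-up ranks (the split logic, not Ball State's real numbers):

        import numpy as np

        def split_trends(years, ranks, coach_start):
            # Separate rank trends: before the coach arrived, and from
            # the year before arrival onward (as in the charts above).
            years, ranks = np.asarray(years), np.asarray(ranks)
            before = years < coach_start
            since = years >= coach_start - 1
            slope_before, _ = np.polyfit(years[before], ranks[before], 1)
            slope_since, _ = np.polyfit(years[since], ranks[since], 1)
            return slope_before, slope_since    # rank positions per year

        # Hypothetical: coach arrives in 2010; a negative slope means the
        # team's rank number is falling, i.e. the team is improving.
        yrs = list(range(2007, 2019))
        rks = [150, 220, 290, 300, 280, 260, 240, 220, 200, 180, 170, 160]
        print(split_trends(yrs, rks, 2010))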

    Together, the charts show that something quite good has been happening at Ball State since Roberts arrived. It seems like that would make him a potential candidate for a job at the next level.

    As I go through the alphabet of teams, I'll be posting charts like these for teams/coaches with patterns similar to this one.
     
  12. cpthomas

    It looks to me like Central Arkansas' Jeremy Bishop might be a candidate for moving up to the next level. The first chart is for Central Arkansas from 2007 through 2018. The second is for 2007 through 2013, the year before Bishop arrived. The third is for 2013 (the year before Bishop arrived) through 2018. He inherited an improving team, at the rate of about 7 rank positions per year. He has the team at an improvement rate of 29 positions per year.

    [Chart: Central Arkansas rank history, 2007 through 2018]

    [Chart: Central Arkansas rank history, 2007 through 2013]

    [Chart: Central Arkansas rank history, 2013 through 2018]
     
  13. cpthomas

    Neil Stafford at Cincinnati is a possible candidate for moving up, but the team's 2018 season might give cause for hesitation. He arrived there in 2013. The team's trend from 2007 through 2012 was a decline of about 10 ranking positions per year. Looking at 2012 (the year before he arrived) through 2018, the team's trend is an improvement of about 10 ranking positions per year. But, 2018 shows a significant drop (that nevertheless is incorporated into the 10 positions per year improvement since he took over). If Cincinnati next year can recover the ground it lost this year, he might look like a more viable upward moving candidate.

    Note: For teams in the American Athletic Conference, I don't show performance percentages. That's due to its being a relatively new conference for which I don't have enough data to make performance percentages reliable. Massey's ratings/ranks should be the best for the American, as they are the best overall.

    [Chart: Cincinnati rank history, 2007 through 2018]

    [Chart: Cincinnati rank history, 2007 through 2012]

    [Chart: Cincinnati rank history, 2012 through 2018]
     
  14. Collegewhispers

    Neil has done really well at Cinci. Guy works his ass off but I don’t see him leaving this year (I agree with you he is ready for another step though). He has a good gig there and it’s been impressive to see the progress that program has made.
     
  15. cpthomas

    Here's the College of Charleston's chart, which I'm posting as a case study for when an AD should or shouldn't make a decision about a coach:

    [Chart: College of Charleston rank history, 2007 through 2018]

    As you can see from the chart, Michner started in 2010. He had a good first year, but then it looks like things went to hell. Right around 2013-2014, his Athletic Director might have been wondering if it was time for a new coach. But if you look at the longer term, 2014 now looks like the beginning of a trend towards better and better ranks. 2018 is Michner's year 9, and given the direction over the last 5 years, there's the possibility for even more improvement in year 10.

    So, how much time does it take to see what a coach can do with a program?
     
  16. cpthomas

    Here are three charts for Colorado, the first showing their ranks since 2007 and the next two showing their ranks and trends under coaches Hempen (through 2011) and Sanchez (since 2012) respectively. There's been discussion about the Hempen-to-Sanchez transition on the Hot Seat thread. The charts bear out Norfolk's assertion that the team has improved a lot under Sanchez. The reason I'm posting new charts here, though, is that they illustrate how the NCAA's ARPI, in its discrimination against strong conferences, can be a significant problem when it comes to the NCAA Tournament. The charts show that the NCAA's ARPI has significantly underrated Colorado. The NCAA's ARPI this year ranked Colorado #49. Massey, which has minimal discrimination against strong conferences, ranked Colorado #24 -- even though Massey itself still somewhat underrates Pac 12 teams. If the Women's Soccer Committee had looked at Colorado as ranked #24, I don't think there's any way they would have been left out of the Tournament this year.

    [Chart: Colorado rank history, 2007 through 2018]

    [Chart: Colorado rank history under Hempen, through 2011]

    [Chart: Colorado rank history under Sanchez, since 2012]
     
  17. cpthomas

    Here's a chart for DePaul, which suggests Erin Chastain may be a candidate for moving up a level.

    Not related to her, a coach gave me an interesting perspective on what an athletic director at a higher level might consider in evaluating a successful coach at the next level down. Is the coach's team doing better than one ordinarily would expect given the various constraints under which the coach is operating? This could be indicated by the coach's team doing better than other successful teams in the same conference that have fewer constraints.

    [Chart: DePaul rank history]
     
  18. cpthomas

    Drexel's trend suggests Ray Goon may be a candidate for an upward move, although when I suggested this a year ago, others said they didn't think so. He's been there a long time, so he might not have any interest in moving, which always is a factor. In any case, Drexel's improvement trend is at the rate of 12 rank positions per year.

    Not related to Goon, but the coach I referred to in my preceding post also pointed out two important and different aspects of the head coach job. One is the coaching of the players. The other is the "Chief Executive Officer" job of managing the overall program. Some may be good at one of these, some at the other, and some at both.

    [Chart: Drexel rank history]
     
  19. cpthomas

    Although Eastern Washington was off this year, Chad Bodnar still is looking like a candidate for moving up a level. The regular three charts:

    [Chart: Eastern Washington rank history, full period]

    [Chart: Eastern Washington rank history, before Bodnar arrived]

    [Chart: Eastern Washington rank history, from the year before Bodnar arrived]
     
  20. cpthomas

    Florida Atlantic's Patrick Baker looks like another candidate for moving to a higher level. That is, if he wants to leave. The three charts:

    [Chart: Florida Atlantic rank history, full period]

    [Chart: Florida Atlantic rank history, before Baker arrived]

    [Chart: Florida Atlantic rank history, from the year before Baker arrived]
     
  21. L'orange

    L'orange Member+

    Ajax
    Netherlands
    Jul 20, 2017
    Very interesting stuff, CP--thanks for the analysis.
     
  22. Lord Kril

    Lord Kril Member

    Pittsburgh Riverhounds
    Jul 3, 2018
    I would love to see some ACC/SEC #s. BP at Tenn, MM at Ole Miss, CH at Arkansas. Wake, VT, BC
     
  23. cpthomas

    No surprise, probably, this chart shows Jim Blankenship at FGCU as a possible candidate to move up a level, although things possibly have leveled out over the last 6 or so years:

    [Chart: FGCU rank history]
     
  24. cpthomas

    Todd Bramble at George Mason looks like another move-up possibility, although it may be a little too soon to tell. He arrived about midway through the period since 2007; the regular three charts are below. A reminder: the equation at the right of each chart gives the rate of change of the team's ARPI rank trend. For George Mason, looking at the period since 2007 as a whole, there is an improvement of about 3 rank positions per year, which looks pretty stable. From 2007 through 2014, however, before Bramble arrived, the trend was a loss of about 13 rank positions per year. From 2014 (the year before Bramble arrived) through 2018, the trend is an improvement of about 42 rank positions per year. Obviously, there will be at least a flattening out at some point. The question is where?

    [Chart: George Mason rank history, 2007 through 2018]

    [Chart: George Mason rank history, 2007 through 2014]

    [Chart: George Mason rank history, 2014 through 2018]
     
  25. cpthomas

    I'm upgrading the charts in alphabetical order, so most of these aren't ready yet, but here is Arkansas, and in a while I'll do Boston College. The 2007 through 2018 Arkansas chart is pretty good for the entire period, but I've also included the 2007 through 2010 period before Colby Hale arrived and the 2010 through 2018 period. The rate of rank improvement actually is greater, in number of positions, before he arrived, but that's deceptive. During that period, Arkansas was in the middle of the rankings, where teams' ratings are compressed and a small rating improvement allows a team to jump up a bunch of positions. As teams near the top of the ratings (or the bottom), the ratings are more spread out and it takes a much more significant rating improvement to improve your rank. Something I wonder about, looking at the charts, is whether Arkansas is getting to a point where its improvement is flattening out, but it's probably too soon to tell.

    [Chart: Arkansas rank history, 2007 through 2018]

    [Chart: Arkansas rank history, 2007 through 2010]

    [Chart: Arkansas rank history, 2010 through 2018]
     
