NCAA D1 Tournament Bracketology

Discussion in 'Women's College' started by cpthomas, Dec 16, 2015.

  1. cpthomas

    cpthomas BigSoccer Supporter

    Portland Thorns
    United States
    Jan 10, 2008
    Portland, Oregon
    Nat'l Team:
    United States
    I think it will be worth devoting a thread to Tournament bracket formation generally, as a place to discuss broader issues than those related to what the Committee did in specific years. I'm imagining that this could be a thread that carries over from year to year.

    I'm hoping this thread mostly will be serious discussion about the Committee, the factors the Committee is required to use in making at large selections and is allowed to use in seeding, any formal changes in those factors, any "informal" changes in the bracket formation decision process over time, and topics like that. Of course, anyone can post anywhere, and I know that people sometimes have serious beefs with particular Committee decisions, but I hope this thread won't be overwhelmed by posters simply venting when they're unhappy with the Committee. On the other hand, good faith questions about how the Committee possibly could have made such a dumb-ass decision are useful as they can create a chance for analysis of how the Committee's decision actually might -- or might not -- have been reasonable. What I'd like to avoid is posters who really aren't interested in what might be rational explanations for decisions they don't like and simply want to gripe. There are plenty of other threads where they can do that.

    Anyway, with that in mind, shortly I'll put up a substantive post that hopefully will prove interesting at least to some of you.
     
  2. cpthomas

    cpthomas BigSoccer Supporter

    One aspect of the RPI is that, after the NCAA computes the basic RPI (the Unadjusted RPI in my parlance and the Normal RPI in the NCAA's), sports committees have the authority to make adjustments. Not all sports use adjustments, but Division I women's soccer does. The adjustments are bonuses for good wins and ties and penalties for poor ties and losses.

    When I first started working on the RPI in 2007, teams' RPIs could be adjusted based on good or poor results in all games, both conference and non-conference. In addition, the adjustment scale was relatively high: the possible adjustments, per good or bad result, ranged from 0.0032 to 0.0008.

    In 2010, the Committee changed the adjustment amounts. The range of possible adjustments, per result, shrank to 0.0024 to 0.0002. I believe this was in response to critiques from the mid-major conferences about the fairness of the adjustment process. The basic point of the critique was that, as a matter of simple mathematics, teams from the major conferences had more opportunities than mid-major teams to play opponents against whom positive results would fall in the bonus area. This point was true.

    In 2012, the Committee made another change to the adjustment system. It stopped giving adjustments for in-conference results. It also dramatically changed the definition of opponents considered poor enough that ties or losses to them warranted penalties, so that far fewer penalties were applied to teams. These changes appear to have been in response to similar critiques from the mid-majors.

    (As an aside, what the Committee appears not to have recognized, and what the NCAA certainly never has admitted, is that the RPI tends to discriminate against the major/strongest conferences. The bonus system partially, but not completely, offsets this discrimination. Thus the Committee's changes actually increased the RPI's discrimination against the major/strongest conferences.)

    In 2013, the adjustment amounts changed once more, this time due to automatic changes that occur when the numbers of teams playing a sport increase. The changes were not very consequential, with the possible adjustments now ranging from 0.0026 to 0.0002.
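    To make the mechanics concrete, here is a minimal sketch of how per-result bonuses and penalties layer onto the basic RPI. The amounts in the example are placeholders drawn from the per-result ranges quoted above, not the actual adjustment table.

```python
# Minimal sketch: per-result bonuses and penalties layered onto the basic RPI.
# The amounts below are placeholders within the quoted per-result range,
# not the actual adjustment table.

def adjusted_rpi(unadjusted_rpi, bonus_amounts, penalty_amounts):
    """bonus_amounts / penalty_amounts: one entry per qualifying result,
    each falling somewhere in the current 0.0002 to 0.0026 per-result range."""
    return unadjusted_rpi + sum(bonus_amounts) - sum(penalty_amounts)

# A team with two qualifying good results and one qualifying poor result.
print(round(adjusted_rpi(0.6000, [0.0026, 0.0010], [0.0002]), 4))  # 0.6034
```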

    A significant question is whether these changes actually had any noticeable effect on the Committee's decisions.

    As I've posted about elsewhere, I've developed a system of "standards" that I use based on the Committee's NCAA Tournament decisions over the last nine years -- 2007 through 2015. Each standard is associated with one or two of the factors the Committee is required to use in making its NCAA Tournament at large selection decisions. If a team meets a "Yes" standard, it means that every team that has met that standard in the past has received a favorable decision from the Committee on the decision the standard relates to -- for example, if a team over the last nine years has met at least one "Yes #1 Seed" standard, it always has received a #1 seed. And, if a team meets a "No" standard, it means the converse -- thus, if a team has met at least one "No #1 Seed" standard, it never has received a #1 seed.

    Following the 2015 decisions, I updated the standards to incorporate the Committee's 2015 Tournament decisions. In addition, the standards I'd been using were based on using, for each year going back to 2007, the RPI formula in effect for that year. This time, when I did the update, I instead used, for each year, the RPI formula currently in effect. I did this because it creates a more apples-to-apples derivation of the standards and will make the standards more valid as a basis for judging whether the Committee's decisions next year are consistent with its past practice (assuming the Committee doesn't change the formula again). In doing this, I had to go back through all the years to see whether there were standards I needed to revise based on teams' rating changes. There were, although in most cases the changes were pretty small.

    Once I have the standards finalized, using a #4 seed as an example, a team can fit into any of three categories:

    1. The team meets at least one of the "Yes #4 Seed" standards. If it does, then, the way I set the standards, it will meet no "No Seed" standards.

    2. The team meets at least one of the "No Seed" standards. If it does, then it will meet no "Yes #4 Seed" standards.

    3. The team meets no "Yes" and no "No" standards.
    So, when I am looking at a prior season's end-of-regular-season data (including conference tournaments), after going through the #1 through #3 seeds, I'll get to the #4 seeds. I'll look to see which teams (that the standards have not already seeded as #1 through #3) meet at least one "Yes #4 Seed" standard. All of those teams received #4 seeds. I'll also look to see which teams meet at least one "No Seed" standard. None of those teams received #4 seeds. I'll then look at the list of teams that meet no "Yes" and no "No" standards. Any still-open #4 seed slots will have been filled by teams from that list. And, when I get to next year, by the same process I can get a pretty good picture of who will and won't receive #4 seeds, if the Committee's decisions are to be consistent with its decisions over the last nine years. This thus is a good way (1) to make an educated guess about what decisions the Committee will make and to identify where the hard decisions are and (2) to see whether the Committee is being relatively consistent from year to year.
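    To make the three-category logic concrete, here is a minimal sketch, in Python, of sorting a season's candidates into "Yes," "No," and undecided groups for one decision level. The team data and the specific cutoffs are hypothetical illustrations, not my actual standards.

```python
# Minimal sketch of the "Yes" / "No" / undecided classification described above.
# The standards shown are hypothetical illustrations, not the real ones.

def classify(team, yes_standards, no_standards):
    """Return 'yes', 'no', or 'undecided' for one team at one decision level."""
    if any(standard(team) for standard in yes_standards):
        return "yes"        # teams meeting such a standard always got the seed
    if any(standard(team) for standard in no_standards):
        return "no"         # teams meeting such a standard never got the seed
    return "undecided"      # the group the Committee's hard choices come from

# Hypothetical single-factor standards for a #4 seed.
yes_4seed = [lambda t: t["arpi_rank"] <= 8]
no_seed = [lambda t: t["arpi_rank"] >= 30]

teams = [{"name": "Team A", "arpi_rank": 6},
         {"name": "Team B", "arpi_rank": 14},
         {"name": "Team C", "arpi_rank": 35}]

for team in teams:
    print(team["name"], classify(team, yes_4seed, no_seed))
# Team A yes, Team B undecided, Team C no
```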

    So, back to my earlier question about the Committee's bonus/penalty amount changes over the recent past and whether those changes actually had any noticeable effect on the Committee's decisions.

    Given that the Committee made its decisions in the 2007 to 2009 time period using what seem to be significantly different bonus/penalty amounts and a significantly different structure than currently, and that the standards I now am using are based on the current bonus/penalty amounts and structure, one might anticipate that for the years 2007 through 2009, there would be more decisions for the Committee to make involving teams that meet no "Yes" and no "No" standards than currently. If so, this would indicate that the bonus/penalty changes have had at least some effect on the Committee's decisions.

    So, I took a look at how many decisions the standards "dictated" over each of the years from 2007 to 2015, how many were left to choices among the no "Yes" and no "No" lists, and also how many teams were on the no "Yes" and no "No" lists as compared to the numbers of open slots not filled by teams on the at-least-one-"Yes" list.

    Without going into full detail, I found that there was nearly no change in the pattern over the last nine years. In fact, if anything, the Committee's decisions at the early end of the period were very slightly more determined by the standards, and, when the Committee had to make choices among teams on the no "Yes" and no "No" list, it had a very slightly narrower array of teams to choose from. Essentially, however, I do not see there having been any change over the years.

    What this suggests, to me, is that the Committee's changes in the bonus/penalty adjustment amounts and structure, and thus in teams' ratings and ranks, have had no measurable effect on the Committee's decisions.

    This suggests, though it doesn't prove, something more significant. It suggests that when the Committee gets to the point of making its hard decisions -- which of a small group of teams will receive the last of a few available particular seeds or the last of a few available at large selections -- teams' exact RPI ratings and ranks don't make much difference.

    If this indeed is true, it's good. The RPI -- as is true of any mathematical ratings system -- cannot effectively rank teams in close proximity to each other. The Committee may or may not understand this, but its decisions over time suggest that it does and that it must be looking at factors other than the RPI and the RPI ranks when it gets to making its tough decisions.

    Coming Next: Conference Ranks and Conference Standings and How They Affect the Decision-Making Process
     
  3. cpthomas

    cpthomas BigSoccer Supporter

    Conference Ranks and Conference Standings and How They Affect the Decision-Making Process

    So, how do conference ranks, in terms of their average ARPIs, and teams' conference standings figure in the Committee's decision-making process?

    As a starting point, here's the Committee's general statement of their at large selection criteria (from the 2015 Pre-Championship Manual):

    "Selection Criteria
    "The following criteria shall be employed by a governing sports committee in selecting participants for NCAA championships competition [Bylaw 31.3.3; Criteria for Selection of Participants]:

    "* Won-lost record;

    "*Strength of schedule; and

    "*Eligibility and availability of student-athletes for NCAA championships.​

    "In addition to Bylaw 31.3.3, the Women's Soccer Committee has received approval from the NCAA Division I Championships/Sports Management Cabinet to consider the following criteria in the selection of at-large teams for the soccer championship (not necessarily in priority order):

    "Primary Criteria

    "* Results of the adjusted Rating Percentage Index (RPI);

    "* Results versus common opponents; and

    "* Head-to-head competition.
    "Secondary Criteria

    "If the evaluation of the primary criteria does not result in a decision, the secondary criteria will be reviewed. All the criteria listed will be evaluated.
    "* Results versus teams already selected to participate in the field (including automatic qualifiers with RPI of 1-75)

    "* Late season performance -- defined as last eight games including conference tournaments (strength and results).​

    "Recommendations are provided by regional advisory committees for consideration by the Women's Soccer Committee. Coaches' polls and/or any other outside polls or rankings are not used as a criterion by the Women's Soccer Committee for selection purposes."
    You may be noticing that this description says nothing about the Committee's considering conference strength and conference standing.

    But, take a look at how the Committee describes the RPI:

    "RPI. The committee uses the RPI (Rating Percentage Index), a computer program that calculates the institutions' Division I winning percentage (25 percent), opponents' success (50 percent), opponents' strength of schedule (25 percent) plus a bonus/penalty system. When evaluating the RPI, the committee may consider comparing data of individual teams, including, but not limited to, overall record, Division I record, overall RPI rank, non-conference record and RPI rank, conference regular-season record and conference tournament results. The RPI shall be used as a selection tool. The bonus/penalty structure for the RPI includes a bonus for non-conference wins or ties against the top 80 teams in the RPI and a penalty for a non-conference loss against the bottom 80 teams in the RPI."
    Thus the Committee considers "conference regular-season record and conference tournament results." I doubt the Committee really thinks of this as part of its consideration of the RPI, except to the extent that it uses conferences' average RPIs as a basis for ranking them.

    I think there's an important point, which is that the Committee doesn't just consider conference regular-season record. It doesn't just consider conference tournament results. It considers both.

    Also, the statement is that the Committee can consider "conference regular-season record," not that it considers "conference regular-season standing." Although I assume the Committee considers conference regular-season standing as a way to consider conference regular-season record, I think this language pushes the Committee in the direction of looking at who a team's conference opponents were, for teams from conferences that don't play full round robins. Thus for a non-full-round-robin conference, when the Committee looks at a conference team's regular-season standing, I expect this includes a look at the conference opponents the team played. If the team played a relatively weak conference schedule, for example, then the Committee may discount the team's conference standing to whatever extent it believes appropriate.

    One of the things I've wondered about is how the Committee treats teams that are tied in standings for a conference's regular season or that lose in the same round of the conference tournament. For example, if two teams tie for first in the regular season standings, does the Committee treat their accomplishments as equal to the accomplishment of a team that was outright winner of its conference regular season? I doubt that the Committee does that. Rather, my guess is that the Committee considers the teams that tied to have accomplished something less than an outright conference regular season winner. The way I evaluate such a regular season tie is to treat the two teams as occupying the 1 and 2 regular season standing positions. I add 1 + 2, divide by the number of teams tied (2), and come up with a 1.5 regular season standing for each team, as compared to a 1.0 position for a team that won its conference regular season outright. To me, this seems like a reasonable, and most likely to be accurate, way to think about a tie in the standings.

    I'm guessing that the same is true of how the Committee thinks about conference tournament losses. Suppose a conference has an eight-team tournament. Then there are four losing quarter-finalists, who occupy tournament positions 5 through 8. So, I add 5 + 6 + 7 + 8 to get 26 and divide by the number of losing quarter-finalists - 4 - to get 6.5 as the assigned conference tournament standing for each losing quarter-finalist. For the losing semi-finalists, their standing is 3.5. The losing finalist is 2 and the tournament champion is 1. This seems like a reasonable and most likely accurate way to score conference tournament results, and I suspect this is more or less how the Committee thinks about it.

    This leaves the question whether the Committee considers the regular season standings and conference tournament results separately or as a single factor. Although the Committee could do either, my guess is that the Committee considers them together. The way I do this is to add a team's regular season standing position to its conference tournament finishing position and divide by two.

    Thus, using Wisconsin in 2015 as an example, it finished tied for first with Penn State for the conference regular season, so for the regular season its finishing position was 1.5. It lost in the quarter-finals of the conference tournament, so its tournament finishing position was 6.5. Adding these two together and dividing by two, I assign it a combined conference standing position of 4.0. But, I also know that Wisconsin played a relatively easy conference schedule -- it did not play Rutgers and it did play the bottom three teams in the conference standings. Thus even its combined standing position of 4.0 could be a little high.
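    Here is a minimal sketch, in Python, of the averaging approach just described, restating the Wisconsin example; the function names are mine, not anything the Committee uses.

```python
# Minimal sketch of the standing-averaging approach described above.

def tied_position(positions):
    """Average the positions a group of tied teams occupies.
    E.g., two teams tied for first occupy positions 1 and 2, so 1.5 each."""
    return sum(positions) / len(positions)

def combined_standing(regular_season_pos, tournament_pos):
    """Weight the regular season and conference tournament finishes equally."""
    return (regular_season_pos + tournament_pos) / 2

# Wisconsin 2015: tied for first in the regular season; lost in the
# quarter-finals of an eight-team conference tournament.
reg_season = tied_position([1, 2])         # 1.5
tournament = tied_position([5, 6, 7, 8])   # 6.5 for a losing quarter-finalist
print(combined_standing(reg_season, tournament))  # 4.0
```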

    There's an interesting question of why conferences choose to play conference tournaments at all, especially the power conferences. For most power conference teams, playing in a conference tournament will not help in the Committee's decision-making process. For the tournament champion, however, and possibly for the second place team, the tournament results may help in the Committee's decision-making. Thus conferences that are oriented, in their scheduling, towards the NCAA Tournament need to decide which is more important -- positioning the conference champion, and possibly the runner up, well for the NCAA Tournament, or avoiding damage to the conference's other teams in the NCAA Tournament decision-making process. (When thought about in this context, the ACC's move to a four-team conference tournament makes sense. It helps the conference tournament winner, and possibly the runner up, in relation to the NCAA Tournament, but it reduces the opportunity for damage to other ACC teams.)
     
  4. cpthomas

    cpthomas BigSoccer Supporter

    #4 cpthomas, Dec 29, 2015
    Last edited: Dec 29, 2015
    I'm taking a break from the tedium of work on my "standards" system to illustrate how the different "factors" or "criteria" the Women's Soccer Committee uses compare to the Committee's actual decisions over the 2007 through 2015 period. Here's a table that is a good resource, followed by an explanation:

    [Image: table of factor averages at each of the Committee's decision levels, 2007 through 2015]

    This table shows factor averages for each of the Committee's decision levels. Using the ARPI factor to illustrate, the average ARPI for 1 Seeds has been 0.6842. As you step down that ARPI column, you can see the average ARPI for each decision level. The "35-60 At Large" row is the average ARPI for teams with ARPI ranks in the 35 to 60 range to which the Committee awarded at large selections. I chose 35 as the low end of the range because over the study period every team with an ARPI rank of 34 or better received an at large selection. I chose 60 as the upper end because over the study period no team with an ARPI rank of 60 or poorer has received an at large selection. In fact, the poorest-ranked team to receive an at large selection has been ranked 57, but I use 60 in my calculations to allow for future Committee wiggle room. The 35 to 60 group thus represents a quite generous "bubble" from which the Committee has made its final at large selections over the last decade. The "35-60 Not At Large" row is the average ARPI for teams in the 35 to 60 range that did not receive at large selections.

    The "Top 60 Results Score," "Head to Head v Top 50," "Common Opponents with Top 50," and "Poor Results" columns all use scoring systems I've developed for each of the "Results Against Teams Already Selected," "Head to Head Results," "Results Against Common Opponents," and "Results Over the Last Eight Games" factors. The scoring systems are based on my best interpretation of what the factors mean, how I think the Committee applies them, and what's practical from a computer programming perspective.

    The "Conference Standing" column is based on assigning team standing by averaging the team's regular season and conference tournament finishing positions.

    As you can see, with two exceptions the averages in the columns stair-step down in a way consistent with expectations about the Committee's decisions. They suggest that the Committee, at least from a high oversight perspective, is making decisions consistent with the factors. The two exceptions are conference standing and conference rank, which is not surprising. Those factors are not intended to be looked at in isolation; rather, they need to be looked at in combination, and at this point, at least, I do not have a system for scoring them in combination.

    I've highlighted in yellow what stands out to me in this table. If you compare the "At Large" and "Not At Large" numbers for each factor, you can get a sense of how easy or not easy it is to distinguish -- at least on average -- the "Yes" and "No" decisions the Committee has made based on that factor. What stands out about the yellow highlighted "Top 60 Results Score" and "Top 60 Results Rank" numbers is that the numbers are starkly different for the at large selections and the at large "rejections." For the "Top 60 Results Score" numbers, this may in part be due to my scoring system for those results, which rewards results against highly ranked teams very highly as compared to results against less highly ranked teams (the scoring system is more geometric than arithmetic). But for the "Top 60 Results Rank" column, the teams simply are ranked 1 to 60, so the "scale" is the same as for the ARPI and ANCRPI rank columns. What the table thus shows is that the Top 60 Results system actually does a much better job of correlating with the Committee's at large decisions than any of the other factors. This is consistent with what I've thought I'd been seeing over the past years -- that teams' Top 60 Results are a key factor in the Committee's at large decisions -- and appears to confirm it quite clearly.

    I've suggested elsewhere, however, that in 2015 the Committee appears to have downgraded the importance of Top 60 Results. Here's another table that gets at that possibility, followed by an explanation:

    [Image: table of ARPI ranks and Top 50 Results ranks for teams not receiving at large selections, 2007 through 2015]

    This table shows the ARPI ranks and Top 50 Results ranks, for teams not receiving at large selections, for the 2007 through 2015 seasons. It's sorted in the order of teams' Top 60 Results ranks, from the best to the poorest. The entire table is much longer, but I've cut it off at the team with a Top 60 Results rank of 25 since that's all I need to make my point. In the left hand column, the red color coding means the team did not receive an at large selection because its winning percentage was less than 50% -- for purposes of this post, that team is irrelevant. The black coding means that team was an at large selection candidate but did not receive a selection. The empty space coding means the team was an automatic qualifier -- also making the team irrelevant for purposes of this post.

    As you can see, after excluding the irrelevant teams, the two best teams not to receive at large selections, in terms of Top 50 Results, over the 2007 through 2015 seasons had Top 50 Results ranks of 14 and 15. In what years and to whom did those "no at large selection" decisions apply? This year -- 2015 -- to Illinois and Michigan respectively. Furthermore, this is not a matter of the Committee simply not having faced any teams in the past with such good Top 50 Results ranks but ARPI ranks of 46 (Michigan) and 53 (Illinois). Indeed, over the nine-year study period, there have been 20 teams with poorer Top 50 Results ranks than Michigan, and with ARPI ranks equal to or poorer than Michigan's, that nevertheless received at large selections. And there have been two such teams on the poorer side of Illinois.

    I believe the numbers show that the Committee indeed downgraded the importance of Top 50 Results in making its 2015 at large selection decisions: Washington at 37 in Top 50 Results rank (47 ARPI), Long Beach State at 39 (43), UNC Wilmington at 40 (42), and Loyola Marymount at 54 (41) all received at large selections rather than Michigan at 15 (46) and Illinois at 14 (53). Perhaps Illinois is arguable due to its 53 ARPI rank, but Michigan isn't. Whether the Committee did this downgrade knowingly and intentionally, I don't know. Whether this represents a new path intended by the Committee, I don't know. What the numbers show, however, is that it's a break with the Committee's decision pattern over the last decade.

    There's nothing inherently wrong with the Committee making a change in what it emphasizes, from an NCAA perspective, so long as its decisions are guided by the factors. The Committee can weight each factor however it wants. That is a reality of bracketology. Nevertheless, it seems like good judgment would have the Committee advise the schools if it is intending to make a change in emphasis, or at least if it has made a change, so that the schools know where the goal posts are for NCAA tournament at large selection (and seeding) purposes.
     
  5. cpthomas

    cpthomas BigSoccer Supporter

    I've wondered about what information the NCAA staff gives, and on request can give, to the Committee to use in its decision-making process. We know that the Committee has a lot of detailed information about the current season, as a lot of what the Committee has is available at the NCAA's RPI Archive. The information in the Archive is by year. (Coincidentally, it goes back to 2007, which is the same year that my data go back to.)

    We don't know if the Committee has access to the detailed information organized in a different format than the Archive's reports for each year. If I were a Committee member, I'd want to see at least some of the information organized by factor over time. I'll use the ARPI and ANCRPI ratings and ranks to illustrate. Over the 9 years from 2007 through 2015:

    At Large In:

    Every team with a rating or rank equal to or better than the following received an at large selection. These numbers are based on the current RPI formula, as applied retroactively to those seasons that used a different formula, which is what I would like to see if I were a Committee member:

    ARPI: .5987

    ARPI Rank: 34

    ANCRPI: .6599

    ANCRPI Rank: 11

    At Large Out:

    Every team with a rating or rank equal to or poorer than the following did not receive an at large selection:

    ARPI Rank: 58

    ARPI: .5704

    ANCRPI Rank: 129

    ANCRPI: .5089

    If I'm a Committee member and have access to this information, I have some pretty good indicators of the teams that should be in the at large "bubble." They're the teams with ARPI ranks between 35 and 57 and ratings less than .5987 but more than .5704; and with ANCRPI ranks between 12 and 128 and ratings less than .6599 but more than .5089. As you can see, the ANCRPI numbers aren't very helpful in defining the bubble, as too many teams are included. The ARPI numbers, however, narrow the group quite well: the ARPI rank numbers limit the group to 23 teams; and the rating numbers limit the group more, at least for 2015, where they limit it to 16 teams.
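    Here is a minimal sketch of how the historical "always in" / "always out" thresholds above could be applied to a season's rankings to pull out the ARPI-rank bubble; the team data and function name are hypothetical.

```python
# Minimal sketch: apply the historical in/out thresholds described above
# to identify the at large bubble by ARPI rank. Team data are hypothetical.

ALWAYS_IN_RANK = 34    # rank 34 or better always received an at large selection
ALWAYS_OUT_RANK = 58   # rank 58 or poorer never received one

def bubble(teams):
    """Teams whose ARPI rank falls strictly between the in and out thresholds."""
    return [t for t in teams
            if ALWAYS_IN_RANK < t["arpi_rank"] < ALWAYS_OUT_RANK]

teams = [{"name": f"Team {i}", "arpi_rank": i} for i in range(1, 80)]
print(len(bubble(teams)))   # 23 teams: ranks 35 through 57
```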

    These numbers and ways of organizing them are just examples. There are various other numbers, combinations of numbers, and ways of organizing them that the Committee could find useful. Whether the Committee receives such numbers, organized in useful ways, or at least has the ability to receive them from the NCAA staff on request, I don't know.

    Here's an example of how a pair of numbers -- conference rank by average ARPI and conference average ARPI -- could be very relevant to the Committee's decisions and, indeed, could have been particularly relevant in 2015. As a review of the RPI Archive reports will show, these are numbers that are available to the Committee, in each year's Archive reports. In the Archive, however, the numbers are not organized in the potentially important way I'm about to show.

    For 2007 through 2015, I identified the conference from each year ranked #1 on that year's conference average ARPI list as well as the average ARPI for that conference. That's 9 conferences ranked #1. I then arranged them in order. Here's what I got:

    1. .6195 2011 ACC
    2. .6155 2009 Pac 12
    3. .6052 2008 ACC
    4. .6050 2012 ACC
    5. .6031 2010 ACC
    6. .5971 2007 ACC
    7. .5964 2014 Pac 12
    8. .5930 2013 ACC
    9. .5875 2015 ACC
    If I'm a Committee member, making 2015 NCAA Tournament at large selections, I'm interested to see that the #1 conference (ACC) has the poorest average ARPI of any #1 conference over the last 9 years. And, I'm interested in seeing similar lists for the #2, 3, 4, etc. conferences. I'm wondering if these lists will reveal any important information for this year's decision-making process. If I can put together, or the staff can give me, these additional lists, here's what I'll see in terms of where the 2015 conferences fit from an historic perspective:

    #2 ranked conference: .5809 Big 10 Ranked #9 (weakest) of 9 years
    #3 ranked conference: .5713 SEC Ranked #7 of 9
    #4 ranked conference: .5704 Pac 12 Ranked #4 of 9
    #5 ranked conference: .5655 American Ranked #3 of 9
    #6 ranked conference: .5606 Big 12 Ranked #3 of 9
    #7 ranked conference: .5567 Colonial Ranked #1 of 9
    #8 ranked conference: .5380 Big West Ranked #5 of 9
    #9 ranked conference: .5322 Big East Ranked #4 of 9
    #10 ranked conference: .5300 West Coast Ranked #2 of 9

    What these numbers suggest is that in 2015 conference strength, for the top third of conferences, was quite compressed. The top-rated conferences were weaker than usual and the lower-rated conferences were stronger than usual.

    So, as a Committee member, I now want to see how, for each year, the rating differences between the top-rated and lower-rated conferences compare. Here's what I come up with for the rating difference between the #1 and #7 rated conferences:

    2015 .0308
    2013 .0444
    2010 .0521
    2007 .0538
    2012 .0559
    2008 .0585
    2014 .0671
    2009 .0676
    2011 .0786
    And, here's what I come up with for the difference between the #1 and #10 rated conferences:

    2015 .0545
    2007 .0693
    2008 .0751
    2013 .0782
    2014 .0806
    2010 .0808
    2012 .0893
    2009 .0933
    2011 .1084
    I say to myself, "Wow!" The differences in conference strength were much smaller this year than in the past. But, I want to give them some more context, so I decide to look at what I now consider to be the five "power" conferences: ACC, Pac 12, Big 10, SEC, and Big 12. (I leave off the Big East and American because I think their split up a couple of years ago will confuse things, and I also regard them as a level short of the five power conferences.) For each year, what is the rating difference between the highest rated of the five power conferences and the poorest rated?

    2015 .0269
    2014 .0327
    2013 .0340
    2007 .0346
    2012 .0348
    2010 .0521
    2008 .0540
    2009 .0676
    2011 .0726
    Now, I look at the rating difference between the #1 (ACC) and #7 (Colonial) conferences in 2015 and see it was .0308. That difference would be #2 on the above list. In other words, in 2015 the Colonial was closer in strength to the top ranked conference than the weakest of the five power conferences was in every year prior to this one. Or, to put it differently, in 2015 the Colonial had the strength we are used to seeing from power conferences. And, I look at the difference for the #10 conference (West Coast) and see it was .0545. For two of the years, this would put it, too, at the level of power conferences.
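    As a minimal sketch, the spread computations above can be reproduced from the 2015 conference averages already listed; the data structure and rounding are my own illustration.

```python
# Minimal sketch: conference-strength spreads from the 2015 averages above.

conf_avg_2015 = {   # top 10 conference average ARPIs, 2015
    "ACC": .5875, "Big 10": .5809, "SEC": .5713, "Pac 12": .5704,
    "American": .5655, "Big 12": .5606, "Colonial": .5567,
    "Big West": .5380, "Big East": .5322, "West Coast": .5300,
}

ranked = sorted(conf_avg_2015.values(), reverse=True)
print(round(ranked[0] - ranked[6], 4))   # #1 vs #7 spread: 0.0308

power_five = ["ACC", "Pac 12", "Big 10", "SEC", "Big 12"]
power_vals = [conf_avg_2015[c] for c in power_five]
print(round(max(power_vals) - min(power_vals), 4))   # 0.0269 in 2015
```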

    As a Committee member, if I have this information, I have to decide what to make of it. But, for at least this year, it's looking to me like being a team from the #1 or #2 conference, or from a power conference, does not mean as much as it has in prior years. And, being from the #7 or #10 conference means more than it has in prior years. So, when I'm comparing teams from different conferences and looking at where they finished in their conference standings, among the teams from Top 10 conferences, I may place more emphasis on conference finish position and less emphasis on conference rank than others placed in prior years.

    And, if I'm going to be on the Committee again next year, I'm wondering if 2015 is an isolated case or if it is an indicator of a significant change in the conference parity world.

    Of course, this kind of thinking only applies if the Committee has this kind of information in front of it, organized in a way similar to the way I've arranged it above.
     
  6. cpthomas

    cpthomas BigSoccer Supporter

    #6 cpthomas, Jan 9, 2016
    Last edited: Jan 9, 2016
    As I've identified factor-related "standards" that all of the Committee's decisions have been consistent with over the last nine years, I've looked at each factor by itself, but also at the factors in pairs. When I first looked at paired factors, I looked at teams' ARPI ranks paired with each of the other factors. I did not, however, look at teams' ANCRPI ranks paired with each other factor. (Identifying standards for paired factors is a very painstaking, time-consuming, and tedious process.) I've now finished adding standards for ANCRPI ranks paired with the other factors.

    The way paired factors work is based on how I think a Committee member's mind might work: "Team A does well on factor X and does well also on factor Y. Team B does equally well on factor X, but not as well on factor Y. So I will give a decision-making preference to Team A." Or, conversely, "Team A does poorly on factor X and also not very well on factor Y. Team B does equally poorly on factor X, but does better on factor Y, so I will give a decision-making preference to Team B." In other words, the Committee member is looking at paired factors to distinguish between teams. This is part of the essential Committee need to distinguish among teams.

    To illustrate how this works in a real-life case, I'm again going to use the example of Michigan's not getting an at large selection this year. As I've posted about previously, Michigan had outstanding results against the Top 50 teams, putting it in the #15 rank position for the "results against teams already selected" factor under my scoring system. Based on the Committee's decisions from 2007 through 2014, this would have given it an at large selection. It didn't get one, and I opined that this seemed like a departure from past Committee practice and was possibly due to the Committee engaging in conference or region balancing. In my opinion, this would have been improper, as I am not aware of any NCAA-established factor that allows conference or region balancing.

    In identifying standards for ANCRPI rank paired with other factors, however, I found another possible explanation for Michigan not getting an at large selection. When I identified standards for ANCRPI rank paired with teams' standings within their conferences (with standing being the average of a team's regular season and conference tournament finishing positions), I found the following sequence of standards; a team that has met any one of them never has received an at large selection:

    ANCRPI Rank >= 82 (meaning, poorer than or equal to 82), and Conference Standing >=4.00 (meaning, poorer than or equal to 4.00)

    ANCRPI Rank >=102; and Conference Standing >=2.50

    ANCRPI Rank >=113; and Conference Standing >=2.25
    Note that these standards are irrespective of conference strength. Teams with these or poorer ANCRPI rankings, with these or poorer conference standings, never have received at large selections.

    So, where does Michigan fit in relation to these standards? Its ANCRPI Rank is 118 and its Conference Standing is 4.25. Thus it failed in relation to these standards, and it failed by a good distance.
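    A minimal sketch of checking these paired "No" standards against Michigan's numbers; the thresholds are the ones listed above, and the function name is mine.

```python
# Minimal sketch: the paired ANCRPI-rank / conference-standing "No At Large"
# standards listed above. For both values, a higher number is poorer.

NO_AT_LARGE_STANDARDS = [
    (82, 4.00),    # ANCRPI rank >= 82 AND conference standing >= 4.00
    (102, 2.50),
    (113, 2.25),
]

def meets_no_standard(ancrpi_rank, conference_standing):
    return any(ancrpi_rank >= rank_cut and conference_standing >= standing_cut
               for rank_cut, standing_cut in NO_AT_LARGE_STANDARDS)

print(meets_no_standard(118, 4.25))   # Michigan 2015 -> True
```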

    This casts the Committee's 2015 denial of an at large position for Michigan in a different light. Yes, Michigan had outstanding results against the Top 50, results that said it should get an at large selection. But also, Yes, Michigan had very poor non-conference results, especially when considered together with its Conference Standing, with the combined results saying it should not get an at large selection. Michigan's record thus had some big internal inconsistencies. It put the Committee in a situation it hadn't faced at least over the previous eight years. So, the Committee had to make a choice: go with Top 50 results or go with non-conference results and conference standing. A choice to go with non-conference results paired with conference standing would have denied Michigan an at large position.

    Of course, whether the Committee went through this kind of reasoning, I don't know. But if it did, there is nothing in the factors that would have prevented the Committee from allowing Michigan's non-conference results and conference standing to override its "results against teams already selected."

    The point of this post is not really to analyze the Committee's 2015 decision as to Michigan. The point is to illustrate how the Committee can look at factors if it wants to and to show that the process of evaluating teams based on the factors can be complicated and can require each Committee member to decide which factor or factors, either alone or in combination, he or she wants to emphasize.
     
  7. cpthomas

    cpthomas BigSoccer Supporter

    I've completed an upgrade of the "standards" system I've developed and now comes the fun part, seeing the results and reporting on them. In this and my next post (which will come in a few days), I'm going to report on two things:

    1. This post: How the Committee's seeding and at large selection decisions over the 2007 through 2015 seasons match up with the standards. This mostly will show whether the changes the Committee made in the RPI over that period have had an identifiable effect.

    2. Next post: Which standards appear to be the most "powerful" in the Committee's decision-making, meaning which standards match up the most with the Committee's decisions. From a bracketology perspective, this is the most interesting project I've worked on.
    As a reminder, each standard is based on one of the factors, or a pair of the factors, the Committee is obligated to follow in making its at large selections. For each NCAA factor, I assign a value to a team. For a factor such as the Adjusted RPI, it's easy: I simply assign the team its Adjusted RPI value. For other factors, I need a score to assign to the team and, in some cases, have had to develop my own scoring system. For example, for the factor "Results Against Teams Already Selected (and Having an Adjusted RPI Rank of 75 or Better)," I use a surrogate, Results Against Top 50 ARPI Teams, with a weighted scoring system I use to determine the score to assign to each Top 60 team for that factor.

    What I am looking for is "yes" and "no" standards, for at large selections and for each level of seeds (#1 through #4). If a team meets a "yes" standard, it means every team meeting that standard always has gotten a positive decision from the Committee. Similarly, if a team meets a "no" standard, it means every team meeting that standard always has gotten a negative decision from the Committee. This is easy for single-factor standards. For two-factor paired standards, it is more difficult. To meet a "yes" or "no" standard that pairs two factors, the team must meet a threshold for each factor. For example, I have a standard that pairs teams' Adjusted RPIs and their Results Against Top 50 Teams. To meet a "yes" standard, the team's ARPI must be equal to or better than a particular ARPI value and, at the same time, its Results Against Top 50 Teams score must be equal to or better than a particular Results Against Top 50 Teams score.

    Altogether, I have roughly 50 either individual factors or paired factors that I look at. For each set of paired factors, there can be a number of standards associated with the pair. The end result is that the total number of standards my system now uses is nearly 1,400.

    The bottom line of all this is that the standards are numerical values based on the NCAA's mandatory factors; and the entire set of standards is consistent with every decision the Committee has made from 2007 through 2015.

    With that in mind, here's a table that shows what happens when I apply the standards to the data over the 2007 through 2015 period, in terms of how the standards "match" the Committee's decisions. Following the table, I'll give an explanation of what you're looking at:

    [Image: table showing, by year, how the standards match the Committee's decisions, 2007 through 2015]

    The table is pretty simple. If you look at the Standards Effect column, you'll see the first data row is "1 Seed Yes." If you look at the second column, you'll see it's for the 2007 bracket formation. The number in the 1 Seed Yes/2007 box is 3. The way the standards work, a team has only three possible results: it will meet one or more "yes" standards and no "no" standards; it will meet no "yes" standards and one or more "no" standards; or it will meet no "yes" and no "no" standards.

    What the number 3 in the 1 Seed Yes/2007 box means is that when I apply the standards to the 2007 data, there are three teams that end up meeting one or more "yes" standards for a #1 seed (and thus no "no" standards).

    The second data row is "1 Seed?" In the 1 Seed?/2007 box is the number 1. What this means is that there is one team that ends up meeting no "yes" and no "no" standards. This makes it a potential #1 seed.

    Although the table doesn't show it, since only four teams show up in the 1 Seed Yes and 1 Seed? boxes, then every other team meets at least one "no" standard for a #1 seed.

    Thus, with three teams being in the 1 Seed Yes box and one team in the 1 Seed? box, and with all other teams meeting at least one "no" standard for a #1 seed, for 2007 the standards have identified all four of the #1 seeds. This is represented by the 4 in the "1 Seed Decided" box.

    If you go across the 1 Seed Decided row, year after year, what you'll see is that for each year from 2007 through 2015, the standards have identified all four of that year's #1 seeds. At the far right of the table, the last three columns are "Total Decided," "Total Possible," and "% Decided." For the #1 seeds, the "Total Decided" column shows the total number of #1 seeds that the standards have identified, in this case 36 (four per year, for each of 9 years). The "Total Possible" column shows how many seeds (or at large selections, when you get farther down on the table) there were over that 2007 through 2015 period. And, the "% Decided" column shows the percentage of the total possible that the standards identified. Thus for #1 seeds, the standards identify 100%, or all, of the possible #1 seeds; for #2 seeds, the standards identify 91.7%; and for the #3 and #4 seeds, they identify 75%. Similarly, for at large selections, the standards identify 96.4%. One way to think about this is that the areas where the Committee has to make its hard decisions are those where the standards can't identify a decision. In those areas, the Committee must compare teams that meet no "yes" and no "no" standards, typically a very small group of teams.

    One other row is of particular interest, the bottom row. This row shows the numbers of seeds -- out of 16 each year -- that the standards are able to identify.

    Apart from what the table shows about the standards' ability to identify seeds and at large selections, there is something else about the table that I find interesting:

    Over the 2007 through 2015 period, the Committee has made sequential changes in the bonus/penalty part of the Adjusted RPI formula. When I set up my updated version of the standards system, I wanted it to be useful in relation to the formula the Committee currently uses. So, I calculated, and used, ARPI and Adjusted Non-Conference RPI ratings for all the years in that period using the current formula. One of the things I was interested to see was if there would be differences in how well the standards performed early in the period as compared to how well they performed late in the period. Specifically, given that early in the period the Committee was using a "different" rating system than it currently uses, and since the standards are based in part on the rating system the Committee currently uses, one would wonder about whether the standards would perform more poorly when applied to the early period.

    When I look at the table, however, the standards appear to perform the same over the course of the 2007 through 2015 period. This suggests to me that the changes the Committee has made to the Adjusted RPI have been inconsequential. Essentially, they appear to have been only window dressing.

    On reflection, this is not surprising. Simply put, the bonus/penalty structure is not a powerful part of the RPI structure. In my opinion, this is good. Fundamentally, any mathematical rating system such as the RPI (and the same is true for other systems) is fairly crude. Anyone who imagines that by attaching little doodads to the system, he or she will significantly improve the system, simply doesn't understand the limitations of such systems. Yes, they can be used to make some crude judgments -- for example, which likely are the Top 60 teams and which likely are the top 30 to 35 teams out of that 60. But, beyond that, there's a need to resort to other factors to parse out the best of the remaining teams in the field of 60.

    Or, if I want to be less generous, the Committee's work on revising the bonus/penalty formula has been a waste of time, has created false expectations (both from those "for" and from those "against" the revisions), and has not really changed anything. When it gets to the hard decisions the Committee has to make, other factors than the RPI drive those decisions. I'll write more about that in my next post, after I've done some more work, but the above table seems to suggest that the formula changes have not really affected anything.
     
  8. cpthomas

    cpthomas BigSoccer Supporter

    This post will address the question: Which factors (and the standards that match up with them) appear to be the most "powerful" in the Committee's decision-making, meaning which factors match up the most with the Committee's decisions?

    As a beginning point, I'll start with the ARPI considered as a factor by itself. Over the 2007 through 2015 period, all teams ranked 34 or better by the ARPI, that were not already Automatic Qualifiers, received at large selections. The standard for this is:

    At Large Yes: ARPI Rank <=34
    In addition, no teams ranked 58 or poorer by the ARPI received at large selections. The standard for this is:

    At Large No: ARPI Rank >=58
    This makes ARPI Rank seem like a really important factor. It is, but so far as these two standards are concerned, what it really means is that ARPI Rank, by itself, is capable of taking care of the easy at large decisions.

    Based on the above, the total pool of potential at large selections is teams ranked 1 to 57 by the ARPI. On average from 2007 through 2015, of the top 57 teams, 15.1 were Automatic Qualifiers. Thus the real pool of potential at large selections isn't 57 teams, but rather, on average, is 41.9 teams. From these teams, the Committee needed to make 34 selections from 2007 through 2012 and 33 selections from 2013 to 2015.

    Of the teams ranked 1 through 34, on average 10.2 have been Automatic Qualifiers. Thus with the ARPI itself identifying the top 34 teams as always getting at large selections, if not Automatic Qualifiers, the ARPI actually, by itself, can identify 23.8 at large selections, on average, each year. This left 10.2 additional teams to get at large selections from 2007 through 2012 and 9.2 from 2013 through 2015.
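    A minimal sketch of this pool arithmetic, using the averages quoted above (the underlying per-year counts aren't reproduced here):

```python
# Minimal sketch of the at large pool arithmetic, using the quoted averages.

avg_aq_in_top_57 = 15.1    # average Automatic Qualifiers ranked 1 through 57
avg_aq_in_top_34 = 10.2    # average Automatic Qualifiers ranked 1 through 34

at_large_pool = 57 - avg_aq_in_top_57           # 41.9 candidate teams per year
decided_by_arpi_rank = 34 - avg_aq_in_top_34    # 23.8 at larges identified
left_2007_2012 = 34 - decided_by_arpi_rank      # 10.2 remaining choices
left_2013_2015 = 33 - decided_by_arpi_rank      # 9.2 remaining choices
print(round(at_large_pool, 1), round(decided_by_arpi_rank, 1),
      round(left_2007_2012, 1), round(left_2013_2015, 1))
```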

    In the ARPI Rank 35 to 57 range -- bubble territory from an ARPI Rank perspective -- on average there are 4.9 Automatic Qualifiers each year. Since there are 23 teams in the range, this means that the actual bubble is 18.1 teams, on average. The question is, which factors are the most "powerful" in identifying the teams to get at large selections from this group.

    Here is a table that provides information in response to this question. An explanation follows the table:

    [Image: table of factors and paired factors with "In" and "Out" counts for the bubble group, sorted by the "In" column]

    Each row of the Factors column identifies either an individual NCAA factor (or my surrogate for a factor) or two paired factors. The "In" column states how many teams, from the bubble group, the individual factor or the paired factors identified as "yes" for getting an at large selection on average each year. The "Out" column states how many teams, from the bubble group, the individual factor or paired factors identified as "no." This particular table is sorted in order of the "In" column to show which factors are best at identifying which teams will get at large selections.

    The table shows that teams' ARPI Ranks paired with their Top 50 Results Ranks are the best at identifying which bubble teams will get at large selections. That pairing, by itself, was able to identify 5.8 out of the 10.2 teams to be selected from the bubble in 2007 through 2012 and 5.8 out of the 9.3 teams to be selected from 2013 through 2015. What this suggests is that, whatever the Committee members may think they're basing their decisions on, what they are most influenced by is the combination of teams' ARPI ranks and their results against Top 50 teams.

    It may be helpful here to provide another table, which shows how the factor pairs work. The table is for the ARPI Rank/Top 50 Results Rank pairing:

    [Image: table of the ARPI Rank / Top 50 Results Rank paired standards]

    Since I'm posting about At Large selections, look at the bottom two rows. Using the At Large Yes Standard row as an example, the row says that:

    A team with an ARPI Rank of 32 or better and a Top 50 Results Rank of 52.5 or better always has received an at large selection.

    Moving right along that row, a team with an ARPI Rank of 33 or better and a Top 50 Results Rank of 50.0 or better always has received an at large selection.

    And so on across the row.
    Similarly, using the At Large No Standard row as an example, the row says that:

    A team with an ARPI Rank of 35 or poorer and a Top 50 Results Rank of 57.0 or poorer never has received an at large selection.

    And so on across the row.
    So, when I say that this pairing of factors has been able to identify 5.8 at large selections per year, it means that the standards set out in the above table in the At Large Yes Standard Row, as a group, have identified 5.8 at large selections per year.
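    Here is a minimal sketch of how a "yes" row of paired cutoffs like the one above can be evaluated; only the two pairs quoted in the text are included, so treat the list as illustrative rather than the full row.

```python
# Minimal sketch: evaluate an "At Large Yes" row of paired cutoffs.
# Only the two pairs quoted above are included; the real row has more.

YES_AT_LARGE_PAIRS = [
    (32, 52.5),   # ARPI rank 32 or better AND Top 50 Results rank 52.5 or better
    (33, 50.0),   # ARPI rank 33 or better AND Top 50 Results rank 50.0 or better
    # ...and so on across the row
]

def meets_yes_row(arpi_rank, top50_results_rank):
    """A better rank is a smaller number, so 'or better' means <= the cutoff."""
    return any(arpi_rank <= a_cut and top50_results_rank <= r_cut
               for a_cut, r_cut in YES_AT_LARGE_PAIRS)

print(meets_yes_row(33, 48.0))   # True: meets the second cutoff pair
print(meets_yes_row(36, 20.0))   # False against these two pairs alone
```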

    You can run down the table of Factors and see the order in which the Factors or paired Factors fall in identifying teams for at large selections.

    Here is the same Factors table, but this time sorted in order of At Large Out selections:

    [Image: the same factors table for the bubble group, sorted by the "Out" column]

    Once again, the ARPI and Top 50 Results factors are the most powerful, this time at excluding teams from at large selections. However, the L8G Factor also is powerful here when paired with Top 50 Results. For the L8G Factor, I use a surrogate that scores teams based on poor losses and ties. You might notice that the L8G/Top 50 Results Rank paired factor doesn't help with getting a team an at large selection, but it can keep a team from getting one. This makes sense: poor results can't help you, but they can hurt you. In addition, the ANCRPI paired with Top 50 Results can hurt you, significantly more than it can help. What this suggests is that the Committee is persuaded more to exclude teams due to poor non-conference results than it is to include them due to good non-conference results.

    Overall, I believe the two factor tables give a good indication of the relative persuasive value to the Committee of the different factors and paired factors, whether the Committee is aware of it or not. Among other things, parts of these tables point to what schools hoping for at large selections should be thinking about when scheduling. In particular, the tables suggest that schools should be thinking about the need for good results against Top 50 teams: if your team is in the ARPI range to be considered for an at large selection, the accumulation of good results against Top 50 opponents is the most powerful road to getting an at large selection, and the lack of good results against Top 50 opponents is the most powerful road to not getting one.

    I'll end with two additional tables, simply as a resource. They are like the two Factors tables above, one sorted by "In" and one sorted by "Out," but are for all non-Automatic-Qualifier teams ranked #1 through #57, rather than just for the bubble teams.

    [Image: factors table for all non-Automatic-Qualifier teams ranked #1 through #57, sorted by the "In" column]

    [Image: the same table sorted by the "Out" column]
     
  9. cpthomas

    cpthomas BigSoccer Supporter

    To encourage the growth in numbers of those who have an in depth understanding of the RPI and of the Women’s Soccer Committee’s work in forming the NCAA Tournament bracket (including at large selections and seeds), I’ve set up a new blog “RPI and Bracketology for DI Women’s Soccer Blogspace” as a “sister” to the “RPI for Division I Women’s Soccer” website. I hope those of you who have a serious interest in the RPI and bracketology will visit the blog, put it on your list of blogs to keep up with, and help get and keep it going. I’m thinking of it as a more technical and serious space than what we have here on BigSoccer. I hope it will be of particular interest to us few RPI “geeks” who have frequented BS, to college coaching staffs, and to college players and parents who want to know what the RPI is all about and how the Women’s Soccer Committee makes its decisions.

    And, along with that …

    I’ve just completed an update of all the pages of the RPI for Division I Women’s Soccer website. The website covers two basic subjects: (1) the RPI and how it works and (2) the NCAA Tournament at large selection, seeding, and bracket formation process, including the relationship between scheduling, the RPI, and the NCAA Tournament. It’s for fans, for college team coaching staffs, and for players and their parents.

    All information at the website now is current through the end of the 2015 season. In addition, I’ve re-written parts of many of the pages and all of some of them, making them more clear and concise and eliminating excess material that wouldn’t interest most readers. Among the pages I’ve re-written is the “RPI: Modified RPI?” page, on which I now have comparisons of a number of rating systems including the different RPI systems the Women’s Soccer Committee has used over the last nine years, several alternative rating systems I have developed, and three Elo-based systems. And, on the “NCAA Tournament: Scheduling Towards the Tournament” page, I’ve included a new tool for determining the likely outcomes of a team’s future games.

    The website now is a “finished” product, except for future annual data updates. I hope you’ll visit it and also let others interested in Division I women’s soccer know about it.
     
  10. cpthomas

    cpthomas BigSoccer Supporter

    I've just started what will be a series of posts at the new blog “RPI and Bracketology for DI Women’s Soccer Blogspace,” mentioned in the preceding post. The posts are about a simulation I've done for the entire 2016 season as well as the 2016 NCAA Tournament automatic qualifiers, at large selections, and seeds. If you want to check it out, the posts are in reverse chronological order, so for the simulation I suggest you scroll down to the second post from the top, which explains what the simulation is and how I did it.

    Feel free to add your comments, ask questions, etc., at the Blog. That's where I'll be posting most of my work related to the RPI and the NCAA Tournament bracket this year. You can subscribe to it, if you want. I'll be using it more than BigSoccer, for those subjects, because it's much easier for me to construct posts and insert tables and charts there, as compared to here on BS.
     
  11. cpthomas

    cpthomas BigSoccer Supporter

    Following the completion of the 2016 season, as I have to do each year, I've completed a review of the season's data in relation to teams that did and did not get #1, #2, #3, and #4 seeds and at large selections for the NCAA Tournament. I do this in relation to the patterns (standards) I've identified as consistent with all of these decisions by the Women's Soccer Committee over previous years, to update the patterns each year so that they are consistent not only with prior years' Committee decisions but also with the Committee's decisions for the year just completed. This year, I've also made some major changes in my method for identifying the patterns.

    I've now made revisions to the RPI for Division I Women's Soccer website to take into account the results of this year's patterns update. I've also posted an introductory article on the work at my RPI and Bracketology blog site, with links to the appropriate website pages. For those interested, I suggest you start at the blog site and go from there. Here's a link: http://rpiford1wsoccer.blogspot.com/2017/01/ncaa-tournament-bracket-simulations.html. From there, use the links in the article to get to the website pages that have tables showing the patterns the Committee's decisions have followed. I'm very happy with these tables; their contents are pure gold in terms of what the Committee has done historically.

    In addition, I've posted at the blog site a detailed article on the relationship between teams' conference standings and conference ARPI ranks and the decisions the Committee has made on seeds and at large selections over the last 10 years. I think the article provides some good insights into what a team can and can't rely on from the Committee, in relation to where the team finishes in its conference regular season standings and tournament. The Committee's decisions this year (DePaul) and last (Wisconsin), as well as in some earlier years, have been controversial. This article shows, among other things, where those decisions fit within the Committee's longer-term decision-making process. You can find this article here: http://rpiford1wsoccer.blogspot.com/2017/01/ncaa-tournament-bracket-formation-role.html.

    Have fun!
     
  12. Holmes12

    Holmes12 Member

    May 15, 2016
    Club:
    Manchester City FC
    #12 Holmes12, Jan 17, 2017
    Last edited: Jan 17, 2017
    cp, doesn't travel "clustering" play the most significant factor, by far, in choosing at-larges? It's good to be near Virginia/Carolinas, Florida and California. Like the CAA, for example. Something like the Horizon champ usually earns a regional Big Ten at-large to host them. Patriot/Ivy usually pair with BC or Penn State, etc. I remember seeing an interview with Ivy champ Harvard HC on selection day and he said they already knew they'd get either BC or Rutgers (I think that was the other one). It'd be nice to be a spring sport like lacrosse where most of it is post-exams so they can send teams around and invite at-larges from the continental interior. For the growth, they're going to have to figure out a way.
     
  13. cpthomas

    cpthomas BigSoccer Supporter

    No, travel clustering doesn't play any role at all in the at large selections. Seriously, the selections over the last 10 years all have been based on the criteria. Yes, there have been some controversial decisions -- though relatively few and, typically, most controversial for the fans whose teams didn't get selected -- but I've never seen any evidence that travel clustering played a role in the selections, nor can I remember anyone ever having claimed that.

    Once the teams are selected, however, and the #1, 2, 3, and 4 seeds have been identified from the 64 teams in the bracket, then travel clustering definitely occurs. In fact, since women's soccer is a "non-revenue sport," meaning that the NCAA Tournament doesn't make a profit, after the placement of seeded teams in the bracket, travel clustering is the driving force for bracket placement. The NCAA's rules actually require it. This is why you get situations like Harvard/Boston College or Northeastern/Boston College (or possibly Rutgers instead of BC) as a regular event; why you get the Horizon champ playing a nearby Big 10 school; and so on. This especially happens because, by rule, a team can't play an opponent from its own conference in the first two rounds. So, the mid-majors with only one school in the tournament end up playing opponents from top conferences. (It's also possible that once the seeds are selected, the #3 and #4 seeds are placed in appropriate seed positions but in portions of the bracket that are travel-cost conscious. I don't think this happens with the #1 and #2 seeds, although it could happen occasionally with some of the #2s.)
     
  14. cpthomas

    cpthomas BigSoccer Supporter

    I just have posted two new articles at the RPI and Bracketology for DI Women's Soccer Blogspace, on the topic of which factors are most important in the Women's Soccer Committee's seeding and at large selections decision-making for the NCAA Tournament. The articles are dated January 19 and January 20, 2017. The January 19 article has some introductory information and then addresses the #1 seeds. The January 20 article addresses the at large selections. Subsequent articles will address the #2, #3, and #4 seeds. For those of you interested in these topics, I suggest you start with the January 19 article and go from there. (The Blog posts the articles in reverse chronological order.)
     
