Discussion in 'Women's College' started by cpthomas, Oct 8, 2012.
Harsh second round for Santa Clara at Stanford. Every year. Aren't the Broncos a top ten RPI?
They did. Play at Denver in the first round.
Why is Baylor going on the road?
If a nonseeded team wins over a seeded team who would host the second round?
I'd guess it's because Baylor is unable to host for whatever reason. Some schools don't host when there's a home football game, and for others it's a facilities issue. The regional the following weekend is decided separately from the opening round, so Baylor could host that without hosting the first round game.
EDIT: It looks like Baylor feeds into the same bracket as North Carolina. If Baylor wins, that's where they're going for the second weekend.
Why is my reply part of the quote ??? More weird.
You forgot a [ at the end of the first part, so it reads /quote] (no left bracket).
I think you are right and I thought this happened to FSU last year.
Highest RPI (if they applied to host and the venue is accepted).
Baylor is the SEEDED team so you know they have the better RPI. Baylor is at 9 and Arizona St is at 39. This is a mystery. Maybe Baylor didn't put in the request to host? That happened to Dayton last year.
Colgate is hosting Rutgers, another anomaly as Rutgers has a 47 RPI and Colgate is 88. But there is a reason for this. The NCAA supposedly has a crazy rule that teams from NJ cannot host Tournament games. You cannot make this stuff up. Like the crazy rule that teams with American Indian mascots cannot host either.
Poor Santa Clara. Have an 11 RPI and they will get Stanford in the 2nd round.
Good games in the first round:
31 Georgetown at 30 VA Tech
27 Cal vs. 24 Pepperdine
And VERY pleased to see the selection committee skip over Dayton and Dartmouth. Neither team had any quality wins. Dartmouth looked good with a 37 RPI, but it's hard to see how they got that high an RPI when they had no wins over top-50 teams. I give praise to the committee for selecting teams like Rutgers, Auburn, and Miami - slightly lower in RPI but with better quality results. This should be noted by any coach doing future scheduling. Well done selection committee.
A dinnertime discussion.
My Guess - VT and Cal.
The people (or committee) that create and orchestrate this bracket, are freaking jackasses.
Thanks for adding to the discussion.
Got anything else?
I need someone to help me understand the venues for the second and third rounds on the 16th and 18th.
Will both games be played at the home of the highest seed or highest seed winner? Or will the teams travel between games? (I trust not!)
For example, if by some chance Idaho State upsets Stanford, and Radford gets by UNC, would those consequent 2nd and 3rd round games be played at Idaho State and in the mountains of Virginia?
Or are the sites already set regardless?
All three games that weekend in each pod will be played at the site of the 1 or 2 seed in each pod. If that team loses the first weekend, then the other seeded team (3 or 4) in the pod hosts. Seeded teams always get priority in hosting. My understanding is that the days of making a higher seed travel are over (it was part of the sales pitch for the new method). That hasn't been tested, however.
If the other seeded team also loses the first weekend, the highest-RPI team left hosts. Unseeded teams aren't guaranteed hosting, so travel restrictions may apply and a higher-RPI team may have to travel.
This assumes, of course, that the team applied to host and has a suitable home venue for tournament competition. That hasn't always been the case (e.g., FSU turned down hosting last year, and USC didn't have an acceptable venue one year).
I have just become a huge LIU Brooklyn fan.
It's because NJ now allows sports betting. You can disagree with the rule, but "crazy?" meh. NJ knew this would happen when they passed the law . . .
For what it's worth, here's the bastardized Massey Ratings for the teams in the tournament, ratings converted Elo-style to the old Albyn Jones scale.
50 point rating differential: .586 win pct
100 points: .667
150 points: .739
200 points: .800
250 points: .850
300 points: .889
400 points: .941
homefield typically between 50-60 rating points
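For anyone who wants to play with the conversion table above: the differentials are consistent with a simple logistic curve in which every 100 rating points doubles the odds. This is my own reverse-engineering of the table, not Massey's published formula, so treat it as a sketch:

```python
def win_pct(rating_diff):
    """Expected winning percentage for the higher-rated team, given the
    rating differential on this Elo-ized scale. Fitted to the table above:
    each 100-point gap doubles the higher-rated team's odds."""
    return 1.0 / (1.0 + 2.0 ** (-rating_diff / 100.0))

# Reproduce the table from the post
for diff in (50, 100, 150, 200, 250, 300, 400):
    print(diff, round(win_pct(diff), 3))
```

The same curve also matches the later remark that a 120-point gap corresponds to roughly a .700 expected winning percentage on a neutral field.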
1 Stanford 2055
2 Florida St 1995
2 Virginia 1995
4 BYU 1985
5 Penn St 1950
6 San Diego St 1945
7 North Carolina 1925
7 Florida 1925
9 UCLA 1920
10 Duke 1885
10 Marquette 1885
12 Ohio St 1860
13 Texas A&M 1845
13 Baylor 1845
15 Wake Forest 1835
16 Virginia Tech 1830
17 California 1825
17 Maryland 1825
19 Tennessee 1815
20 Santa Clara 1800
21 Portland 1795
22 Pepperdine 1785
22 Michigan 1785
24 Boston College 1780
25 Miami (Florida) 1775
26 Washington St 1770
26 UCF 1770
28 Missouri 1765
29 Georgetown 1745
29 Notre Dame 1745
29 Texas Tech 1745
32 Cal St Northridge 1735
35 Kentucky 1715
35 Arizona St 1715
35 Illinois 1715
38 West Virginia 1710
39 Washington 1705
40 Wisconsin 1700
40 Auburn 1700
42 Denver 1695
43 Princeton 1680
44 Colorado College 1665
48 CS Long Beach 1655
48 Miami (Ohio) 1655
52 La Salle 1650
55 Stephen F Austin 1635
61 Central Michigan 1620
66 Rutgers 1610
69 Utah St 1590
94 North Texas 1525
100 Hofstra 1505
106 Colgate 1480
106 Florida Gulf 1480
117 Illinois St 1455
121 Wisc. Milwaukee 1445
139 Idaho St 1405
139 Radford 1405
143 Loyola MD 1395
146 Tenn-Martin 1390
146 Oakland 1390
149 Stony Brook 1380
165 Long Island 1340
190 Georgia Southern 1295
312 Miss Valley St 740
Highest-rated teams to not get a seed:
Texas A&M (Massey rank #13, Elo-ized rating 1845)
Virginia Tech (#16, 1830)
California (#17, 1825)
Tennessee (#19, 1815)
Santa Clara (#20, 1800)
Highest-rated teams to not make the bracket:
Minnesota (#33, rating 1730)
Oregon St (#34, 1715)
Louisville (#45, 1665)
San Diego (#45, 1665)
Dartmouth (#47, 1660)
Iowa (#50, 1650)
South Florida (#50, 1650)
Lowest-rated teams to earn an at-large berth:
Denver (#42, rating 1695)
Colorado College (#44, 1665)
Cal State Long Beach (#48, 1655)
Central Michigan (#61, 1620)
Rutgers (#66, 1610)
Between Minnesota, the highest-rated team to not be awarded an at-large berth, and Rutgers, the lowest-rated team to be awarded a berth, there is a difference of 33 places in the rankings and a difference of 120 rating points, which corresponds to an expected winning percentage of about .700 in Minnesota's favor on a neutral site.
It took me all day yesterday and into last night and this morning, but I've used the system I previously had set up to see what I think the at large selections should have been, based on the only criteria the Committee is allowed to consider. I'll summarize the system here, but there's a detailed explanation of it at the RPI for Division I Women's Soccer website, in relation to the 2011 at large selections and seeds, here: https://sites.google.com/site/rpifo...r/how-did-the-committee-form-the-2011-bracket.
I set the system up to systematically apply the applicable criteria to the teams I believed were in the bubble group (more about this year's bubble group farther down). It is a purely mechanical system, involving no subjective judgments, with two exceptions. I made it as mechanical as possible in order to minimize any chance of my personal biases and preferences coming into play.
The system compares each bubble team to each other bubble team in a "one-on-one" comparison. It initially looks at the three NCAA primary criteria: RPI (which has subcategories), head-to-head results of the two teams against each other, and results against common opponents. It weights each of these three criteria equally. (Assembling the common opponent results is what makes the process take a long time.) In looking at head-to-head and common-opponent results, the system follows a particular rule for ties: a home tie is treated as a loss, and an away tie is treated as a win.
Within the RPI criterion, the system looks at three subcategories: Adjusted RPI, Adjusted Non-Conference RPI, and finishing position within the conference when comparing teams from the same conference. The system weights each of these subcategories equally. (I include finishing position within the conference because in the Pre-Championship Manual, the description of what the Committee looks at under the RPI criterion includes this.) For finishing position within the conference, the system looks at both regular season placement and conference tournament placement, weighting them equally.
After completing the "one-on-one" comparison of Team A to Team B for each of the three primary criteria, one team comes out ahead or the two teams are tied. The system then does an overall tally to see how many of the bubble teams each bubble team came out ahead of. This gives an initial ranking of the teams. If a team comes out ahead of all other teams, it is "in"; and if it comes out behind all other teams, it is "out." This happens sometimes, but seldom. The system makes one exception, however: It will not put a team "in" or "out" if the only basis for comparison is the RPI.
After the initial selection of "in" and "out" teams, if there is any clear initial selection, the system then looks at the two secondary criteria as applied to the remaining teams: record against teams already selected for the bracket (including automatic qualifiers rated #75 or better by the ARPI) and record over the last eight games. For both of these criteria, it considers both record and strength of opponents.
The second of these criteria always has been something of a mystery to me, and my conclusion is that its only proper use is to allow consideration of recent games if a team has absolutely fallen apart or jumped miles forward in the last portion of the season. (I don't particularly like this criterion because it does not value all games equally, but its use might be legitimate if limited to extreme circumstances.) For the first criterion, having applied it now for several years, I'm convinced it is looking to see the level at which teams have shown they're able to compete -- meaning the emphasis is on good results (wins or ties) against highly ranked teams.
These two criteria are the ones that require the exercise of subjective judgment. The "last eight games" criterion is easy, as I look only to see if any team has done spectacularly well or poorly over the last eight games. In four years, I have yet to see a team to which this criterion applies as I use it. If no team fits the criterion, all teams get assigned a "0" value. The "results against teams already selected" criterion is the more difficult one. The system generates a list of each team's results against teams already selected (higher ranked than the bubble group, plus automatic qualifiers ranked #75 or better). I then use my own subjective judgment to arrange the teams in order. In making my judgment, I don't care about losses. What I care about is the best portfolio of good results, meaning wins and ties.
Then, once I've ranked the teams and entered the rankings into the system, the system compares the teams' ranks and, for each team, determines who it is ahead of, who it is equal to, and who it is behind.
The system then adds the two secondary criteria to the three primary criteria. The system weights the two secondary criteria together (ordinarily reflecting only the "results against teams already selected" criterion) as equal to the three primary criteria together. It compares each team to each other team, determining based on all criteria how many teams each team comes out ahead of.
Based on how many teams each team comes out ahead of, I then identify a logical "cut" point either to put teams "in" or to put teams "out." Once I've put teams "in" or "out," if it's still too close among the other teams, I do another comparison of the remaining teams to each other to re-rank them. I keep doing this until I have identified the right number of teams to be "in."
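The pairwise tally described in the steps above can be sketched roughly as follows. The half-point treatment of ties (which would explain tallies like 5.5), the team names, and the `criteria` callables are my assumptions about how such a system might be coded, not cpthomas's actual setup:

```python
def compare(team_a, team_b, criteria):
    """One-on-one comparison of two bubble teams.

    Each criterion is a callable returning a positive number if team_a
    is ahead on that criterion, negative if team_b is, and 0 for a tie.
    Returns +1 if team_a wins the comparison overall, -1 if team_b
    does, and 0 if the teams come out even."""
    score = sum(c(team_a, team_b) for c in criteria)
    return (score > 0) - (score < 0)

def tally(teams, criteria):
    """For each team, count how many other bubble teams it comes out
    ahead of; a tied comparison is assumed to award each team half a
    point, which would produce the fractional tallies seen later."""
    wins = {t: 0.0 for t in teams}
    for i, a in enumerate(teams):
        for b in teams[i + 1:]:
            result = compare(a, b, criteria)
            if result > 0:
                wins[a] += 1
            elif result < 0:
                wins[b] += 1
            else:
                wins[a] += 0.5
                wins[b] += 0.5
    return wins
```

The repeated "cut and re-compare" step then amounts to removing the "in" and "out" teams from `teams` and running `tally` again on whoever remains.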
Ordinarily, I have a rule I use for setting up the "bubble" group to which I will apply the system. Once we know the automatic qualifiers, I set the center of the bubble as the last team that would be "in" if the Committee used only the ARPI as the decision-maker. This year, that team would have been Rutgers at #47. I then take the 7 teams on each ranking side of the bubble center and those 14 teams plus the center comprise the bubble, a total of 15 teams in all. If, in identifying the teams on either side, I encounter an automatic qualifier within the group, then I extend the bubble on that side to achieve a total of 15 bubble teams. So, ordinarily, what I would come up with this year would be the following:
Higher ranked side of bubble:
Colorado College #46
Washington State #44
[Princeton #41 AQ]
Long Beach State #40
Arizona State #39
Lower ranked side of bubble:
Miami FL #50
[Cal State Northridge #51 AQ]
South Florida #52
[Florida Gulf Coast #53 AQ]
[North Texas #54 AQ]
Oregon State #56
[Utah State #57]
William & Mary #58
Although I didn't have a cheat sheet, as I was going through this, the NCAA announced the bracket. I didn't know the details, but I knew that Dartmouth had not gotten a selection. Miami OH was #38 and an AQ, so Dartmouth at #37 would have been the next team in the bubble on that side. I decided to add Dartmouth to the bubble group to see how the system worked with them in it.
I also thought about why the Committee might have added them to the bubble. My guess is that, in looking at the bubble, they saw the four AQs in the "poor" end of the bubble as compared to the one AQ in the "good" end and felt the "poor" end, due to the four AQs, was extending too far down in the rankings. So they dropped William & Mary out of the bubble at the poor end and added Dartmouth at the good end. That would be a reasonable decision and possibly even a very good one. What this would have meant is that the center of the bubble moved one spot towards the good side, so that instead of selecting eight teams from the 15-team bubble, they would be selecting nine. They might even have taken a quick look at William & Mary, which would have led to an easy conclusion that they were not going to be selected under any circumstance and weren't really even a legitimate contender for an at large position. Dartmouth's #37 is a pretty good ARPI for a bubble team, but in recent years a team ranked as high as #38 did not get an at large selection, so #37 as a bubble team is not extraordinary.
With all of that as background, here is what my system came up with:
After doing the primary criteria, there were no clear "in" teams. William & Mary clearly was "out," and had no effect on how any of the other teams related to each other. Again, all of the process to this point was mechanical.
For the "results against teams already selected" criterion, I ranked the teams as follows (leaving William & Mary on the list, although with no effect on the process):
2. Miami FL
3. Long Beach State
4. Washington State
7. Colorado College
9. Oregon State
10. Arizona State
11.5 South Florida
14.5 William & Mary
After adding the rankings to the system, seeing who came out ahead of whom again based on all the criteria, and determining how many teams each team now came out ahead of, my first run produced the following results:
Long Beach State came out ahead of 13 teams
Washington State 12
Colorado College 7
Miami FL 6
Arizona State 5.5
Oregon State 4.5
South Florida 3
William & Mary 0
At this point, I had to decide where to establish the "in" point. I decided to establish it at Miami's 6, so that Miami and all the teams with higher numbers were "in," for a total of 7 teams in and 2 remaining to be selected.
I then eliminated the "in" teams from the comparison process and did a comparison of the remaining teams. It came out as follows:
Arizona State 5.5
Oregon State 4.5
South Florida 3
William & Mary 0
Based on this, Arizona State and Auburn got the remaining 2 "in" selections.
The system's "in" selections exactly match the Committee's.
This is the third year in a row that the system has produced the Committee's at large selections. The one quirk this year is in the identification of the bubble pool, so my previous thinking about how the Committee defines the pool was not exactly right. I'm sure the Committee does not go through the exact systematic process that my system does, but I believe it's pretty clear that the system captures the essence of the Committee's process. I believe it's also pretty clear that the Committee is following the NCAA's mandatory criteria very religiously when it comes to the at large selections.
I haven't run the seeding process through my system yet, but now can start on it.
Chris, as always, great work in breaking down the selection process. Thanks for all of your time and effort.
Can someone with some know-how create a prediction board?
Let's have a contest and see who comes closest to predicting winners/Final Four/ Champion.
Like we do every year.
Thanks Morris20. Knowing about the sports betting law in NJ, the rule makes sense now. I looked up the info myself too: http://espn.go.com/college-sports/s...ing-new-jersey-championships-due-betting-laws
Looks like this was just passed less than 3 weeks ago. So programs like Rutgers and Princeton, along with the 6 other Division I programs in NJ will never be able to host while this is in effect. I am sure programs within 350 miles in NY (where Rutgers is going), WV (where Princeton is going), MA, CT, MD, DE, RI, D.C., northern VA, and PA don't mind this rule at all. It affects DII and DIII schools as well.
And yet UNLV has hosted tournament events plenty of times....
I now also have posted my weekly comparison between the NCAA's RPI and my "Improved" RPI intended to help minimize the RPI's regional problems, at the webpage mentioned above. I haven't done any analysis, but a quick glance indicates to me there would have been little, if any, change in the at large selections. Central Michigan would have been a bubble team. I don't know how they would have fared in the bubble process. (The Improved RPI rankings actually support some of the Committee's 2012 at large decisions.)
And if they fall, Boston College. Not impossible; they tied at Stanford early in the season.