Obviously, there's a new PGR out. I'd like to sort of renew my complaint about the way the data is presented: in terms of ordinal rankings rather than mean scores. For example, the ordinal rankings from previous years are reported going back to 2002, but the mean scores from those years are omitted. This is bad because, even assuming that the procedure for assigning numbers to departments measures anything, the ordinal rankings are a derived quantity that carries far less information than the mean scores on which they are based. For example, we might notice that Texas at Austin has fallen from 13th in '06 - '08 to 20th in the current edition. But while this drop appears to be steep, it could have been caused by any number of things: a decline in quality at UT (obviously); an improvement by a number of neighboring departments with no decline at UT; or some combination of neighborly improvement and Texan decline. It turns out that although the seven-place drop was caused by a decline in UT's mean score, that decline - 3.6 in '06, 3.4 now - is in a range we have been led to believe is statistically insignificant. So, in effect, the report measured no change in the quality of UT's department.
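To see just how lossy the rank is as a summary, here's a toy sketch in Python (the scores and department names are invented for illustration, not PGR data): a department's mean can hold perfectly steady while its ordinal rank falls, simply because its neighbors improved.

```python
# Invented scores for illustration only -- not actual PGR data.
means_2006 = {"UT": 3.6, "A": 3.5, "B": 3.4, "C": 3.3}
means_2009 = {"UT": 3.6, "A": 3.7, "B": 3.8, "C": 3.9}

def rank(scores, dept):
    """Ordinal rank of dept: 1 = highest mean (ties broken arbitrarily)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(dept) + 1

print(rank(means_2006, "UT"))  # 1
print(rank(means_2009, "UT"))  # 4: rank fell three places, mean unchanged
```

Publish only the ranks from that comparison and you hide exactly the fact that matters: the mean never moved.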
In the current issue, NYU is ranked #1 (with a mean score of 4.9) and Rutgers #2 (with a mean of 4.6); although this gap is slightly bigger than in the previous edition, we still have every reason to suspect the .3 difference in their mean scores is meaningless: it falls within the range Leiter himself cites as statistically insignificant. Moreover, both schools have a median score of 5, which means that at least half of the respondents gave each of them a 5 - a further reason to regard them as tied. The numbers we have suggest that any difference between the two is too small to be measured by the report's methods. So I'd much rather see the mean scores than the ordinal ranks.
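For what it's worth, the median claim is easy to check with toy numbers. Assuming the familiar 0-5 scale with 5 as the top score (and rating vectors I've invented, not the actual responses), two pools of raters can produce means of 4.9 and 4.6 while both medians sit at 5:

```python
from statistics import mean, median

# Invented rating vectors on a 0-5 scale -- not the actual PGR responses.
nyu_like     = [5, 5, 5, 5, 5, 5, 5, 5, 5, 4]  # mean 4.9
rutgers_like = [5, 5, 5, 5, 5, 5, 4, 4, 4, 4]  # mean 4.6

for scores in (nyu_like, rutgers_like):
    print(mean(scores), median(scores))  # 4.9 5.0, then 4.6 5.0
```

Since a median of 5 on a scale capped at 5 means at least half of each pool gave a 5, the .3 gap lives entirely in the lower tail of the responses - exactly the sort of detail an ordinal ranking throws away.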