According to Leiter, the biggest movers of 2014 are the following, along with their numerical scores from both the 2011 and 2014 versions of the Report (I omit Saint Louis University, which was not evaluated in 2011):
Yale University (from #7 to #5, occupying that spot by itself)
  Yale 2011 mean score: 4.0
  Yale 2014 mean score: 4.1
University of Southern California (from #11 to #8, tied with Stanford)
  USC 2011 mean score: 3.8
  USC 2014 mean score: 3.9
University of California at Berkeley (from #14 to #10, tied with others)
  Berkeley 2011 mean score: 3.7
  Berkeley 2014 mean score: 3.8
University of California at Irvine (from #29 to #24, tied with others)
  UCI 2011 mean score: 3.0
  UCI 2014 mean score: 3.0
Washington University in St. Louis (from #31 to #24, tied with others)
  Wash U 2011 mean score: 2.9
  Wash U 2014 mean score: 3.0
University of Virginia (from #37 to #31, tied with others)
  UVA 2011 mean score: 2.7
  UVA 2014 mean score: 2.8
University of Connecticut, Storrs (from #50 to #37, tied with others)
  UConn 2011 mean score: 2.3
  UConn 2014 mean score: 2.7

Of the "big movers" that were included in the 2011 survey, only UConn's mean score has significantly improved. All of the others improved by a trivial margin of 0.1, except the University of California at Irvine, whose mean score stayed exactly the same.
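To make those deltas concrete, here is a minimal Python sketch (the script and the department labels are mine; the scores are Leiter's, taken from the list above):

```python
# 2011 -> 2014 mean-score changes for the "big movers" listed above
scores = {
    "Yale":      (4.0, 4.1),
    "USC":       (3.8, 3.9),
    "Berkeley":  (3.7, 3.8),
    "UC Irvine": (3.0, 3.0),
    "Wash U":    (2.9, 3.0),
    "UVA":       (2.7, 2.8),
    "UConn":     (2.3, 2.7),
}
for dept, (score_2011, score_2014) in scores.items():
    print(f"{dept}: {score_2014 - score_2011:+.1f}")
# Every change is +0.1 or +0.0 except UConn's +0.4.
```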
The bulk of the rankings are densely packed and ties are common, which means that apparently substantial jumps in ordinal rank can be caused by changes in mean evaluator score that are negligible by comparison, or, in the case of UC Irvine, by no change whatsoever. In the case of UCI, what actually happened was this: Indiana and Duke fell from 3.1 to 3.0, UMass and Ohio State fell from 3.1 to 2.9, and Colorado fell from 3.1 to 2.8. None of these departments changed by very much (two by 0.1, two by 0.2, and one by 0.3; Leiter suggests that differences of 0.4 or less are unimportant), but it was enough to cause UCI to jump five spots and create the illusion of a substantial improvement.
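To see the mechanics, here is a toy Python sketch of the standard competition-ranking rule, on which a department's rank is one plus the number of departments scoring strictly higher. The five falling scores are the ones reported above; the 23 departments I park at 3.2 are placeholders, chosen only so that UCI starts at #29:

```python
def ordinal_rank(score, field):
    """Competition ranking: 1 + the number of strictly higher scores."""
    return 1 + sum(1 for s in field if s > score)

# Placeholder scores for the 23 departments that stay above UCI both years.
stable_top = [3.2] * 23

field_2011 = stable_top + [3.1, 3.1, 3.1, 3.1, 3.1]  # Indiana, Duke, UMass, OSU, Colorado
field_2014 = stable_top + [3.0, 3.0, 2.9, 2.9, 2.8]  # the same five after their declines

print(ordinal_rank(3.0, field_2011))  # 29 -- UCI's 2011 rank
print(ordinal_rank(3.0, field_2014))  # 24 -- UCI jumps five spots without moving
```

UCI's own score of 3.0 never changes; the five-spot jump is produced entirely by small declines in the departments around it.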
Kieran Healy's analysis of the 2006 PGR data showed that "in many cases" differences of 0.1 were "probably not all that meaningful." As far as I know, this is the only attempt anyone has made to perform this kind of analysis on Leiter's data, and although Leiter says Healy will be calculating confidence intervals for the 2014 edition, those calculations are unfortunately not yet available. But on the assumption that the 2014 numbers behave like their counterparts from 2006, there is reason to doubt whether these differences of 0.1 or less represent real differences at all. If so, then in all but one of the cases Leiter singled out as "big movers," the 2014 survey measured no movement whatsoever.
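For a sense of what those confidence intervals would look like, here is a rough sketch with entirely hypothetical evaluator ratings (I don't have the PGR's underlying rating distributions, so the sample sizes and spreads below are invented). Two departments whose means differ by 0.1 come out statistically indistinguishable:

```python
import math

def mean_with_ci(ratings, z=1.96):
    """Mean and 95% normal-approximation confidence interval."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mean, mean - half_width, mean + half_width

# Hypothetical evaluator ratings, 50 per department.
dept_a = [3, 3, 3, 3, 4, 4, 2, 3, 3, 3] * 5  # mean 3.1
dept_b = [3, 3, 3, 3, 3, 3, 2, 3, 4, 3] * 5  # mean 3.0

for name, ratings in (("A", dept_a), ("B", dept_b)):
    mean, lo, hi = mean_with_ci(ratings)
    print(f"dept {name}: mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# The intervals overlap, so the 0.1 gap in means could easily be noise.
```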
And so, as I have said before, there is a general problem with this kind of ordinal scale: it fails to represent the magnitudes of the differences between ranked departments. As another example, the most recent data has NYU as the top-ranked department with a mean score of 4.8, which beats #6-ranked Harvard and Pittsburgh by a margin of 0.8. That same interval of 0.8 also separates the departments at #6 from UC San Diego, which comes in at #23. I, for one, find it impossible to look at the PGR and see these differences accurately. To my eye, the way the information is presented significantly understates the gap between NYU and Harvard/Pitt, and dramatically overstates the gap between Harvard/Pitt and UCSD.
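The arithmetic is trivial, but it is worth putting the two gaps side by side; a short sketch using the three data points just mentioned:

```python
# (ordinal rank, 2014 mean score) for the three departments mentioned above
depts = {"NYU": (1, 4.8), "Harvard/Pitt": (6, 4.0), "UC San Diego": (23, 3.2)}

for a, b in (("NYU", "Harvard/Pitt"), ("Harvard/Pitt", "UC San Diego")):
    rank_gap = depts[b][0] - depts[a][0]
    score_gap = depts[a][1] - depts[b][1]
    print(f"{a} -> {b}: {rank_gap} spots apart, {score_gap:.1f} apart in mean score")
# NYU -> Harvard/Pitt: 5 spots, 0.8 in score.
# Harvard/Pitt -> UC San Diego: 17 spots, the very same 0.8 in score.
```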
Finally, I should say that I was glad to read that Kieran Healy will be calculating confidence intervals this time around. I think that information would be helpful. However, I bristle a little bit at the attribution of this idea to a session at the 2013 Central Division APA meeting; I raised this idea in 2009.