The National Research Council publicly released its most recent set of rankings for Ph.D. programs across the country at a press event yesterday afternoon.

While the rankings — which are released approximately once each decade — have traditionally carried a great deal of weight within higher education, some leaders in the field are criticizing this year’s rankings, arguing that they say little about how the quality of Ph.D. programs across the country has changed.

A major reason for the criticism stems from a dramatic change in the methodology used to calculate the rankings — which delayed the release of this most recent set of NRC rankings by three years. In addition, critics are concerned that the new system presents the rankings in ranges, instead of as a single numeric rank for each program at each institution as was the case in the past.

With the new methodology and the new format for presenting programs’ rankings, it is increasingly difficult for even higher education officials to say with confidence whether their programs have improved or declined relative to the last NRC ranking, which was released in 1995.

In 1982, 87 percent of the University of Michigan’s Ph.D. programs were in the top quartile nationally. That prestigious percentage fell in the 1995 rankings, when only 79 percent of the University’s Ph.D. programs ranked in the top 25 percent nationwide.

Because of the breadth of the new range system used by the NRC, today’s figures show that anywhere from an astounding 95 percent to a dismal 20 percent of the University’s Ph.D. programs could rank in the top 25 percent nationally.

In an exclusive interview with The Michigan Daily before the rankings became publicly available, Rackham Dean Janet Weiss discounted the emphasis often placed on program rankings.

“It is what it is,” Weiss said of the NRC’s rankings of University Ph.D. programs. “It’s a product of a particular methodological approach. The methodological approach has its advantages and the methodological approach has its disadvantages.”

“I think the message the NRC is intending to convey and certainly the message that I come away with is that this doesn’t tell us very much about how our programs rank in a sense of any very close analysis of how we compare to our peer institutions,” Weiss later added. “It really is most valuable for telling us something about how these individual data elements compare to other institutions.”

For instance, Weiss said University officials could examine the raw data for criteria like the time to degree completion among peer programs to determine whether the University’s program was on par with its colleagues across the country. Then, officials could determine what action, if any, needs to be taken to improve in the areas that the University values most.

“I think what most of what we glean from (the survey and rankings), in very broad strokes you can see which of our programs are very strong nationally and which of them are more toward the middle,” Weiss said. “We don’t have any programs, fortunately, down toward the bottom of the distribution.”

And while it’s true that the University is extremely competitive on virtually all fronts, there are some programs that could have significant room for improvement.

The astronomy and astrophysics Ph.D. program was ranked between the 13th and 32nd best in the country, though only 33 such programs exist nationally. The University’s interdisciplinary program in cell and developmental biology also received less-than-ideal marks, being placed between the 14th and 74th percentiles nationally.

Other programs, like the University’s Ph.D. program in public policy, can’t easily be classified as doing well or poorly, having been placed between the 9th percentile and the 89th percentile nationally.

In the interview last week, Weiss remarked that she believed the University’s Ph.D. program in statistics had made clear improvements, while its Ph.D. programs in anthropology and classical studies had seemed to drop in the rankings. But in an e-mail interview yesterday morning, Weiss said she’s not interested in the rankings because of limitations surrounding the data.

She pointed out in the interview last week that the rankings released by the NRC yesterday could arguably already be out of date, since the data used to generate the rankings was drawn from the 2005-2006 academic year. Since that time, many schools have seen significant turnover in faculty — which could alter both the raw data collected for the rankings and the perception of how important each criterion is to faculty members in a particular field.

“If you did this over again today, you would get different results because a lot of this is based on the individual faculty members,” Weiss explained. “The thing that is going to be very useful for us is not the rankings … it’s the data.”

While the methodology consists of complex statistical analysis and different weighting systems that produce separate ranges of rankings for each Ph.D. program at each school, the analysis can be boiled down to two different general processes.

The first, which generates what the NRC calls its S statistic, uses a survey of all faculty members at all universities within a specific Ph.D. field to measure how important the faculty believe each of the 20 criteria evaluated is to the overall success of a Ph.D. program in that field.

Those weights are then used to generate 500 different rankings with random variation for each program. The NRC then eliminates the top 5 percent and bottom 5 percent of the resulting range of rankings for each school’s program in that field to determine the individual program’s ranking.

By eliminating the top 5 percent and bottom 5 percent, the NRC was able to remove outliers from the range of possible rankings for each program while still ensuring 90-percent confidence in the figures released, Weiss explained.
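In rough terms, the S-ranking procedure can be thought of as a Monte Carlo simulation. Below is a minimal Python sketch of that idea; the program data, survey weights, and amount of random variation are all invented for illustration and are not the NRC’s actual figures or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: standardized values of the 20 criteria for each
# program in a field (rows = programs, columns = criteria), plus mean
# importance weights from the faculty survey and their spread.
n_programs, n_criteria, n_draws = 30, 20, 500
program_data = rng.normal(size=(n_programs, n_criteria))
survey_mean_weights = rng.uniform(0.0, 1.0, size=n_criteria)
survey_weight_spread = 0.1

ranks = np.empty((n_draws, n_programs), dtype=int)
for i in range(n_draws):
    # Each of the 500 iterations perturbs the survey weights to mimic
    # variation in how faculty value the criteria.
    weights = survey_mean_weights + rng.normal(0.0, survey_weight_spread, n_criteria)
    scores = program_data @ weights
    # Rank 1 goes to the program with the highest weighted score.
    ranks[i] = scores.argsort()[::-1].argsort() + 1

# Dropping the top and bottom 5 percent of each program's 500 simulated
# ranks leaves the 90-percent range that the NRC reports.
low = np.percentile(ranks, 5, axis=0).astype(int)
high = np.percentile(ranks, 95, axis=0).astype(int)
for p in range(n_programs):
    print(f"Program {p + 1}: ranked between {low[p]} and {high[p]}")
```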

However, to account for the potential gap between what faculty members say they value and what they actually value in the quality of Ph.D. programs, the NRC also used a second method of evaluation to generate an alternative set of rankings.

In this process, which generated what the NRC called its R rankings, faculty across the country rated the quality of peer programs on a one-to-six scale. A regression analysis was then used to determine how strongly each of the 20 criteria evaluated was related to the perception of quality in the eyes of fellow faculty members.

Those correlations determined the weight assigned to each variable, and the process was repeated on random halves of the survey responses to generate 500 results for each program. The NRC again eliminated the top 5 percent and bottom 5 percent of the results to produce a range of potential rankings, with a 90-percent confidence rating, for each university’s programs in the field.
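A similarly simplified sketch, again using entirely made-up data rather than anything from the NRC, shows how a regression on faculty ratings could produce the R weights, with each of the 500 iterations re-fit on a random half of the responses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: each row pairs one faculty rating of a program on
# the one-to-six quality scale with that program's 20 criteria values.
n_ratings, n_criteria, n_programs, n_draws = 2000, 20, 30, 500
program_data = rng.normal(size=(n_programs, n_criteria))
rated_program = rng.integers(0, n_programs, size=n_ratings)
hidden_weights = rng.uniform(0.0, 1.0, size=n_criteria)
ratings = np.clip(program_data[rated_program] @ hidden_weights
                  + rng.normal(0.0, 1.0, n_ratings), 1.0, 6.0)

ranks = np.empty((n_draws, n_programs), dtype=int)
for i in range(n_draws):
    # Re-fit the regression on a random half of the survey responses,
    # so every iteration yields a slightly different set of weights.
    half = rng.choice(n_ratings, n_ratings // 2, replace=False)
    X, y = program_data[rated_program[half]], ratings[half]
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    scores = program_data @ weights
    ranks[i] = scores.argsort()[::-1].argsort() + 1

# As with the S rankings, the extreme 5 percent at either end is dropped.
low = np.percentile(ranks, 5, axis=0).astype(int)
high = np.percentile(ranks, 95, axis=0).astype(int)
```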

NRC officials had originally proposed eliminating the top 25 percent and bottom 25 percent of the results from both the R and S rankings to generate a smaller range for each program’s ranking. However, doing so would have produced less reliable rankings, because the narrower ranges would have represented only a 50-percent confidence rating, and the NRC ultimately decided to publish the broader ranges.

———-

Below is a list of the 20 criteria used by the NRC to evaluate the quality of Ph.D. programs across the country. The weight assigned to each criterion was determined on a program-by-program basis using a survey of faculty members in the field and a statistical regression analysis.

1. Ratio of publications per faculty member
2. Ratio of citations per publication
3. Percent of faculty with grants
4. Percent of faculty who are interdisciplinary
5. Percent of non-Asian minority faculty
6. Percent of female faculty
7. Ratio of honors and awards per faculty member
8. Average student score on the quantitative section of the Graduate Record Examination
9. Percent of first year students with full support
10. Percent of first year students with competitive external funding
11. Percent of non-Asian minority students
12. Percent of female students
13. Percent of international students
14. Average number of Ph.D.s awarded per year between 2002 and 2006
15. Percent of students who complete their Ph.D. within six years (eight years for students in the humanities)
16. Average time to degree completion
17. Percent of students who secure academic jobs after graduating
18. Availability of student work space
19. Percentage of students with health insurance
20. Number of professional development activities offered
