Across the country, higher education officials are poring over copious amounts of data, bracing for the release of the nation’s most respected survey and ranking of Ph.D. programs later this week.

The rankings, which are assembled by the National Research Council of the National Academies, will be released publicly on Tuesday at 1 p.m. That release date comes several years — yes, years — after the rankings were originally scheduled to be released. And with a new methodology in place, questions remain about how well the rankings will be received within the field of higher education.

In an e-mail interview over the weekend, University President Emeritus James Duderstadt said past rankings have carried a great deal of weight at universities across the country, and have been an important factor in determining universities’ priorities.

“Done correctly, they can be very valuable,” Duderstadt wrote of the rankings. “Both the 1982 and 1995 rankings influenced University decisions about investment (and, in a few cases, disinvestment).”

However, Duderstadt said the new methodology being used in the survey means it may be difficult for institutions to accurately compare previous rankings to the new set being released on Tuesday.

“One of the problems with the new evaluations is that they do not connect well with the earlier efforts and hence are unlikely to provide a useful measure of progress (or deterioration),” Duderstadt wrote.

The University has done well in the past two NRC rankings, with approximately 80 percent of its Ph.D. programs rated in the top 25 percent nationally each time. But the new criteria could alter the perceived quality of some programs.

Among the new criteria, the NRC has divided quality into three distinct categories: research impact; student support and outcomes; and diversity of academic environment.

According to the NRC’s website, factors being considered within the research impact cluster of quality include the number of research citations per faculty member, the number of publications per faculty member and the number of honors and awards won per faculty member.

The student support and outcomes evaluation cluster will be based on factors including the average time to degree completion, the percentage of students with full tuition support and the program’s attrition rate.

The diversity of academic environment category in the rankings will be based primarily on the proportion of minority and female students and faculty, among other factors.

Individual factors within each of the clusters are combined using a weighted system to produce an overall score for each program, which is then compared with the scores of Ph.D. programs at other universities across the country.
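The NRC’s actual weights and normalization are not spelled out here, but as a purely illustrative sketch, a weighted scheme of this kind produces an overall score of the form

$$S = \sum_{i=1}^{n} w_i x_i, \qquad \sum_{i=1}^{n} w_i = 1,$$

where $x_i$ is a program’s standardized value on the $i$-th factor (citations per faculty member, attrition rate and so on) and $w_i$ is the weight assigned to that factor.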

“It is very extensive, both in collection and analysis,” Duderstadt wrote of the methodology. “Yet it is a new (and controversial) scheme, and the jury will be out for some time as to whether it is useful or not.”

Duderstadt also wrote that the changes in methodology have caused the survey, traditionally released approximately once a decade, to be delayed several years.

“Most of the delay of the current evaluation (three years) had to do with fine-tuning the methodology into a format acceptable to the universities,” Duderstadt wrote.

Paired with the new methodology, that delay has left many programs eagerly awaiting the new rankings. The rankings were released to institutions earlier this month, though they won’t be publicly available until Tuesday.

Though the rankings have traditionally been very important to leaders in higher education, Duderstadt said they are not nearly as important to students or industry leaders.

“While faculty and academic administrators may eventually find useful information in this analysis, I seriously doubt whether it will be very meaningful to students,” Duderstadt wrote. “I’ve always felt that the best guide to students is whether there are well-known faculty in the areas of their particular interests rather than any generic ranking of the program.”

Even then, Duderstadt said those who do consult the rankings should weigh them lightly.

“While these are infinitely more rigorous and, if interpreted correctly, useful than the more popular ‘league tables’ such as U.S. News & World Report, QS, Times Higher Education or Shanghai Jiao Tong, they should still be taken with a very hefty grain of salt,” Duderstadt wrote.
