

Ranking Relevance

Which Universities Rise and Which Fall in International Relations?

Photo: University of Michigan (Shutterstock)

Since World War II, policymakers have expressed consistent interest in drawing upon academic social science expertise to improve national security policymaking. Despite the best of intentions, attempts to bridge the “theory-policy gap” have been fraught with frustration for those on both sides of the divide.

While many factors widen the gap between the academy and the policy world, one of the most important is the growing salience of rankings of scholarly programs. Rankings incentivize policy irrelevance in the social sciences by rewarding strictly scholarly criteria of excellence. The “gold standard” among them is the National Research Council of the National Academy of Sciences (NRC) ranking of “Doctoral Programs in Political Science.” In our initial work, we showed how the NRC methodology was systematically biased against policy-relevant work in international relations. We also suggested some preliminary ways of assessing which of the top 50 political science departments were doing well at fostering policy engagement among their faculty.

Since then, we have added a broader range of policy engagement measures (publication in policy-relevant journals and other indications of engagement outside the Ivory Tower) and applied them to other subfields of political science. Below we summarize our top-line findings for the political science subfield of international relations.

The Tarnished Gold Standard

In its most recent major effort to rank universities, released in 2010, the NRC sought to replace simple “reputational” rankings like those of U.S. News and World Report with a more “scientific” assessment of doctoral programs. Unfortunately, the NRC rankings measure academic quality very narrowly. They credit only work published in disciplinary journals and exclude academic work published in books and non-peer-reviewed publications. Counting only scholarly publications rewards those who are prominent in self-referential academic discourse and gives little or no credit to scholars who address broader issues and audiences.

Moreover, many of the most policy-oriented scholars in international relations publish in books as well as scholarly articles. Ignoring books understates the impact of many leading scholars. For example, the late Samuel Huntington of Harvard amassed a staggering 15,335 Social Science Citation Index hits. However, 11,458 of them—nearly 75 percent—were to his many books rather than his journal articles. If we excluded books from any analysis of Huntington’s scholarly impact on the discipline of political science, we would be greatly underestimating it.

Excluding books from the rankings also means that scholars have little incentive to write in the depth required for policymaking, and in an accessible fashion that might appeal to a wider audience. Because of their length, books allow for the investigation of issues in much greater depth than articles. Many of the most pressing policy issues require such in-depth analysis to produce useful information for policymakers. Books also have to be more accessible to a broader audience because even university presses depend upon sales to stay in business, a market pressure that most scholarly journals do not face.

The new rankings we present here score departments based on the publications of their faculty in policy journals like Foreign Affairs, Foreign Policy, the Atlantic, and the New Republic, among others. They also rank faculty based on the number of appearances they and their work have made in the print media, as recorded by LexisNexis, since the year they received their Ph.D. Finally, they rank departments based on the number of times their faculty have provided testimony to Congress. All of these measures are strong indicators of a department’s engagement with the broader public discourse outside of its academic silos.

With the continuing support of the Carnegie Corporation of New York, we have expanded our initial efforts by adding a broader range of relevance measures and applying them to a greater number of departments. We have also applied our rankings beyond just the sub-field of international relations to include other parts of political science, including American politics, comparative politics, and political theory.

Gathering the Data and Unpacking the Rankings

One broader measure of academic impact we highlight here is our Academic Book Ranking. Because of the importance of books to the study of political science, our new rankings count the number of books that an international relations (IR) scholar has produced and weight each book based on the impact of the press that published it.
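To make that logic concrete, here is a minimal sketch of how such a weighted count could be computed. The press weights, default weight, and example books are hypothetical illustrations, not our actual coding scheme.

```python
# A minimal sketch, with made-up press weights, of a weighted book count.
# The weights and book list are hypothetical; they are not our actual data.
PRESS_WEIGHTS = {
    "Cambridge University Press": 1.0,
    "Cornell University Press": 0.9,
    "Routledge": 0.6,
}
DEFAULT_WEIGHT = 0.5  # assumed weight for presses not in the table


def book_score(books):
    """books: list of (title, press) tuples for one scholar."""
    return sum(PRESS_WEIGHTS.get(press, DEFAULT_WEIGHT) for _title, press in books)


# Example: a scholar with two books
print(book_score([("Theory of Alliances", "Cambridge University Press"),
                  ("Essays on Strategy", "Routledge")]))  # 1.6
```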

We also gathered data across a series of measures of policy engagement that sought to determine the extent to which IR scholars are engaged in broader policy debates and issues. We gathered citation counts for publications in policy journals that tend to focus upon contemporary policy issues, such as Foreign Affairs, Foreign Policy, Journal of Democracy, Wilson Quarterly, Atlantic, the Nation, New Republic, New York Review of Books, American Prospect, Weekly Standard, National Review, Policy Review, Regulation, Reason, and the American Conservative. In addition, we awarded IR scholars a Media Profile Score based on the number of times they or their work appeared in the print media, as recorded by LexisNexis, since they received their Ph.D. We also ranked departments based on the number of times that IR faculty testified before Congress. Finally, we ranked departments based on the number of times their faculty members were offered the Council on Foreign Relations International Affairs Fellowship, an opportunity to spend a year in government engaging in policy work. Averaging a department’s rankings across these four measures of policy engagement generated a school’s “Policy Engagement Ranking.”
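A rough sketch of that averaging step, using invented rank positions for a single department, might look like the following; the measure names and numbers are illustrative only.

```python
# Sketch of averaging a department's rank positions on the four
# policy-engagement measures into a single score (lower is better).
# The ranks below are invented for illustration.
import statistics

ranks = {
    "policy_journal_citations": 12,
    "media_profile": 8,
    "congressional_testimony": 20,
    "cfr_fellowships": 15,
}

policy_engagement_score = statistics.mean(ranks.values())
print(policy_engagement_score)  # 13.75; departments are then re-ranked on this average
```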

Visualizing the Data

We provide a number of ways to visualize the findings across the different measures of academic impact and policy engagement. The table below displays the relationship between a school’s NRC ranking and its rankings for Academic Books and Policy Engagement. It is also possible to sort the schools from highest to lowest or lowest to highest on a given measure by clicking on the indicator title at the top of the column.

The arrow plot below allows you to select one of our 17 indicators to see how the schools’ rankings on that indicator diverge from their NRC rankings. A red arrow indicates that a school’s ranking on that measure declines relative to its NRC ranking. A teal arrow indicates that the school’s ranking in that category rises relative to its NRC ranking. The longer the arrow, the greater the divergence between that indicator and the school’s original NRC ranking.
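For readers who want to sketch this kind of chart with their own data, here is a small matplotlib example of a divergence arrow plot. The schools and ranks are placeholders, and the interactive chart on this page is not built this way; the sketch only illustrates the idea.

```python
# Illustrative arrow plot: each school gets an arrow from its NRC rank to its
# rank on another indicator. Red = falls (larger rank number), teal = rises.
# Schools and ranks below are placeholders, not our actual data.
import matplotlib.pyplot as plt

schools = ["School A", "School B", "School C"]
nrc_rank = [5, 12, 30]
indicator_rank = [15, 4, 28]

fig, ax = plt.subplots()
for y, (start, end) in enumerate(zip(nrc_rank, indicator_rank)):
    color = "red" if end > start else "teal"
    ax.annotate("", xy=(end, y), xytext=(start, y),
                arrowprops=dict(arrowstyle="->", color=color))
ax.set_yticks(range(len(schools)))
ax.set_yticklabels(schools)
ax.set_xlim(0, 35)
ax.set_ylim(-1, len(schools))
ax.set_xlabel("Rank (1 = best)")
plt.show()
```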

The connected dot plot below enables users to select two indicators to see the divergence within schools along those indicators. For instance, one could select NRC ranking and Policy Journals to see how large a divergence there typically is between a school’s rankings on these two measures. The longer the line between the two dots, the greater the divergence. This can also help visualize how wide the general divergence is across all the schools on those two indicators.

The following heat map visualizes each school’s score for each individual indicator. Darker shades of blue represent a higher score for a given school on that measure. When one hovers over a rectangle, the name of the school, the indicator, and the ranking of that school on that indicator appear. The schools are listed in order of their NRC ranking.
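A static approximation of such a heat map can be produced with a few lines of matplotlib; the schools, indicators, and random scores below are stand-ins for the real data.

```python
# Sketch of a school-by-indicator heat map with a blue colormap
# (darker blue = higher score). Data here are random stand-ins.
import numpy as np
import matplotlib.pyplot as plt

schools = ["School A", "School B", "School C"]
indicators = ["NRC", "Academic Books", "Policy Journals", "Media Profile"]
scores = np.random.rand(len(schools), len(indicators))

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="Blues")
ax.set_xticks(range(len(indicators)))
ax.set_xticklabels(indicators, rotation=45, ha="right")
ax.set_yticks(range(len(schools)))
ax.set_yticklabels(schools)
fig.colorbar(im, label="Score")
plt.tight_layout()
plt.show()
```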

The final data visualization is a data table where users can search for a school to see how it ranks across all 17 measures. Like the first data table, schools can be sorted by a specific indicator by clicking on the name of the indicator at the top of the column. This more detailed data table makes it possible to see how rankings on finer-grained measures produce the rankings on aggregate measures like Policy Engagement.

Key Findings

  • If we highlight those departments that rose or fell more than 10 places in our rankings from their NRC position, we see that only 25 percent (13 of 52) of the top 50 departments remain unaffected. Focusing on a 10-place shift strikes us as a conservative criterion for impact. Such a move often pushes schools into or out of the top 10.
  • The effect of including broader criteria of scholarly relevance like books and policy engagement is striking. Half of the schools (26 of 52) rose or fell in the rankings as a result of including books. Even more significantly, nearly 56 percent of departments (29 of 52) changed position as a result of factoring in the Policy Engagement Index. Seventy percent of the original top 10 were affected in one way or another by these changes in approach to rankings.
  • The few top 10 departments that do well across the board include MIT, Princeton, and UC-Berkeley.
  • Big losers when books are factored in include Stanford, Harvard, Columbia University, and UC-San Diego. They are replaced by Duke, Northwestern, Indiana University, Stony Brook University, and UC-Santa Barbara. When looking at policy engagement, Harvard, Yale, and NYU fall out of the top 10 and are replaced by Georgetown University, George Washington University, University of Maryland, and University of Pennsylvania.
  • Geography clearly seems to be a factor in broader relevance, with schools inside the Washington, D.C., beltway moving up considerably. Conversely, departments located in small college towns tend to underperform in this regard, as is evident from the low Policy Engagement scores of the University of Rochester (41), Penn State (43), Texas A&M (47), and Florida State (49).

We have many different measures of scholarly impact and policy engagement. In addition to assessing the international relations sub-field of political science, we have also collected data on the subfields of comparative politics, American politics, and political theory.

Our point is not that any one of these particular measures is necessarily better than another. Rather, we regard excellence as encompassing a broader range of scholarship as well as impact beyond the Ivory Tower. So we invite readers to explore the different ways of looking at these data and to judge for themselves what they tell us about academic rankings and which measures matter most.

Why Relevance Matters

Foreign policy decision-makers, the media, and the general public should care about academic rankings because they are increasingly pushing international relations scholars to the sidelines in policy debates. In fields of potential interest to foreign and defense policymakers—particularly political science’s subfield of international relations—academic rankings encourage scholars to focus on narrowly academic indicators of excellence and ignore the concerns of policymakers, members of Congress, and the citizens who pay the taxes that ultimately fund university research. In other words, the state-of-the-art approach to ranking American universities disadvantages precisely the sort of scholarship that is of most use to policymakers and which speaks to the broadest concerns of non-specialists.

Indeed, the influence of rankings can be pernicious when they are exclusively focused on things of narrowly academic interest. Growing numbers of academic leaders have become convinced that universities rise in the rankings by following dominant intellectual fashions, not by charting their own distinctive course. They increasingly believe that they have to conform to a single model of intellectual excellence to get ahead.

There are at least three reasons why we should care if academic political science departments define excellence narrowly and rank themselves into irrelevance. First, despite all of the incentives in the academy for professors to retreat into intellectual isolationism and otherwise hide in the Ivory Tower, there is still much useful work being done in the academy that should be of interest to policymakers and the rest of society.

Second, if civic-mindedness is not sufficient to encourage scholars to think about how they can become more relevant, self-interest should be. In tight economic times, federal support for universities is coming under renewed Congressional scrutiny.

Finally, there is a grudging but growing recognition, even among scholars committed to using the most sophisticated research methodologies, that theory and practice are inextricably linked. Good theory contributes to good policy and bad theory can hinder good policy. At the same time, engagement with real-world problems produces better theory. The case for a broader approach to scholarly excellence that includes real-world relevance also rests on the contribution relevance can make to better scholarship.

In conjunction with New America, we are presenting our revised rankings of international relations programs in the top 50 political science departments and the data from which they are drawn. Our hope is that they will not only generate further reaction among scholars and university administrators but also prompt the philanthropic community, think tanks, journalists, and policymakers to use them to identify which of the top 50 political science departments are doing the broadest and most useful scholarship. We anticipate that this exercise will contribute to broadening the definition of scholarly excellence beyond traditional indices to include the societal impact of the discipline.

What Is to Be Done?

Three implications follow from our conclusions: First, we need to recognize that our current approaches to academic rankings encourage professors to, in the words of Hans J. Morgenthau, “retreat into the trivial, the formal, the methodological, the purely theoretical, the remotely historical—in short, the politically irrelevant.” Employing a broader set of criteria should encourage them to open the windows of the Ivory Tower and cast their scholarly gaze toward the wider world. Simultaneously, our new rankings give non-academics an opportunity to peer in and see which schools and departments are reaching outside their walls to engage with their fellow citizens.

Second, policymakers, philanthropists, journalists, and the general public should take all rankings (including ours) with a grain of salt. Even the most sophisticated rankings, like those of the NRC, have flaws and biases and often miss those things non-academics care most about.

Finally, assessing departments and universities using narrow and parochial criteria of academic excellence is likely to further disconnect scholars from the concerns of policymakers and the rest of their fellow citizens. In becoming so disconnected, the social sciences violate their implied social contract with society by failing to address the concerns of the wider community. When we ignore those concerns, we need to ask whether we are simply ranking irrelevance.


Design and Data Visualization: Ellie Budzinski and Loren Riesenfeld

Corrected at 10:50 a.m., November 27: Due to an editing error, the second-to-last bullet of the Key Findings previously said that Penn State rose in the policy engagement rankings. It is the University of Pennsylvania that rose in the ranking.

More About the Authors

Peter Campbell

Department of Political Science, Baylor University

Michael Desch

Department of Political Science, Notre Dame
