Abstract
Rankings and bibliometrics have become increasingly common in science policy and university management. All too often, such ‘rankings’ are taken at face value, despite their known methodological shortcomings. In this paper, we analyse the effects of measurement assumptions, data availability, and performance definitions on the ranking of public administration departments.
In public administration, a number of publications have concentrated on compiling lists of ‘best journals’ (Forrester & Watson, 1994; McLean et al., 2009), most prolific scholars, or best departments, on the effects of collaboration on productivity (Corley & Sabharwal, 2010), or on the drivers of excellence (Schroeder et al., 2004). Still, the topic has received far less attention than it has in other fields, and public administration is often combined with political science in existing analyses (Garand & Giles, 2003). An important reason for this is the relatively small size of the discipline and, more importantly, the ongoing discussion about its nature and identity. One element of this discussion is whether public administration is a scientific or a professional discipline (Wright, 2011; Rodgers & Rodgers, 2000). Another is whether it should be considered a separate field or an interdisciplinary one; in the latter view, debate exists over which discipline dominates the field: political science, organisational behaviour, law, sociology, or management. Different positions in these debates may lead to quite different ‘rankings’ of best-performing departments, because the underlying performance criteria differ.
In our paper, we analyse who published in English-language public administration journals in the period 2008-2010, using both Web of Science (WoS) and Scopus. The WoS analysis covers 4,691 articles in journals with an SSCI Impact Factor; the Scopus analysis uses a different subset of journals, based on the research of Bernick & Krueger (2010).
We show how public administration departments worldwide compare, and explain the differences between the rankings by looking at data selection, data quality, and the different definitions of ‘performance’ used in the two analyses. We end by highlighting the methodological and data constraints of such comparisons, and we warn against the homogenizing tendencies inherent in such exercises, as well as against the resulting strategic behaviours that may undermine the specific nature of our field.
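To make concrete how strongly such rankings depend on measurement assumptions, the sketch below (in Python, using hypothetical article records; real WoS or Scopus exports would first require extensive cleaning and normalisation of affiliation strings) illustrates the kind of counting exercise that underlies departmental rankings. Merely switching between whole counting and fractional counting of co-affiliated articles reverses which department comes out on top.

from collections import defaultdict

# Hypothetical records: each article lists the departments of its authors.
# Dept X publishes two single-department articles; Dept Y appears on three
# three-department collaborations.
articles = [
    {"title": "a1", "departments": ["Dept X"]},
    {"title": "a2", "departments": ["Dept X"]},
    {"title": "a3", "departments": ["Dept Y", "Dept Z", "Dept W"]},
    {"title": "a4", "departments": ["Dept Y", "Dept Z", "Dept W"]},
    {"title": "a5", "departments": ["Dept Y", "Dept Z", "Dept W"]},
]

def rank(records, fractional=True):
    # Whole counting: every listed department gets 1 point per article.
    # Fractional counting: an article with n departments gives each 1/n.
    scores = defaultdict(float)
    for rec in records:
        depts = rec["departments"]
        weight = 1.0 / len(depts) if fractional else 1.0
        for dept in depts:
            scores[dept] += weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank(articles, fractional=False))  # whole: Dept Y (3.0) ranks above Dept X (2.0)
print(rank(articles, fractional=True))   # fractional: Dept X (2.0) ranks above Dept Y (1.0)

The same sensitivity applies to the choice of journal set (SSCI versus the Bernick & Krueger list) and the time window: each such decision amounts to a de facto definition of ‘performance’.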
References
Bernick, E. & Krueger, S. (2010). An assessment of journal quality in public administration. International Journal of Public Administration, 33(2): 98-106.
Corley, E.A. & Sabharwal, M. (2010). Scholarly collaboration and productivity patterns in public administration: Analysing recent trends. Public Administration, 88(3): 627-648.
Forrester, J.P. & Watson, S.S. (1994). An assessment of public administration journals: The perspective of editors and editorial board members. Public Administration Review, 54(5): 474-482.
Garand, J.C. & Giles, M.W. (2003). Journals in the discipline: A report on a new survey of American political scientists. PS: Political Science and Politics, 36(2): 293-308.
McLean, I., Blais, A., Garand, J.C. & Giles, M. (2009). Comparative journal ratings: A survey report. Political Studies Review, 7(1): 18-38.
Rodgers, R. & Rodgers, N. (2000). Defining the boundaries of public administration: Undisciplined mongrels versus disciplined purists. Public Administration Review, 60(5): 435-445.
Schroeder, L., O’Leary, R., Jones, D. & Poocharoen, O-o. (2004). Routes to scholarly success in public administration: Is there a right path? Public Administration Review, 64(1): 92-105.
Wright, B. (2011). Public administration as an interdisciplinary field: Assessing its relationship with the fields of law, management and political science. Public Administration Review, 71(1): 96-101.