Introduction

In 2020, the implementation of Project 5–100 ended in Russia. Three years have passed, but the results are still under heated discussion. Opinions are highly contradictory: some note positive changes in the publication output of Russian universities, while others complain about the lack of institutional changes and emergent effects, as well as the unresolved funding gap with the world’s leading benchmarks. However, everyone agrees on one point: the main declared goal of the project was not achieved, and five Russian universities did not enter the top 100 of global university rankings. The project’s successor, the Priority-2030 program, formally moved away from the ranking race. On the other hand, Priority-2030 inherited some indicators from Project 5–100 (albeit in modified form), which, in turn, had been designed for promotion in global university rankings. Therefore, the issue of rankings traditionally occupies an important place in Russian academic discourse.

Despite significant differences, there is a strong relationship between global university rankings and research evaluation. The emergence of global rankings is closely related to the neoliberal paradigm of higher education, within which so-called excellence initiatives have emerged since the end of the last century. By excellence initiatives, we mean a special type of policy initiative aimed at increasing the global competitiveness of a national higher education system and identifying an elite group of national universities. The appearance of the first rankings is closely connected with the excellence initiatives in East Asia (primarily in China). In turn, the emergence of the first Academic Ranking of World Universities (ARWU) triggered a wave of excellence initiatives in Europe.

However, rankings focus only on what they measure, which may not be enough to evaluate a system as complex as a university, and they may divert attention from the factors that really matter. Are the indicators of university rankings socially significant? Should society be focused on getting a group of national universities into the top 20 (50, 100, 200, etc.) of world rankings? In my opinion, the question is even broader: are universities just commercial enterprises for the production of knowledge, or do they still have a social mission to fulfill (as understood, for example, by Bernal (1939) and Draper (1964))? Besides, it remains an open question whether the practice of using university rankings in research assessment can be called responsible in terms of the foundational documents on research assessment (the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto, the Hong Kong Principles for Researcher Evaluation, etc.).

The goal of this article is to provide a thorough understanding of how global university rankings are interpreted and understood in Russia. This is achieved through a rigorous and comprehensive review of the Russian-language academic literature, which sheds light on the various perspectives emerging from Russia regarding the use and interpretation of global university rankings. The selection of Russian literature for this study was purposeful and strategic: it aligns with the objectives of the flagship excellence initiative Project 5top100 (2013–2020), which was heavily influenced by university rankings. In fact, the project derived its name from the goal of having five Russian universities ranked within the top 100 globally by the year 2020. Besides, we analyze the part of the Russian academic discourse that is largely closed to the international readership. We also situate the problem within global trends.

The article is organized as follows. In the next section, we provide the necessary background for the problems under consideration. The methodology is described in the Section “Data and methods”. The Section “Global perspective” provides a brief snapshot of the global perspective based on the key statements about responsible research assessment. The Section “Review of Russian literature on university rankings” provides a systematic review of the Russian-language literature on the problem. In the concluding section, we conceptualize the main findings of the paper with implications for further research and practice.

Background

The emergence of global university rankings

The emergence and wide distribution of rankings at the beginning of the 21st century was due to a number of preconditions. Foremost among them, globalization has penetrated all public spheres, including higher education. In higher education, globalization was expressed in the growing trend of student and academic mobility between countries (in Europe, this was greatly facilitated by the Bologna system). Accordingly, students and faculty needed a transparent tool for comparison and decision-making. At the same time, the competitive situation for universities changed: for the first time, many universities concluded that they must consider international competitors and needed a reliable international benchmarking tool. Thus, rankings were initially intended as marketing and benchmarking tools.

How did rankings become part of political discourse and national strategies? A direct role here is played by the neoliberal paradigm of higher education, characterized by the marketization, industrialization, and commercialization of higher education in a global context (Yuhan, 2022). The university is turning into an enterprise that produces knowledge, or more specifically, human resources and knowledge assets (intellectual property and publications). Under conditions of knowledge capitalism (Kochetkov and Kochetkova, 2021), the effective accumulation of intellectual resources (including talent acquisition) has become a priority for universities, national systems of higher education, and states as a whole.

The neoliberal paradigm, together with shifts in the global landscape of higher education (the desire to move away from a unipolar system), has resulted in the emergence of excellence initiatives. The first such initiative appeared in China (Project 211, later followed by Project 985). The aim of these policy initiatives was to achieve “world-class” status for Chinese universities, and this task required a benchmarking tool. Not surprisingly, the first global university ranking, the Academic Ranking of World Universities (ARWU), was released in China by Shanghai Jiao Tong University in 2003. In turn, the release of the Shanghai ranking provoked a wave of excellence initiatives in Europe (Hazelkorn and Mihut, 2021). Therefore, these two phenomena are closely related.

Hazelkorn and Mihut (2021) listed 25 global university rankings. We will not dwell on each of them in detail; given the scope of this study, we focus on the emergence of the Big Three (ARWU, THE, and QS), the Leiden Ranking, and two rankings that use an innovative set of metrics, U-Multirank and the Three University Missions. Besides, Webometrics was added to this list because its special focus distinguishes it from other rankings (Footnote 1).

ARWU came out against the background of ongoing excellence initiatives in China (Projects 211 and 985), so its main task was to quantify the concept of a “world-class university”. The ranking is characterized by a stable methodology that relies exclusively on verifiable data. It is also distinguished by the presence of “luxury” indicators, such as the number of Nobel and Fields Prize winners among university staff.

The first QS World University Rankings were published jointly by the British company Quacquarelli Symonds and Times Higher Education magazine. Compared to ARWU, the entry threshold was lowered owing to the abandonment of “luxury” indicators. At the same time, 50% of the final index depended on reputation surveys, and the QS ranking came under fire from the academic community from the very beginning. Therefore, in 2010, Times Higher Education magazine released an updated Times Higher Education World University Rankings; thus, there was a “divorce” of QS and THE. It is noteworthy that THE initially switched to Web of Science publication data but returned to cooperation with Elsevier in 2014. Compared to the joint QS-THE ranking, the changes addressed the overweighting of subjective reputation surveys and the skew of publication metrics towards the hard sciences (Lim, 2018). Besides, an indicator of industry income was added. To the best of our knowledge, THE is the only Big Four ranking that highlights a university’s innovative activities. Nevertheless, the modest weight of this indicator, 2.5%, is surprising: does a university really devote only 1/40 of its activities to innovation and knowledge transfer? Of course, the answer largely depends on the particular university, but it is still surprising.

The Leiden Ranking is compiled by the Center for Science and Technology Studies of Leiden University (Netherlands). It has several significant differences from the rankings described above. First, it is a purely bibliometric ranking; thus, it is relatively transparent compared to other rankings (Footnote 2). The second difference is the absence of a composite index as such; as a result, the Leiden Ranking is more of a benchmarking tool. The third difference is the calculation of indicator values in two versions: size-dependent and size-independent. The size-independent version allows a user to neutralize the scale effect. The ranking methodology is constantly evolving; in particular, open access and gender indicators were added quite recently.
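To illustrate the difference between the two versions, consider the ranking’s impact indicators (a simplified sketch based on the published methodology; notation ours): the size-dependent indicator counts a university’s highly cited output, while the size-independent one normalizes it by total output,

\[
P_{\text{top }10\%} = \#\{\text{publications among the 10\% most cited in their field and year}\}, \qquad
PP_{\text{top }10\%} = \frac{P_{\text{top }10\%}}{P},
\]

where \(P\) is the university’s total publication count. For example, a large university with \(P = 10{,}000\) and \(P_{\text{top }10\%} = 1{,}200\) outperforms a small one with \(P = 1{,}000\) and \(P_{\text{top }10\%} = 150\) on the size-dependent indicator but loses on the size-independent one (12% against 15%).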

U-Multirank was developed by an international consortium in 2009 as an alternative to ARWU and THE-QS. In 2011, the European Commission decided to fund its implementation, and the ranking methodology stabilized in 2013. It is based on a very wide range of indicators, among which we especially note the group characterizing regional engagement. Like the Leiden Ranking, it is rather a benchmarking tool: the user can compare universities in any of the areas by country, region, or subject area.

The Three University Missions is the youngest ranking on the list. It is issued by Moscow State University (Russia). In 2021, the ranking covered 1650 universities from 97 countries, making the Three University Missions one of the most comprehensive global university rankings. The ranking is based on the assessment of universities across three key missions: education, science, and community engagement. Reputation surveys were excluded; instead, data from universities’ public records were used.

The Webometrics ranking by Cybermetrics Lab (Spain) is released twice a year. It is by far the most global ranking in terms of coverage (the latest edition includes 11,998 universities). Half of the composite index depends on the number of external links to the university website; the other half depends on publications and citations. The small number of indicators (three) and their content distinguish this ranking from the others.

Thus, we can classify university rankings in at least two ways: (1) the calculation of a composite index and (2) the use of reputation surveys (Fig. 1).

Fig. 1 Classification matrix of rankings.

Most ranking providers also calculate industry or subject rankings (Webometrics is the only exception). We believe that subject rankings are more useful than institutional rankings because an “average” university may not excel in all subject areas, but it may well excel in one or two. Besides, QS and THE release regional rankings as well as various “special” products (QS Top 50 Under 50, THE Young University Rankings, etc.). In my opinion, the latter are purely marketing products, even more so than the main THE and QS rankings.

We did not include the Best Global Universities Ranking (USNWR) in the review because it is popular mainly in North America; that said, American universities are guided mainly by this ranking in their strategies. It should also be noted that the incomplete overlap between different ranking systems does not allow us to speak of the existence of an absolute top 100 in the world (Moed, 2017).

University rankings can be viewed in the context of research evaluation: a number of governments tend to evaluate the effectiveness of project funding by ranking positions (again, the 5top100 initiative in Russia is an example). Indeed, a significant part of the indicators of global university rankings relate to research evaluation. These indicators can be divided into three subgroups.

The first subgroup includes traditional publication and citation metrics for which the data source is Web of Science or Scopus (the exception is Webometrics, which uses Google Scholar data). This approach is attractive for its apparent transparency (though, in fact, this is illusory) and simplicity. At the same time, it raises a number of questions. First, can publications in journals with a high impact factor, or citations, be considered an indicator of research quality without content expertise? Second, an orientation towards publication and citation counts stimulates manipulative behavior. For instance, Don State Technical University (Russia) received a higher citation score in the THE-2022 ranking than the University of Cambridge (96.9 points against 96.2 points). A preliminary analysis revealed numerous university publications in “predatory” journals, a significant part of which have already been removed from Scopus. In addition, the citation score was achieved through increased citation of conference papers (THE uses a normalized citation rate); in other words, the university skillfully exploited the weaknesses of the ranking methodology.
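For context, field-normalized citation scores of the kind THE relies on are computed roughly as follows (a simplified sketch in the spirit of the well-known MNCS indicator; the exact procedure of THE’s data provider is not fully public):

\[
\mathrm{NCS}_i = \frac{c_i}{e_{f(i),\,y(i),\,d(i)}}, \qquad
\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{NCS}_i,
\]

where \(c_i\) is the number of citations received by publication \(i\), and \(e_{f,y,d}\) is the world-average number of citations for publications of the same field \(f\), publication year \(y\), and document type \(d\). The denominator is precisely what makes the indicator exploitable: for document types with a low expected baseline \(e\) (conference papers in many fields), even a modest absolute number of citations yields a very high normalized score.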

The second subgroup of indicators comprises the reputation surveys used by THE and QS. Although they are devoid of the shortcomings of the previous approach, the problem of bias appears: the composition of the sample of experts (by nationality, gender, and age) significantly affects the final result. Besides, when conducting surveys, there is no way to avoid the Matthew effect: with comparable research quality, experts are more likely to vote for a better-known university.

Finally, the third subgroup consists of “luxury” indicators, such as the number of Nobel and Fields winners among university staff. Such indicators are objective; however, their value for evaluating “average” universities is significantly limited.

Excellence initiatives in Russian higher education

The 5–100 or 5top100 project (“5-100 Russian Academic Excellence Project”, 2018) was a government policy initiative aimed at supporting the largest Russian universities. It was launched by the Russian Ministry of Education and Science in May 2013, in accordance with the Presidential Decree of the Russian Federation No. 599, “On measures to implement state policy in the sphere of education and science.” The primary objective of the project was to enhance the global competitive position of a select group of leading Russian universities in the field of educational services and research programs. Specifically, the project aimed to have at least five universities from its participants ranked within the top 100 in either the QS, THE, or ARWU rankings.

It is noteworthy that initially, the government had set the project goal as achieving positions within the top 100 of institutional rankings; as the project progressed, however, subject rankings also became a criterion of the project’s success. The project was designed to span the period from 2013 to 2017, with total funding of 80 billion rubles (approximately 1 billion euros; in 2013, this amount corresponded to 2.5 billion euros).

Initially, 15 universities were selected to participate in the project out of a total of 54 applications. In 2015, an additional six universities were included (“Six new Russian universities selected for the project 5-100”, 2015). These decisions were primarily based on the evaluation of the applicants’ strategic transformation programs, which were subsequently incorporated into the winners’ roadmaps. Participants were required to report annually on their progress to the Council on Global Competitiveness Enhancement of Russian Universities among Global Leading Research and Education Centers. The assessment of each university comprised both quantitative progress toward its stated goals and expert evaluations. Since 2017, universities have been divided into three groups based on their final scores, with funding allocated depending on the group they belong to (“Universities of the 5–100 Program were Divided into Three Groups,” 2017).

Recent years in Russia have seen a prominent focus on the results of the 5top100 project and the efforts to advance national universities in global university rankings. These topics have been extensively covered by prominent business media outlets. For instance, Forbes Education (2020) gathered expert opinions regarding the program. It is not surprising that the majority of these opinions were positive, considering that most of the experts interviewed were participants in the 5top100 project or its successor, Priority 2030. Notably, however, the Vice-Rector of the private Russian Economic School, Zarema Kasabieva, pointed out some negative aspects of the project. One significant drawback she highlighted was the absence of a ripple effect on the national higher education system as a whole.

The report released by the Accounts Chamber on the implementation of the 5top100 project sparked considerable discussion and debate (“Bulletin of the Accounts Chamber”, 2021). The authors of the report acknowledged the positive effects, such as an increase in the number of publications from universities participating in the project and their improved standing in global university rankings, particularly in specific fields. Auditor Dmitry Zaitsev, the author of the report, believes that the project prompted universities to radically reconsider their roles, functions, and objectives. However, the author also had to acknowledge that the primary goal of the project, originally formulated as the entry of five Russian universities into the top 100 institutional rankings, was not accomplished.

We have already mentioned that the 5top100 project was continued within the framework of the Priority-2030 program. This ongoing policy initiative is also aimed at promoting the development and advancement of national universities. One distinct aspect of the program is its rejection of global university rankings as the sole basis of evaluation. By moving away from a narrow reliance on global rankings, the program aims to foster a more holistic and context-specific approach to assessing the performance and progress of Russian universities (for more information about this initiative, see Kochetkov, 2022). However, a significant proportion of the evaluation in Priority-2030 still depends on quantitative indicators that are also used in ranking methodologies, above all the publication score based on titles indexed in Scopus or Web of Science.

Data and methods

This paper is an integrative literature review. The methodology begins with an examination of the key pieces of gray literature concerning responsible research evaluation, specifically in relation to global university rankings and their application in research evaluation. This step involves reviewing relevant reports, policy documents, and blog entries in order to gather essential insights and perspectives. The selection of literature is based on our expert judgment informed by the existing body of literature (e.g., Rushforth and Hammarfelt, 2022).

Following the review of gray literature, the next step involves an extensive exploration of the Russian-language literature on university rankings. This is done to address any potential gaps in the existing international knowledge base and provide valuable information and perspectives for a wider readership. The objective is to gather relevant research articles in Russian that contribute to the understanding of university rankings.

To review the Russian literature on university rankings, we used the Russian Index of Scientific Citation (RISC), the largest database of publications in Russia. The search was restricted to the document type “journal article” and used the key phrases “rankings of universities” and “university rankings,” taking morphology into account. The query returned 199 results. In the next stage, we filtered out publications that did not relate to global rankings or that described the case of a particular university, as well as anonymous publications and translations into Russian. The final sample comprised 64 publications.
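For transparency, the screening stage can be expressed as a simple filter. The sketch below is purely illustrative: RISC exports have no fixed public schema, all field names here are hypothetical, and the actual inclusion decisions were made by reading each record.

```python
# Illustrative sketch of the screening stage (hypothetical field names;
# the real decisions were made by manually reading each record).
import csv

def screen(path: str) -> list[dict]:
    kept = []
    with open(path, newline="", encoding="utf-8") as f:
        for rec in csv.DictReader(f):
            if rec.get("doc_type") != "journal article":
                continue  # restrict the search to journal articles
            if rec.get("anonymous") == "yes" or rec.get("is_translation") == "yes":
                continue  # drop anonymous items and translations into Russian
            if rec.get("scope") in {"single-university case", "not global rankings"}:
                continue  # drop items outside the scope of global rankings
            kept.append(rec)
    return kept

sample = screen("risc_export.csv")
print(len(sample))  # in our case, 199 query hits were reduced to 64
```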

By employing this integrative approach, the literature review incorporates both international gray literature and Russian-language literature to ensure a comprehensive analysis and synthesis of existing knowledge on the topic of responsible research evaluation and global university rankings.

Global perspective

Now we propose to look at how university rankings, or rather their use in research evaluation, are treated in non-academic literature. In 2006, the second conference of the International Ranking Expert Group (IREG) established the Berlin Principles on Ranking of Higher Education Institutions (2006), which set rules in four areas: purposes and goals of rankings; design and weighting of indicators; collection and processing of data; and presentation of ranking results. Unfortunately, this framework remained mostly on paper. Next, let us look at the issue from the point of view of the foundational documents on research assessment. Table 1 presents the key results of this analysis.

Table 1 Key controversies.

Waltman et al. (2020) suggested ten principles for ranking universities responsibly. These principles are grouped into three areas: design, use, and interpretation of university rankings. INORMS Research Evaluation Working Group (2022) also proposed a set of principles for responsible ranking, grouped into four domains: good governance, transparency, measuring what matters, and rigor.

However, we must admit that the practice of development and use of university rankings is still far removed from the principles outlined in the documents discussed above. It is interesting to note that use and interpretation are outside the responsibility of ranking compilers. Therefore, the discussion of rankings and their use (especially in the context of research evaluation and policy initiatives) should involve not only ranking compilers and universities but also the general public and academics. It is extremely important to involve governments in this discussion because, in many countries, the higher education system is predominantly state-owned, and it is government bodies that form the assessment standards (this is especially true for East Asian countries and Russia).

An important development is the Agreement on Reforming Research Assessment (CoARA, 2022), inspired by the European Commission; as of March 12, 2023, it had been signed by 487 universities. The document clearly states that rankings should be avoided in research assessment. At the same time, the drafters of the agreement admit that rankings may be used for benchmarking purposes, but in such a case, the limitations of the methodology should be acknowledged.

Review of Russian literature on university rankings

Rankings, 5top100 projects, and world-class universities

The mainstream Russian academic discourse considers global university rankings as a tool for assessing and improving the competitiveness of both universities and countries (Leonova et al. 2017). Global university rankings influence not only the development trajectory of individual universities but also the educational policies of entire countries (Puzatykh, 2019).

Moskovkin and Teng (2011) provided country-specific calculations of university publication activities based on the Taiwan ranking. Andreeva and Kisaeva (2011) considered global university rankings a tool for independent review and, at the same time, a means of measuring the quality of higher education. In addition, the authors emphasized the role of rankings in the development of competition between universities (countries).

Rodionov et al. (2013a, 2013b) conducted a comparative analysis of the THE and QS rankings. The authors considered a position in the global university rankings a competitive advantage of the university, offering:

  • Possibility of obtaining additional public funding (e.g., Project 5top100)

  • Academic reputation in the international environment, which attracts foreign students and academic staff

  • Reputation with employers

Rudakova, Polyanin, and Marchenkova also considered international rankings a competitive tool not only for universities but also for national educational systems (Rudakova et al. 2015a, b, c). The number of national universities and their positions in the rankings directly affect the volume of exports of educational services. Moreover, the promotion of national values through the expansion of the presence of national universities in global university rankings is a political tool of “soft power” (Irhin 2013a, b). Ivanov and Ivanova (2015) introduced the concept of “charts (index) power,” which again is a significant component of “soft power.” As a result of the “ranking revolution,” rankings have turned from a convenient tool into an institution that dominates the academic community. The authors also drew attention to the English-language bias resulting from the use of Scopus and Web of Science by almost all ranking producers (Footnote 3).

Tatochenko and Tatochenko (2013) analyzed the relationship between university rankings and tuition fees. Moskaleva (2014a, 2014b) provided an overview of the main international university rankings as well as a number of recommendations for improving the position of the university based on publication indicators. Korzhavina et al. (2016) considered ARWU to be the most transparent and objective ranking reflecting university competitiveness. Moskovkin and Liu (2018) attempted to assess regional competitiveness through the positions of regional universities in the Webometrics ranking, as well as in the national rankings of Expert and Interfax.

Several authors have studied the problems of increasing the competitiveness of Russian universities and the national higher education system as a whole in the context of implementing the Project 5top100 excellence initiative (Arefiev, 2014, 2015; Guzikova and Plotnikova, 2014; Kushneva et al. 2014). This is quite natural because the main goal of this initiative was the entry of five Russian universities into the top 100 global rankings. In other words, it is the rankings that are considered an indicator of competitiveness. In the literature, there is also the concept of a “world-class university,” which is also viewed through the prism of university rankings (Pankova, 2015). In general, such studies aim to identify common characteristics of universities in the top 30 (50, 100) of global university rankings (Nikolenko et al. 2014). Based on the 2008 THE ranking, Milkevich (2008) identified the key components of a world-class university:

  • The level of faculty

  • The best students

  • Individual approach to the education process

At the same time, Gazizova (2015), analyzing excellence initiatives in higher education around the world, questioned the relationship between rankings and academic excellence. Bogolib (2016) drew attention to the role of the state in creating world-class universities. Frank (2017) believed that global rankings should be used only by Project 5top100 participants, while other Russian universities should use national rankings. Yudina and Pavlova (2017) argued that global university rankings are a measure of national competitiveness.

Efimov and Lapteva (2017) expanded the concept of a “world-class university” by introducing the concept of a “frontier university”. The authors defined it as “a university that operates at the forefront of development processes: new areas of knowledge, new technologies; social development and human development” (p. 7). They referred to universities such as the University of Berlin, the Higher School of Economics, Singularity University, and some others. The authors also proposed a typology of university rankings:

  • Model 1. Ranking based on the ideal of an academic university with a focus on basic science (QS, ARWU, USNWR).

  • Model 2. Ranking based on the ideal of the university as a center of higher education (Russian ranking Interfax).

  • Model 3. Ranking based on the ideal of the university as a partner for business—a “workforce factory,” a technology and innovation development center (the authors of the article gave the example of Professional Ranking of World Universities, but we would definitely add the Forbes ranking here).

  • Model 4. Ranking based on the ideal of the university as a “social elevator” and center for socially significant projects (ranking of the Washington Monthly magazine).

Kupriyanova (2015) as well as Malishko and Yaremenko (2016) provided overviews of the methodologies of the most common global university rankings. Vertakova et al. (2017) proposed conducting KPI monitoring of university performance based on the indicators of global university rankings. Based on benchmarking, Antyukhova (2020) concluded that the QS ranking is closest to “reference” investment comparison procedures. The author argued that distortions in a ranking, regardless of the reasons for their occurrence, can be mitigated by increasing the significance not of the ranking itself but of the information posted by the university in open sources.

Bolsherotov (2020) provided an overview of three global university rankings (ARWU, THE, and QS) and analyzed case studies of successful universities; in particular, the author cited the California Institute of Technology and the role of investment by Chinese companies in promoting national universities in international rankings. A high indicator of academic reputation, on the one hand, is one of the basic determinants of a university’s position in the global market of educational services; on the other hand, it acts as a kind of stigma for leading universities, reducing the need for innovation and improvement of the educational process (Antonova and Sushchenko, 2020). When forming or adjusting strategic development programs, universities should focus on achieving leading positions not only in institutional rankings but also in subject and industry rankings.

Sineva and Tryapitsyn (2021) conducted a statistical analysis of QS ranking results using correlation analysis, cluster analysis, and principal component analysis (PCA). The authors showed that statistical patterns differ significantly for different groups of universities (e.g., classical and technical, large and small); the profile of the university should determine the choice of a particular strategy. Blazhevich et al. (2021) proposed describing the competitive interaction of universities included in a world university ranking by solving the equations of population dynamics (the Lotka-Volterra equations), with the integral (overall) ranking scores serving as the phase variables; a schematic form of the model is given after this paragraph. The approach reduces this system to a set of independent Verhulst equations, which have analytical solutions exponential in time, and then passes to the stationary solutions, i.e., the values of the phase variables as time tends to infinity. Given the growth rate of the overall score, the symmetric coefficients of interuniversity competition can be uniquely determined for no more than three competing universities, such that the stationary solution of the resulting system reproduces the known values of the integral ranking index for the selected universities, with the integral indicators of the previous ranking edition taken as initial values. Zhang (2021) also proposed a method for predicting the position of a university in a ranking based on time-series analysis, using the ARWU ranking as an example. Ufimtseva and Begicheva (2021) developed a predictor model for university rankings based on the system dynamics method, which was tested in the AnyLogic simulation environment.
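For reference, the model has the standard form of the Lotka-Volterra competition equations (our notation; the authors’ exact parameterization may differ):

\[
\frac{dx_i}{dt} = r_i x_i \left(1 - \frac{x_i + \sum_{j \neq i} a_{ij} x_j}{K_i}\right), \qquad i = 1, \dots, n,
\]

where \(x_i\) is the overall ranking score of university \(i\), \(r_i\) its growth rate, \(K_i\) its carrying capacity, and \(a_{ij} = a_{ji}\) the symmetric coefficients of interuniversity competition. Setting \(a_{ij} = 0\) decouples the system into independent Verhulst (logistic) equations with the analytical solution

\[
x_i(t) = \frac{K_i}{1 + \left(K_i / x_i(0) - 1\right) e^{-r_i t}},
\]

which tends to the stationary value \(x_i(\infty) = K_i\). The competition coefficients are then fitted so that the stationary solution of the coupled system reproduces the known overall scores.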

Neudachin (2017) conducted a correlation analysis of five different rankings and found several relationships between them; in particular, the QS and THE rankings strongly correlated with each other. The author also noted that the correlation between the top 100 of different rankings was lower than when using the full sample. A team of authors from Belarusian State University conducted a correlation analysis of the Moscow International Ranking Three University Missions, RUR (Round University Ranking, Footnote 4), QS, THE, and Webometrics (Gaisyonok et al. 2018). The results of the Three University Missions ranking moderately correlated with the results of the RUR ranking; the correlation with Webometrics was low. The correlations of QS with the THE and Webometrics rankings were modest. The hypothesis of a systemic dependence of a university’s average position in the Webometrics ranking on its profile was also confirmed (Gaisyonok et al. 2019).
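Such rank-correlation analyses are easy to reproduce. The minimal sketch below uses synthetic scores (the published ranking tables are not reproduced here) and also illustrates why correlations computed on a top-100 subsample come out lower than on the full sample: truncating to the top of one ranking restricts the variance.

```python
# Minimal sketch of a rank-correlation analysis between two rankings.
# The scores are synthetic; a real analysis would load published tables.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 500
quality = rng.normal(size=n)                        # latent "quality" of n universities
score_a = quality + rng.normal(scale=0.5, size=n)   # overall score in ranking A
score_b = quality + rng.normal(scale=0.5, size=n)   # overall score in ranking B

rho_full, _ = spearmanr(score_a, score_b)

# Restrict to the top 100 of ranking A: range restriction typically
# lowers the correlation, as Neudachin (2017) observed.
top100 = np.argsort(-score_a)[:100]
rho_top, _ = spearmanr(score_a[top100], score_b[top100])

print(f"full sample: rho = {rho_full:.2f}; top 100: rho = {rho_top:.2f}")
```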

The problem of university rankings is relevant not only for Russia but also for other countries in the post-Soviet space. Tsyuk (2016) conducted a comparative analysis between Ukrainian and global university rankings. Gaisyonok and Klishevich (2018) provided an overview of the methodology of global university rankings from the perspective of Belarusian universities’ participation.

Issues of promotion of national universities in global rankings

One of the main problems analyzed in the Russian-language scientific literature is the low position of Russian universities in global university rankings. Pugach and Zhukavskaya (2012) compared the THE and ARWU international rankings with the national ranking of Russian universities produced by the Interfax group of companies. The authors noted that the Russian ranking focused more on the educational activities of universities, while the international rankings focused on science (Footnote 5). Meanwhile, the number of R&D employees at Russian universities declined significantly in the first half of the 1990s, and despite the positive trend of recent years, Russia is still behind the Soviet Union in terms of the number of researchers. The authors attributed the low positions of Russian universities in global university rankings to this. Similar conclusions were drawn by Shestak and Shestak (2013), who prioritized the anglicization of scientific and educational activities in Russian universities. Among the existing problems, the authors also noted the outdated institutional forms that had persisted since Soviet times, as well as the discrepancy between the global and Russian research agendas. Rodionov et al. (2013b) hypothesized that the low positions of Russian universities are associated with a small number of articles in publications indexed in Web of Science, even though Russia is ahead of the United States and China in terms of the number of citations per 100,000 articles. Unfortunately, this approach does not allow us to analyze the underlying causes of the current situation.

Karpenko and Bershadskaya (2013) analyzed the dynamics of Russian universities in the Webometrics ranking (2007–2013). It should be noted that Russian universities are fairly well represented in this ranking (see also Kabakova, 2015; Smyshlyaeva et al., 2016); Russia consistently ranks among the top 10 countries in terms of the number of universities. Saprykina (2018) highlighted that a university’s website has an indirect impact on the promotion of the university in global rankings through the formation of a positive image of the university.

An interesting hypothesis was put forward by Naidanov (2013), who linked the problems of Russian higher education with the orientation of national assessment systems towards processes, while global university rankings were focused on results. Zernov (2014) also analyzed the problems of developing the competitiveness of Russian higher education from the perspective of global university rankings. Tatochenko and Tatochenko (2014) linked the low positions of Russian universities in international rankings to a huge funding gap with leading countries. Islakaeva (2017) arrived at interesting conclusions by comparing the quantitative characteristics of Russian universities and various groups of universities in the QS ranking. The leading universities in the ranking have a significantly smaller number of students than Russian universities, including participants of the 5top100 project. At the same time, the proportion of graduate and undergraduate students at leading universities is much higher, and the total number of students per teacher is several times lower. Accordingly, instead of transforming existing universities, the author proposed creating universities in a “new format,” evenly distributing them throughout the country on the basis of agent-based modeling.

Ebzeeva and Smirnova (2022) analyzed the positions of Russian universities in global university rankings. The authors also compared the methodologies of global and national rankings, highlighting the research orientation of the former, while the latter tend to evaluate educational achievements. The article by Ebzeeva and Gishkaeva (2022) is devoted to the analysis of the methodology of the ARWU ranking and the positions of Russian universities in it.

Critique

Against the backdrop of the geopolitical situation and the sanctions of Western countries, the position that Russia (and not only Russia) should develop its own ranking is becoming increasingly widespread. Back in 2013, Bolsherotov (2013) pointed to Russia’s initially losing position in the “ranking game”. The author accused global rankings of a US bias in evaluation. Besides, the well-established tradition of publishing results in the national language, as well as the “removal” of the scientific sector from higher education, plays a role. The author went so far as to suggest not sending the data of Russian universities to international rankings and not taking the rankings into account. In subsequent articles, Bolsherotov did not repeat such radical positions.

Lazar (2019) criticized the excessive bias towards quantitative performance indicators, the bureaucratization of science and higher education in general, and the use of rankings to assess the effectiveness (performance) of universities in particular. The article is more emotional journalism than science; nevertheless, the author made an important remark on the differences between the Russian and Western systems of science and higher education. In the West, science and higher education have co-existed in universities from the outset, and applied research developed through the active participation of business. In Russia, since the times of the USSR, there has been a “research triad”: the Academy of Sciences, industry design bureaus, and research institutes, while universities and industry institutes have been mainly engaged in training personnel with budget funding. Therefore, historically, both fundamental and applied research largely moved “outside” universities. Over the past ten years, there have been several attempts to break this system and return science to universities (the reform of the Russian Academy of Sciences; the creation of consortiums of universities, the Academy of Sciences, and industry within the framework of the new Russian academic excellence program “Priority 2030”; etc.), but changing entrenched institutional practice takes more time.

According to Pyatenko (2019), global university rankings reflect not the quality of education but the advantage of one educational system (the Western one) over all others. Due to the limited number of bibliometric sources, there is an English-language bias in the assessment. Besides, the author noted that high scientific performance does not necessarily correspond to high education quality. Moreover, the rankings do not correlate with indicators of employer demand for graduates. We have not been able to find empirical evidence for the latter statement; we believe this is because, if everyone (including employers) perceives a position in the rankings as an indicator of quality, it is not so important whether it really is one. The well-known “Thomas theorem” states: “if men define situations as real, they are real in their consequences” (Thomas, 1928, p. 572).

Lutsenko (2015) proposed a methodology for ranking universities based on automated system-cognitive analysis (Footnote 6) and the Eidos software. Shiryaev (2015), on the contrary, proposed creating a national ranking that would replicate the global rankings in key aspects but would include a larger number of national universities. A similar idea was put forward by Tatochenko (2016), who proposed strengthening the ranking positions of universities through the publication activity of graduate students. Khalafyan et al. (2016) proposed a methodology for ranking universities based on a multidimensional metric space: the similarity (difference) of universities is determined by the distances between them; the smaller the distance, the greater the similarity (see the formula after this paragraph). Vorobyov (2016) argued that the use of global rankings as benchmarks is inconsistent with the country-specific goals of the national economy. Sandler et al. (2019) noted that the use of a system of indicators characteristic of the international model of the university does not correspond to the role of the university as a driver of regional economic development.
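In such an approach, each university is represented by a vector of (normalized) indicator values, and proximity is measured by a distance, for example the Euclidean one (our illustration; the authors’ exact metric may differ):

\[
d(u, v) = \sqrt{\sum_{k=1}^{m} \left(x_{uk} - x_{vk}\right)^2},
\]

where \(x_{uk}\) is the normalized value of indicator \(k\) for university \(u\); the smaller \(d(u,v)\), the more similar the two universities.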

Burtseva (2018) formulated the prospects for the development of national ranking systems (see also Nikiforova and Burtseva, 2019). Grishankov (2018) emphasized that the Three University Missions ranking eliminates the bias towards English-speaking countries that is inherent in global rankings. Eskindarov (2022) considered it expedient to develop the country’s own ranking, the Three University Missions, with more active involvement of the Eurasian Economic Union (EAEU) and other CIS countries; this would meet the development objectives of the Union and help form the human resources capable of realizing its potential in the interests of the development of their countries.

Discussion and conclusion

We have reviewed the major statements on responsible research evaluation with regard to global university rankings as well as the body of relevant Russian-language literature. The integrative methodology employed here aims to fill a gap in the international knowledge base.

There is a consensus in the basic documents on research assessment that global university rankings should not be used to evaluate research. Behind the guise of transparency and objectivity lie many shortcomings that negatively affect all stakeholders in the higher education ecosystem. Why, then, are the rankings still actively used by universities and governments? We cannot answer this question right now, but we can put forward hypotheses that may be the subject of future research:

  • Neoliberal paradigm of higher education. With its spread, market relations have penetrated higher education, accompanied by growth in student numbers and public funding. Society and, more often, governments want to see a return on investment, while rankings create the illusion of a simple quantification of an organization’s performance. Unfortunately, the content (methodology) of the rankings and the impact of their indicators on public welfare are accepted uncritically.

  • Accountability movement. There is a global tendency for funders to require evaluations and performance reporting to support their decisions about fund allocation. As the number of organizations being assessed grows, resources run short, and assessment becomes more and more formal. From this point of view, the rankings appear very attractive as ready-made performance information.

  • Managerialism. Professional managers with a propensity for total control come to lead universities. Because such managers often lack sufficient scientific expertise, there is a demand for simple metrics for evaluation and benchmarking. Rankings fit this canvas perfectly.

It is interesting to note that all these phenomena already existed in the middle of the last century (Draper, 1964). Within the neoliberal paradigm of higher education, rankings are central to political excellence initiatives. At the same time, only the final position in the ranking plays a role; the methodology itself is most often accepted uncritically. Linking ranking positions to funding raises the stakes and simultaneously increases the pressure on the academy (Gadd, 2019).

The vast majority of authors in the Russian-language scientific literature consider university rankings within the discourse of competitiveness; accordingly, a university’s entry into one or another group of a global university ranking is taken as an indicator of competitiveness. High positions in the rankings allow a university to attract the best students and professors (including from abroad) and to receive additional funding through state excellence initiatives (e.g., Project 5top100). Positions in the rankings also increase the value of a diploma in the eyes of employers, although the latter thesis is periodically questioned. On this interpretation, however, rankings are not an indicator of competitiveness but merely marketing tools. The problem is not in the rankings themselves but in their misinterpretation and misapplication.

Also encountered frequently in the Russian-language literature is the concept of a “world-class university,” which is likewise viewed through the prism of a university’s position in the rankings. The concept of a “frontier university” stands out from this series, but to date it has not been sufficiently developed theoretically or empirically. For example, Efimov and Lapteva (2017) argued that such universities operate at the forefront of new areas of knowledge, but this can be said of almost all leading universities in the world.

Among the reasons for the low positions of Russian universities in global university rankings, the authors in the literature that we reviewed identified the following:

  • A huge gap in funding between leading and Russian universities

  • Global rankings are more focused on science, while Russian universities are historically more focused on education

  • In Russia, the evaluation system focuses more on processes than on results

  • Low level of English language proficiency

  • The existence of obsolete institutional forms from Soviet times and the age composition of the professorship

We would certainly add to this list the declining quality of school education.

Criticism of the rankings mainly amounts to calls to replace global rankings with national ones. Noteworthy is the position of Pyatenko (2019), who argues that global university rankings are not an indicator of quality; their task is to assert the superiority of the Western system of higher education. Besides, the performance indicators of global university rankings may not be in line with national and regional development goals.

Heavy dependence on ranking indicators, or quantitative measures in general, raises a number of issues. First of all, the applied sciences are context-dependent. Pressure to publish in Scopus/Web of Science titles leads to limited visibility and the degradation of locally relevant research (on the issues of coverage see, e.g., a recent study by Khanna et al., 2022). Despite the rosy reports of Russian universities moving up in the rankings and increasing their publication output, it suddenly turned out that the country did not have enough technological competencies to produce car paint (Zlobin, 2023) or starter cultures for dairy products (Sukhorukova, 2023). From this point of view, the experience of Ural Federal University in the 5top100 project, which simultaneously strengthened both its global positions and its ties with regional industry, is interesting (Sandler et al. 2020). In parallel with fulfilling its obligations to advance in the rankings under the 5top100 project, the university purposefully elaborated its contribution to the local economy through contract research, the creation of departments together with industrial partners, and participation in the projects of corporate universities (e.g., Ural Mining and Metallurgical Company). This approach is an example of glocal development, but it is rather an exception.

The main problem concerns the validity of rankings in general. The majority of global university rankings, including THE, QS, ARWU, and U.S. News & World Report, are created by profit-driven organizations that generate income through the sale of additional data, consulting services, and subscription-based content to universities (Lim, 2021). Chirikov (2022) conducted an analysis of the ranking positions of 28 Russian universities that had engaged Quacquarelli Symonds for consulting services between 2016 and 2021. The findings revealed a peculiar rise in ranking positions that could not be justified by any observable changes in the universities’ characteristics as reported in national statistics (the author called this phenomenon “self-bias”). It should be noted here that almost all Russian universities have lost dozens of positions in the most recent edition of the QS ranking. Does it mean that Russian universities performed well before? Does it mean that there has been a significant deterioration in their activities recently? We believe that university rankings cannot give a valid answer to these questions.

The significance of rankings is solely based on the perception of significance by the key stakeholders. For instance, despite the current developments, the vice-rector of RUDN University (participant of the 5top100 and Priority-2030 projects), Yulia Ebzeeva, still believes that global university rankings measure the international prestige of the country (Ebzeeva and Smirnova, 2022). Moreover, the authors argue that the rankings are “based on the objective assessment by experts from different countries of the significant achievements of universities.”

Quite recently, the Universities of the Netherlands issued a report on the (negative) impact of global university rankings on higher education institutions (“Ranking the university: on the effects of rankings on the academic community and how to overcome them”, 2023). The rankings are hard to reject because of their marketing function. The report proposed a strategy to facilitate the desired cultural change, which entails initiatives at three distinct levels. In the short term, universities should embark on individual initiatives. In the medium term, coordinated initiatives should be undertaken at the national level, such as collaborative efforts by all universities in the Netherlands. Lastly, in the long term, coordinated initiatives need to be established at the international level, for instance, at the European level. However, the change starts with each of us. Please be aware that every time we say or post on social media something like “My university has advanced N positions in the ranking X”, this adds to the validity of rankings in the eyes of potential customers. Thus, every member of the academic community can begin the move away from rankings with simple, small steps.