Research Quality Assessment in Italy and classification of the A-class scientific Journals. Potential distorting effects




This paper investigates the multiple repercussions of the results of the Research Quality Assessment (VQR), which may affect the freedom of scientific research activity, influencing its methods, object, degree of depth and dissemination channels. The paper distinguishes between effects that concern institutions and departments as a whole, and effects that directly concern the authors of the research products evaluated. The analysis then focuses in particular on the relationship between the results of the VQR and the classification of A-class scientific journals in Italy, highlighting the potential distorting effects both in terms of greater or lesser accessibility of A-class journals to contributions from authors who do not participate in the VQR, and in terms of research topics.

1. The Research Quality Assessment in Italy

In Italy, as also happens in other countries[1], the scientific research carried out by universities and research institutes that receive public funding is subject to a process of evaluation[2].

In fact, assessment procedures have been introduced within universities since the end of the twentieth century[3]. These assessments, starting in the late eighties[4], at first focused only on the managerial aspects of university activity. Only later, in the late nineties, were assessments focusing on the qualitative aspects of research activity introduced[5].

In 1998 the Committee for Research Evaluation (CIVR)[6] was established. It was in charge of carrying out the first two research quality assessments in Italy, and was subsequently replaced by ANVUR[7]. The latter, since its establishment, has dealt with – and is still involved in – the Evaluation of Research Quality (VQR)[8], introduced by the so-called “Gelmini Reform”[9].

Without going into the details of the assessment, given that it is not the subject of this paper, it is nevertheless worth outlining how it works in broad terms.

The latest completed evaluation, the VQR 2011-2014, asked each university or research institute to submit two research products for each member of its research staff (professors and researchers). The submitted products were assessed by sixteen Groups of Experts for the Evaluation (GEV)[10], divided by scientific area, through the peer review[11] methodology or through bibliometric analysis, depending on the area to which the evaluated paper belonged[12].

The evaluation identified originality[13], methodological rigor[14] and attested or potential impact[15] as indicators of the “quality” of scientific research, on the basis of which a qualitative level – excellent, high, fair, acceptable or limited[16] – was attributed to each research product evaluated[17].

The assessments obtained from each research product were then aggregated and revised, to calculate the overall quality profile of the institution, together with various indicators aimed at drawing up a proper “ranking” of the institutions involved, on the basis of which the reward share of the ordinary financing fund is distributed[18].

Today the fourth evaluation exercise[19], the VQR 2015-2019, is ongoing. It started in September 2020 and its results are expected in March 2022.

The research evaluation in Italy, as also emerges from a first analysis of the call that launched the VQR 2015-2019, is in constant evolution and still far from a definitive structure. In any case, the long-term goal of the VQR (and of any other research assessment process) is to raise – over time – the quality of the research produced, also in relation to the investment made by the government in publicly funded research.

However, it cannot be ignored that the effects of this assessment are not limited to the allocation of a substantial part of the ordinary research fund.

In fact, even in the short and medium term, the repercussions and concrete consequences (more or less direct) that the results of this assessment entail for the subjects involved (universities, departments, professors, researchers) are multiple and particularly relevant. These repercussions also strongly impact the freedom of scientific research activity, influencing its methods, objects, degree of depth and dissemination channels. It is these secondary effects, and in particular the relationship between the results of the VQR and the classification of A-class scientific journals in Italy, that we now move on to discuss.

2. VQR (economic) direct effects, which affect institutions and departments as a whole

It is well known that, in Italy, very concrete economic consequences derive from a university’s position within the Research Quality Assessment ranking.

As a matter of fact, a substantial part of the Ordinary Fund for university research is distributed on the basis of the results obtained in the VQR (Research Quality Assessment) evaluation[20].

In particular, the “reward quota” of the Fund, which covers 30% of the available resources, is divided among universities: at least three fifths on the basis of the results of the VQR and at least one fifth on the basis of the evaluation of recruitment carried out every five years by ANVUR[21]. The percentage attributed (also) on the basis of the results of the VQR rises to 80% of the reward quota for the “Scuole Superiori” with special regulations.

To better understand the impact of this assessment, it is sufficient to consider that the reward quota of the Ordinary Financing Fund (FFO) for the year 2019 amounts to €1,784,580,447, equal to approximately 26% of the total available resources. That part of the FFO is assigned for 60% on the basis of the results achieved in the Research Quality Assessment (VQR 2011-2014), for 20% on the basis of the assessment of recruitment policies for the three-year period 2016-2018 (using in particular the data relating to the VQR 2011-2014), and for the remaining 20% on the basis of the result indicators referred to in the ministerial decree containing the general guidelines for the three-year period 2019-2021[22].
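As a rough illustration, the split just described can be computed directly. The following is the author’s sketch using only the figures quoted in the text; the variable names and the implied-total calculation are illustrative assumptions, not an official ministry computation.

```python
# Illustrative sketch of the 2019 FFO reward-quota split described in the text.
# Figures are those quoted above; names and the implied total are assumptions.

reward_quota = 1_784_580_447  # EUR, FFO reward quota for 2019

shares = {
    "VQR 2011-2014 results": 0.60,
    "Recruitment policies 2016-2018 (VQR 2011-2014 data)": 0.20,
    "Result indicators, guidelines 2019-2021": 0.20,
}

for criterion, share in shares.items():
    print(f"{criterion}: EUR {reward_quota * share:,.0f}")

# The text states the reward quota is ~26% of total FFO resources,
# implying a total of roughly EUR 6.9 billion:
implied_total_ffo = reward_quota / 0.26
print(f"Implied total FFO: EUR {implied_total_ffo:,.0f}")
```

The point of the sketch is simply that three fifths of a sum of this magnitude – over a billion euros – follows directly from the VQR results.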

It should be emphasized that, although the evaluation is nominally performed solely on “research quality”, observing the calculation basis provided for by the Ministerial Decree makes clear that measured[23] “quality” affects the evaluation only to a limited extent[24].

Furthermore, it is also on the basis of the results of the VQR that the “departments of excellence” are identified, to which additional funds are distributed.

In fact, with the 2016 financial law, the MIUR asked ANVUR to define, on the basis of the results of the latest research quality assessment (VQR), a specific “standardized indicator of departmental performance” (ISPD), which takes into account the position of each Department in the national distribution of VQR results within its scientific-disciplinary sectors, and to attribute the relative ISPD to each Department[25]. This indicator is thus strictly connected to the VQR evaluation of each department.

Only the State Universities whose Departments were classified in the first 350 positions of the aforementioned ranking were able to apply for funding. The applications submitted were assessed by a special commission that selected the 180 departments awarded the 2018-2020 funding.

These are particularly large sums: the Fund for the financing of university departments of excellence consists of €271,000,000 for each of the five years of funding, for a total of €1,355,000,000. Such funds can concretely make a difference to the research carried out by individual institutions.

However, although the economic effects described above are the most direct and impactful, the VQR evaluation entails further consequences, perhaps more indirect but, in my opinion, equally interesting and worthy of further study.

3. VQR indirect effects that affect researchers and their research activity

As mentioned, the evaluation of university research does not produce effects only in terms of the allocation of available resources: it is now clear that the results of the Research Quality Assessment can also affect – and in particular – the freedom of scientific research activity[26], influencing its methods[27], objects, degree of depth and dissemination channels.

These consequences affect each researcher and the entire research community, as well as – albeit more indirectly – society as a whole[28].

It should be borne in mind that if, on the one hand, the Research Quality Assessment in Italy is functional to a weighted and efficient allocation of resources, on the other hand, it is also intended to pursue a further aim, which is anything but secondary: pushing the public research system toward continuous qualitative improvement.

However, we must also be aware that the use of centralized metrics, such as those used in the VQR, can give rise to dynamics that risk producing, paradoxically, the opposite of the expected result[29].

It is also of particular interest to stress that the VQR, although aimed at evaluating institutions and departments rather than individual researchers, is nevertheless carried out precisely through the evaluation of the research outputs produced by the latter[30].

But even if, technically, the evaluation does not concern the quality of an individual researcher’s work, the researcher can suffer some of its effects as well. Hence, it is appropriate to distinguish between the effects that concern institutions and departments as a whole, and those that concern the authors of the research products evaluated.

For example, it should be underlined that some universities have prevented researchers who refused to submit their research products to the VQR[31] from participating in certain competitive calls for the allocation of research funds[32]. That decision was brought before the Italian administrative judge[33], who held that such a clause was not legitimate. This case, albeit pathological, is an example of the repercussions of the VQR directly affecting researchers.

The introduction of the VQR, together with the introduction of the ASN (Abilitazione Scientifica Nazionale – National Scientific Qualification) required to access the roles of associate and full professor, has exerted pressure on the scientific work of researchers, owing to an increasingly competitive environment for personal careers, and has also affected the choice of dissemination channels[34]. The various assessments that affect the university world – and to which those undertaking an academic career must pay attention – are often intertwined, with effects that are not always immediately perceptible but are nonetheless relevant.

The fact that only products published through certain channels (e.g., scientific journals) are evaluated is sufficient to ensure that those channels are preferred over others which, paradoxically, might be the most effective tools for disseminating scientific knowledge, but which are not used precisely because they are not recognized by the ANVUR evaluation criteria.

In addition, the incentives currently in force, which push toward publishing research products in A-class journals or in journals with the highest possible impact factor[35], have direct repercussions on the research topics investigated in universities. After all, it is very likely that scientific journals promoting the most innovative – or at least the newest – scientific issues will not have a high impact factor from the start.

The paradoxical outcome is that research on new issues is discouraged, as it would not find a sufficiently prestigious placement, and that the researchers who pursue it are penalized in their careers compared to colleagues dedicated to more popular themes, which enjoy sufficiently prestigious editorial placements. This also creates the phenomenon of “trending topics”, that is, of particularly fashionable research topics that continue to be treated even when they are completely intellectually saturated.

Referring to the last Research Quality Assessment, distorting effects were also detected by the Head of the Department for Higher Education and Research of the Ministry of Education, University and Research[36]. The latter, in a communication sent to the Rectors of Italian universities at the beginning of 2019, noted that “in bibliometric areas, the current evaluation system has sometimes led to publications that are useful only for the purpose of the results of the algorithms or for overcoming pre-established thresholds, as if all the rest of the activity were useless, reducing the possibility of development of small areas and interdisciplinary ones; in non-bibliometric areas, it forced researchers to publish in journals identified by ANVUR itself as ‘A-class Journals’ (for the purposes of the VQR), requiring, among other things, publication only in those of one’s own area, thus denying the much desired multi- and interdisciplinarity, when it would have been enough to indicate some parameters to identify journals of a scientific level.”

But there is more.

4. Class A Scientific Journals and the VQR evaluation

Among the effects that concern researchers, of particular interest is the relationship between the classification of Class A scientific journals and the VQR evaluation obtained by the articles published therein, together with the potential distorting effects both in terms of greater or lesser accessibility of A-class journals to contributions from authors who do not participate in the VQR, and in terms of research topics.

As a matter of fact, according to Ministerial Decree 120/2016 and to the regulation issued by ANVUR for the classification of scientific journals in non-bibliometric areas, there is a strong connection between the VQR evaluation obtained by the articles published in a journal and that journal’s possibility of being included – or remaining – in the category of A-class journals.

The aforementioned Ministerial Decree states that: «For the purpose of classifying the journals in Class A, within those that adopt peer review, ANVUR verifies, with respect to the characteristics of the competition sector, the satisfaction of at least one of the following criteria: a) quality of scientific products achieved in the VQR by the contributions published in the journal; b) significant impact of scientific production, where appropriate»[37].

And the aforementioned Regulation states that «the requirement set out in point 5, letter a) of Annex D of the Ministerial Decree of 7 June 2016, no. 120 is considered satisfied if the articles submitted to the last VQR have obtained a number and a share of excellent and high evaluations above the average ones of the Class A journals of the Area or Sectors of reference for which they have been subjected to product evaluation»[38].

The same Regulation states also that «journals whose articles, subjected to evaluation in the last VQR, have obtained a quota of excellent or high evaluations at least equal to the average of the Class A journals of the Area or Sectors of reference, are not subjected to the five-year control for maintaining the classification of reference»[39].
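The decision rules quoted above can be sketched as a simple check. This is a minimal illustration under the author’s assumptions (hypothetical function names and data; ANVUR’s actual computation of the area averages is more involved): criterion (a) requires a share of “excellent”/“high” VQR evaluations above the Class A average of the reference Area, while exemption from the five-year control requires a share at least equal to that average.

```python
# Author's illustrative sketch of the quoted rules, not ANVUR's algorithm.
# Criterion (a): share of excellent/high evaluations ABOVE the area average.
# Five-year-control exemption: share AT LEAST EQUAL to the area average.

def share_excellent_high(evaluations):
    """Share of a journal's submitted articles rated excellent or high."""
    rated_top = sum(1 for e in evaluations if e in ("excellent", "high"))
    return rated_top / len(evaluations) if evaluations else 0.0

def satisfies_criterion_a(journal_evals, class_a_area_average):
    # Regulation: "above the average ... of the Class A journals of the Area"
    return share_excellent_high(journal_evals) > class_a_area_average

def exempt_from_five_year_control(journal_evals, class_a_area_average):
    # Regulation: "at least equal to the average"
    return share_excellent_high(journal_evals) >= class_a_area_average

# Hypothetical example: 10 submitted articles, 6 rated excellent/high,
# with an assumed area average of 0.5
evals = ["excellent"] * 3 + ["high"] * 3 + ["fair"] * 4
print(satisfies_criterion_a(evals, 0.5))          # True
print(exempt_from_five_year_control(evals, 0.5))  # True
```

The sketch makes the incentive visible: every additional “excellent” or “high” evaluation of an article pushes the hosting journal’s share upward, directly affecting its Class A status.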

It is quite clear that the criteria identified as necessary to classify a scientific journal as Class A carry with them the risk of a distorting effect that is anything but secondary: they encourage journals to publish contributions by tenured staff – who can submit their work to the VQR assessment – rather than contributions by non-tenured staff, who are excluded from that assessment.

Or, again, the particular correlation between the VQR and the classification of scientific journals could – in a pathological scenario – also influence anonymous reviewers, who may not be indifferent to evaluating certain contributions more or less positively in light of the journals hosting them.

But there is more, considering that publication in Class A journals is far from irrelevant in terms of career advancement. In the procedures for the national scientific qualification for the functions of university professor[40] (ASN), it contributes to the determination of the sectoral medians relating to one of the three indicators of scientific productivity, namely the one relating, in non-bibliometric sectors, to the number of articles published in Class A journals[41]. It also serves to qualify the professors who may sit on the national commissions that grant the aforementioned scientific habilitation (ASN)[42]. It is likewise functional to the qualification of professors for inclusion in the PhD boards of Italian universities, which must be composed of professors who have published a minimum number of papers in Class A journals[43].

Hence, it becomes increasingly evident how profound an impact the VQR evaluation has, albeit indirectly, through the qualification of Class A scientific journals.

5. Final remarks

Even if assessing the quality of scientific research is certainly very complex – but still essential[44], considering that it is required by specific regulatory obligations – an evaluation exercise could potentially be a precious organizational tool for verifying the return on the investments (ROI) made and for planning future investments on the basis of data. This assumes that investment in research can be evaluated, even only hypothetically, as an investment of a purely economic nature, and not – as one might not unreasonably object – as a cultural, social, even ethical investment and, moreover, as the fulfillment of a constitutional duty[45], even before a moral one.

It should be noted, however, that the objectives the VQR assessment should achieve – first of all, pushing the public research system toward continuous qualitative improvement – still seem far off.

It must be taken into account, however, that the VQR is a tool in the making, which also evolves in response to the solicitations coming from the academic world – solicitations which, perhaps, deserve to be gathered ever more structurally, without affecting the impartiality of the evaluation. Hence it seems essential to rethink the assessment, taking into account the numerous distorting effects that derive from it, along with the other assessments with which it is intertwined.

For example, also considering what has been highlighted regarding Class A journals, it would be worthwhile to further investigate the interdependence and the weight attributed to the VQR in the different contexts in which it is relevant.

As a matter of fact, while the VQR assessment carries great weight in the definition of Class A journals, the same cannot be said of the competitive procedures for the assignment of researcher or professor positions, or of the ASN (national scientific qualification). In the latter cases, the case law[46] seems settled in affirming the irrelevance of the VQR assessment, even though the very research products that were assessed must be taken into account in the competitive evaluation or in the ASN evaluation. The reason given – as remarked by the Italian administrative judge – is that the VQR assessment is specifically aimed at evaluating the institution and not the researcher. This motivation, in the author’s opinion, is not entirely convincing: it could equally be objected that the “quality” of scientific journals should then not be judged on the basis of the VQR either, thus revealing the contradictions into which the university evaluation system sometimes falls.[47]

  1. E.g. in the UK first with RAE (Research Assessment Exercise) then with REF (Research Excellence Framework), in Australia with ERA (Excellence in Research for Australia), in Portugal with FCT (Evaluation of the Portuguese Foundation for Science and Technology), in France with Hcéres (Higher Council of Evaluation of Higher Education and Research).
  2. The optimistic reflection, from a sociological point of view, on the reasons that led to the need to evaluate and measure the activity of the public administration, carried out by Palumbo, is interesting: «At least in Italy, the prevailing reason for the success of the evaluation does not seem attributable to this factor [the economic crisis and the need to better allocate the few resources]. To argue that “the fewer resources there are, the more you evaluate” appears simplistic and also empirically groundless. I rather like to think that waste and inefficiencies are less tolerable today not so much because fewer resources are available to waste, but because, on the one hand, citizens are more aware of the fact that the resources “come from their pockets” and therefore are more sensitive to their use. And, on the other hand, because even the operators of the public administration often manifest a sincere interest in optimizing the use of the resources they govern and in enhancing the commitment they dedicate to their activities», see M. Palumbo, Il processo di valutazione. Decidere, programmare, valutare, Milano, 2002.
  3. On this topic, see C. Franchini, Brevi considerazioni sul processo di valutazione nelle università, in G. Colombini (a cura di), Finanziamento, competizione ed accountability nel governo dell’università. Criticità del sistema e incertezze per il futuro, vol. III, Napoli, 2014.
  4. The first reference to the concept of evaluation in relation to university research is found in the law establishing the Ministry of Universities and Scientific and Technological Research, l. n. 168 of 9 May 1989, which introduced the requirement for each university to provide «forms of internal control on the efficiency of the university’s overall management results» (see art. 7, c. 8, l. 168/1989).

    Subsequently, law 24 December 1993, n. 537 introduced the Internal Evaluation Units for universities and the Observatory for the evaluation of the university system.

  5. For a more thorough analysis of the historical evolution of the University Assessments, G. Rebora, Venti anni dopo. Il percorso della valutazione dell’università in Italia e alcune proposte per il futuro, in Liuc Papers n. 257, Serie Economia aziendale, n. 38, 2012.
  6. The Committee for Research Evaluation (CIVR), established by Legislative Decree no. 204 of 1998, dealt with the first research evaluation exercises in Italy: the Triennial Research Evaluation, relating to the period 2001-2003, and the five-year Research Evaluation, relating to the period 2004-2008; the latter, however, was not carried out, pending the establishment of ANVUR.

    Instead, the introduction of an internal university evaluation system and the institution of the National Committee for the Evaluation of the University System (CNVSU), in order to externally evaluate the internal evaluation of each university, together with the allocation of part of the ordinary funding fund in relation to the results of the evaluation activity is to be attributed to Law 370 of 19 October 1999.

  7. ANVUR stands for “National Agency for the Evaluation of Universities and Research Institutes”.
  8. ANVUR has completed two research evaluation exercises in Italy: the VQR 2004-2010 and the VQR 2011-2014.
  9. Law 30 December 2010, n. 240: “Regulations on the organization of universities, academic staff and recruitment, as well as delegation to the Government to encourage the quality and efficiency of the university system.”
  10. The VQR 2011-2014 was conducted with 16 GEVs, or Panels (436 members in total). The 16 panels of experts in the disciplines of the scientific areas, appointed by ANVUR, handled the evaluation of the research outputs submitted by the institutions.
  11. In the VQR 2011-2014, 12,731 peer reviewers took part in 90,700 evaluations out of a total of 118,036 research outputs identified. The peer-review process required two members of the GEV panel to each select one reviewer responsible for the evaluation. Each peer reviewer provided an annotation for each of the three criteria and proposed a class of merit. The class of merit was then confirmed (or rejected or modified) by the two GEV members who had nominated the two peer reviewers. When these two evaluations led to a consensus (no disparity between merit classes), they were endorsed and validated by the GEV’s sub-panel before being approved by the GEV’s panel.
  12. The 2011-2014 VQR evaluation process included a bibliometric evaluation for eleven of the sixteen panels and a peer evaluation for the other five panels. For the VQR2011-2014, each of the 436 members of the GEV’s panels had to validate on average 270 evaluations (118,036 submitted research output for 436 members). The 291 members affiliated to non-bibliometric GEV Panels were asked to validate on average 295 assessments each (85,978/291).
  13. By originality we mean «the level at which the product introduces a new way of thinking in relation to the scientific object of research», point 2.6.1. of VQR 2011-2014 and art. 7, c. 8, lett. a), VQR 2015-2019 call.
  14. By methodological rigor we mean «the level at which the product clearly presents the objectives of the research and the state of the art in the literature, adopts a methodology appropriate to the object of the research and demonstrates that the objectives have been achieved», point 2.6.1. of VQR 2011-2014 and art. 7, c. 8, lett. b), VQR 2015-2019 call.
  15. By attested or potential impact in the international scientific community of reference, we mean «the level at which the product has exercised, or is likely to exercise in the future, a theoretical and / or applicative influence on this community, also based on the ability to comply with international research quality standards» point 2.6.1. of VQR 2011-2014 call. While with reference to the VQR 2015-2019, this requirement was defined as «the level at which the product exerts, or is likely to exert, an influence on the international scientific community or, for disciplines where it is appropriate, on the national one» Art. 7, c. 8, lett. c), VQR 2015-2019 call.
  16. The score assigned to each product was revealed only to its authors.
  17. As was pointed out by a group of international experts who were asked for an opinion on the evaluation that has just ended, «A very special feature of VQR is that several key aspects of the assessment methods (definition of panels, weights attached to merit classes), the definition of indicators, and their role in funding formulas are defined by laws and decrees, leaving little room for technical discretion» (report available on the ANVUR website).
  18. For a general introduction on the evolution of public research funding and on the consequent need to evaluate results see A. Banfi, Impatto nocivo. La valutazione quantitativa della ricerca e i possibili rimedi, in Rivista trimestrale di Diritto Pubblico, fasc. 2, 2014, 361.
  19. The fourth evaluation exercise started with decree no. 9 of 25 September 2020 by ANVUR, which shows that this evaluation is nothing more than an evolution of the evaluation just completed.
  20. The d.l. 10 November 2008, n. 180 introduced the reward quota in the research ordinary fund, in order to promote the qualitative increase of the activities of state universities and in order to improve the efficiency and effectiveness in the use of resources. With regards to the methods of attribution the Law Decree 21 June 2013, n. 69 provided that the fund attributed according to reward criteria was at least equal to 16% of the ordinary fund with an increase of 2% for each subsequent year, until reaching the 30% share of the fund destined to finance the research.
  21. Ex art. 13, paragraph 1, letter b) of the law of 30 December 2010, n. 240.
  22. Ex art. 3 of the D.M. n.738, 8 August 2019.
  23. As was highlighted by a group of experts who were asked to comment on the effectiveness of the VQR 2011-2014, «The three criteria used for the peer-review process applied within VQR2011-2014 probably do not fully reflect the qualities required for a publication of excellence. The third criterion (‘attested or potential impact’) asks whether the research has exerted, or is likely to exert in the future, a theoretical and/or applied influence.

    Such a criterion brings together two rather different criteria that are difficult to evaluate objectively. The adjective ‘attested’ seems to refer to quantitative or bibliometric values, whereas the ‘potential’ character of an impact seems difficult to evaluate, and introduces a degree of subjectivity that can hinder the understanding and acceptance by the scientific community of the evaluation process. In short, we see a degree of contradiction of terms herein».

  24. Annex 1 of Ministerial Decree 738 of 8 August 2019, containing the criteria and indicators for the distribution of the FFO reward quota, states that 60% of it is distributed on the basis of the results of the VQR 2011-2014 in relation to the final university indicator (IRFS), according to the following formula: IRFS = 85% × IRAS1 × Ka + 7.5% × IRAS3 + 7.5% × IRAS4, where IRAS1 = qualitative-quantitative indicator of the expected research products of the university; IRAS3 = indicator related to national and international competitive funding for research; IRAS4 = number of doctoral students, students enrolled in specialization schools in the medical and health area, research fellows, and post-doc fellows.
  25. The ISPD is obtained from a direct comparison only between departments with the same disciplinary composition, and this comparison is made in terms of the degree of success in the last VQR.
  26. On the subject of university autonomy and scientific freedom, see C. Pinelli, L’autonomia universitaria, relazione alla “Giornata di Studio autonomia universitaria e rappresentanza delle comunità accademiche, dei saperi e delle discipline”, Roma, 19 settembre 2011; A. Orsi Battaglini, Libertà scientifica, libertà accademica e valori costituzionali, in Nuove dimensioni dei diritti di libertà. Scritti in onore di Paolo Barile, Padova, Cedam, 1990; S. Fois, Intervento, in Associazione italiana dei costituzionalisti, L’autonomia universitaria. Bologna, 25-26 novembre 1988, Padova, Cedam, 1990, 74 ss.; E. Spagna Musso, Lo Stato di cultura nella Costituzione italiana (1961), in Scritti di diritto costituzionale, I, Milano, Giuffré, 2008, 463 ss.; L. Chieffi, Ricerca scientifica e tutela della persona. Bioetica e garanzie costituzionali, ESI, Napoli, 1993; G. Endrici, Poteri pubblici e ricerca scientifica. L’azione di governo, il Mulino, Bologna, 1991, 186; F. Merloni, Ricerca scientifica (organizzazione), in Enc.dir., XL, Giuffré, Milano, 1989, 399; F. Merloni, Libertà della scienza e della ricerca, in Diritto Pubblico, n. 3, 2016.
  27. See A. Banfi, Impatto nocivo. La valutazione quantitativa della ricerca e i possibili rimedi, in Rivista Trimestrale di Diritto Pubblico, n. 2, 2014, in which the Author traces the opportunistic behaviors that can occur in research activity, such as plagiarism, self-plagiarism and the phenomenon of so-called salami slicing.
  28. In the context of a global society in which competition is based on the knowledge economy, the university plays a decisive role in the construction of value and the economic and social revitalization of the area in which it operates.
  29. See M.R. Donaldson, S. J. Cooke, Scientific publications: Moving beyond quality and quantity toward influence, in BioScience, 2013; D. Sarewitz, The pressure to publish pushes down quality, in Nature, 533(7602), 2016.

    Regarding the relationship between the quantity and quality of research products, the two seem prima facie to be inversely proportional. Several recent studies suggest that a very high output of scientific contributions often brings with it a physiological lowering of quality. See, ex multis, Z. Rubin, On Measuring Productivity by the Length of One’s Vita, in Personality and Social Psychology Bulletin, 4(2), 1978; J. Fischer, E.G. Ritchie, J. Hanspach, Academia’s obsession with quantity, in Trends in Ecology & Evolution, 27(9), 2012; P. Halme, A. Komonen, O. Huitu, Solutions to replace quantity with quality in science, in Trends in Ecology & Evolution, 27(11), 2012.

  30. The subject of discussion was whether the University could submit ex officio the products of the researchers and teaching staff on its rolls, in the event that they did not want to submit them spontaneously. Some universities, when this happened, submitted the research products of their staff themselves. This uncertainty seems to have been overcome, considering that the new VQR 2015-2019 clearly provides that «The Institution confers the products with reference to each Department or similar structure, taking into account the choices proposed by each of them. The Department or similar structure selects the products, also taking into account what is proposed by the researchers» (art. 6, c. 1, VQR 2015-2019 call).
  31. Between the end of 2015 and the beginning of 2016, in protest against the salary freeze affecting professors and researchers, several researchers refused to submit their research products to the VQR 2011-2014.

    This shows that, since institutions and researchers maintain the database and select which publications are submitted for evaluation, their cooperation is essential to the evaluation procedure itself.

  32. The University of Sassari had published a public call which expressly provided that «the submission of projects by the proposers […] is subordinated to the participation in the exercise of the 2011-2014 VQR» under penalty of exclusion.
  33. See E. Furiosi, La valutazione della qualità della ricerca (vqr) al vaglio del giudice amministrativo, in Foro Amministrativo, anno V, fasc. VI, 2018, commenting on the judgment of the Sardinian Regional Administrative Court, section I, 10 January 2018, n. 10.
  34. See J.J. Heckman, S. Moktan, Publishing and Promotion in Economics: the Tyranny of the Top Five, Working Paper No. 82, Institute for New Economic Thinking, 2018.
  35. For a specific analysis of the possible distorting effects of evaluation through bibliometric indexes, among the copious literature, we note in particular: Research Evaluation Metrics, Paris, United Nations Educational, Scientific and Cultural Organization (UNESCO), 2015; T. Braun, W. Glanzel, A. Schubert, Scientometric Indicators. A Comparative Evaluation of Publishing Performance and Citation Impact, World Scientific Publishing Co. Pte. Ltd., 1985; P.P. Seglen, Why the impact factor of journals should not be used for evaluating research, in Br. Med. J., 1997; E. Garfield, Citation Analysis as a tool in journal evaluation, in Science, 1972; J. King, A review of bibliometric and other science measures: a concise review, in J. Inf. Sci., 14, 1988.
  36. Prof. Giuseppe Valditara.
  37. Annex D) to the Ministerial Decree of 7 June 2016, n. 120.
  38. Ex art. 13, c. 3, of the Regulation for the classification of journals in non-bibliometric areas, approved by ANVUR with the Resolution of the Board of Directors n. 42 of 20/02/2019, which contains the criteria for the classification of journals for the purposes of National Scientific Qualification (ASN).
  39. Ex art. 5 of the Regulation for the classification of journals in non-bibliometric areas, approved by ANVUR with the Resolution of the Board of Directors n. 42 of 20/02/2019.
  40. Ex art. 16 L. n. 240/2010 and ex Ministerial Decree n. 120 of 2016.
  41. The other two indicators concern, respectively, books and monographs, and the total number of articles and contributions to volumes.
  42. Ex D.M. n. 120/2016, Annex E, point 2, lett. b).
  43. T.A.R. Lazio Roma Sez. III, Sent., (ud. 29-01-2020) 15-04-2020, n. 3939.
  44. Regarding the centrality and indispensability of a research evaluation, see F. Merloni, Autonomia, responsabilità, valutazione nella disciplina delle università e degli enti di ricerca non strumentale, in Dir. pubbl., 2004, 609 where it is stated that «There is no teaching and non-instrumental scientific research without autonomy, there is no autonomy without responsibility, there is no responsibility without an evaluation of the results achieved».

    In support of the need for an evaluation of university research, as opposed to the co-option method, see also A. Sandulli, Spunti di riflessione sulla valutazione della ricerca universitaria, in G. Della Cananea, C. Franchini (eds.), Concorrenza e merito nelle università. Problemi, prospettive e proposte, Torino, 2009, in which it is stated that «the introduction of a system of evaluation of scientific research, carried out by peer reviewers but passing through a preventive and shared definition of qualitative and quantitative evaluation parameters, seems necessary and no longer deferrable». This idea is also supported in D. Gottardi, Il processo di valutazione come obbligo e come opportunità, in Lavoro nelle Pubbliche Amministrazioni, fasc. 3-4, 2016, 353.

  45. From a constitutional point of view on freedom of scientific research, see: U. Pototschnig, Insegnamento (libertà di), in Enc. dir., vol. XXI, Milano, 1971; J.A. Robertson, The scientist’s right to research: a constitutional analysis, in Cal. L. Rev., 1977-1978; A. Santosuosso, V. Sellaroli, E. Fabio, What constitutional protection for freedom of scientific research?, in J. Med. Ethics, Jun 2007; G. De Cesare, L’organizzazione della ricerca scientifica: aspetti problematici e organizzativi, in Riv. it. sc. giur., 1969; S. Labriola, Libertà di scienza e promozione della ricerca, Padova, Cedam, 1979; L. Chieffi, Ricerca scientifica e tutela della persona, Napoli, 1993; A. Romano, Una strategia democratica per l’autonomia della ricerca scientifica, in Jus, 1997, fasc. 3, pp. 457-464.
  46. It was recently held, with reference to a competition for a tenured position, that the VQR criteria are «intended for the evaluation of institutions. The underlying logic is completely different and cannot be exported to other proceedings, in particular it is not referable to individual papers examined for competition purposes» (T.A.R. Lombardia Milano Sez. III, Sent., 10-02-2020, n. 278). This principle was also reaffirmed with reference to the national scientific qualification: «It cannot be accepted … the applicant’s assumption on the positive judgment that he should have obtained from a previous evaluation of a research product in the VQR, as this procedure takes place in a completely different context and follows completely independent rules with respect to those that regulate the national scientific qualification sector» (T.A.R. Lazio Roma Sez. III, Sent., 27-06-2019, n. 8391).
  47. See A. Sandulli, La classificazione delle riviste scientifiche e la revisione tra dispari, in Giornale Dir. Amm., 2017, 4, 436, where the Author, in addition to the possible distorting effect just mentioned, highlights the further problem of judging the quality of scientific journals on the basis of peer reviews that are formulated to measure the quality of structures, thus also underlining the non-overlapping aspects of the two evaluations.

Emanuela Furiosi

Lawyer, PhD in Administrative Law at the University of Milan. Co-founder of bookabook.it