Science Policy Evaluation

The ability to identify and support high-quality science is undermined by the lack of a solid empirical infrastructure on which to base investments. Science policy debates generally lack evidence-based analysis of the likely merits of different investments, because there is no mechanism by which to evaluate investment outcomes. This weakness in policymaking has received greater attention in Europe over recent years.

Research Framework Programmes

Policy Context

The monitoring of the implementation of EU Research Framework Programmes (FPs) is an essential component of the overall evaluation and monitoring system. It supports the management of the programmes, provides transparency on programme activities and contributes towards the information base used for major evaluations of the FPs.

From 1995 to 2006 the system was based on two evaluation exercises: an annual monitoring of FP implementation and, before each new FP proposal, a five-year assessment of the implementation and achievements of research carried out over the preceding programme.

With the completion of FP6 in 2006, the Seventh Framework Programme (FP7) was enacted to cover the 2007-2013 period. At the start of FP7 there were important changes to the exercises and to the overall system for FP-level evaluation and monitoring.

The key features of the FP7 monitoring system (in addition to a 2010 interim evaluation and an ex-post evaluation of FP6, published in 2009) include:

• Annual monitoring carried out internally by the Commission services, the primary aim of which is to provide a reliable source of systematically collected information to support FP management;
• Extensive use of quantitative indicators wherever possible, including the creation of "outcome" indicators (a toy illustration follows this list);
• Indicators that provide a synthetic and comprehensive picture of FP activities and of sensitive or politically important issues;
• Monitoring activities based, wherever possible, on existing information collected systematically by the Commission, supplemented by a survey of stakeholder opinion;
• Comprehensive coverage of both Framework Programmes (EC and Euratom), except for the direct research actions carried out by the Joint Research Centre (JRC).
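
To make the notion of an "outcome" indicator concrete, the minimal Python sketch below aggregates hypothetical project records into a simple output-per-funding figure. All field names and numbers are invented for illustration; actual FP monitoring draws on the Commission's internal project data, not this toy structure.

```python
# Toy illustration of an "outcome" indicator: peer-reviewed publications
# per million euro of funding, aggregated from hypothetical project records.
# All field names and figures are invented for exposition.

projects = [
    {"id": "P1", "funding_eur": 2_500_000, "publications": 14},
    {"id": "P2", "funding_eur": 1_200_000, "publications": 5},
    {"id": "P3", "funding_eur": 4_800_000, "publications": 31},
]

total_funding = sum(p["funding_eur"] for p in projects)
total_pubs = sum(p["publications"] for p in projects)

# Outcome indicator: publications per million euro invested.
pubs_per_meur = total_pubs / (total_funding / 1_000_000)
print(f"Publications per EUR million: {pubs_per_meur:.2f}")
```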

FP7 Evaluations

In November 2010, the Belgian EU Council Presidency presented the mid-term review of FP7, carried out by a group of independent experts, which emphasised the need for FP reviews to produce evidence for subsequent programmes. More recently, EU finance ministers, keen to bring public spending under control and plug fiscal deficits, gave political agreement to the inclusion of the 3% R&D target in the Europe 2020 strategy on the condition that "outcome-orientated" measures for R&D and innovation were established.

Focusing on the issue of “quantitative indicators” of innovation, the European Court of Auditors' 2008 Special Report concerning the evaluation of all of the EU RTD framework programmes identified problems with the attribution of results, with measurement, with the aggregation of results and, finally, with inadequate progress indicators. The 8 June 2011 interim evaluation report on FP7 also identified the need to decrease the financial error rate by simplifying and optimising RTD management and evidence-based policymaking.

A New Monitoring System

Following this Special Report, the Commission agreed on an updated EU research framework and management of EU RTD activities. The new system is simplified and based on a single set of indicators of progress in EU research and innovation activities (such as the share of fast-growing innovative firms), as well as a clearer presentation of evaluation criteria following the principles outlined in each specific programme.
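
By way of illustration, the minimal Python sketch below computes the share of fast-growing innovative firms from invented firm-level records. The 10% employment-growth threshold, the innovation flag and all figures are assumptions for exposition, not the Commission's official definition.

```python
# Toy sketch: share of fast-growing innovative firms in an economy.
# The growth threshold and all records are illustrative assumptions.

firms = [
    {"name": "A", "employment_growth": 0.15, "is_innovative": True},
    {"name": "B", "employment_growth": 0.02, "is_innovative": True},
    {"name": "C", "employment_growth": 0.22, "is_innovative": False},
    {"name": "D", "employment_growth": 0.12, "is_innovative": True},
]

GROWTH_THRESHOLD = 0.10  # assumed cut-off for "fast-growing"

fast_innovative = [
    f for f in firms
    if f["employment_growth"] >= GROWTH_THRESHOLD and f["is_innovative"]
]
share = len(fast_innovative) / len(firms)
print(f"Share of fast-growing innovative firms: {share:.0%}")
```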

The monitoring and evaluation system of Horizon 2020 will integrate some elements from FP7 to enhance its relevance and impact, with a strong focus on throughput, output, results and impacts.

The 2011 Commission Staff Working Paper on the Impact Assessment accompanying the Horizon 2020 framework identifies four key principles of the new monitoring system:

The new system will be strategic with regard to preparation efforts before the implementation of Horizon 2020. A comprehensive evaluation and monitoring strategy will be agreed upon by all actors involved in Horizon 2020 and its specific programmes at the launch of the framework. "Horizon 2020 strategy lines", or timetables for specific evaluation work, will also be clarified early in the framework's implementation.

The new system will be comprehensive in its extensive and differentiated levels of evaluation. This principle will require annual monitoring by the Commission, with the assistance of independent experts, of all components of the implementation and management of Horizon 2020.

The new system will be coherent in its use of data. Specifically, available data will be used to calculate a series of common indicators to better implement and evaluate the programmes and activities under Horizon 2020.

The new system will be heavily evidence-based. At the centre of the Horizon 2020 evaluation and monitoring approach is a set of powerful data gathering and processing capacities with a focus on outputs, appropriate data archives and expert advice.

With these four principles at the centre of the system, the Horizon 2020 framework will develop its methods of evidence collection, organisation and dissemination. Specific measures include more automated data collection mechanisms, a strong emphasis on the assessment of the outputs and impacts of activities, an appropriate data archive, access to external expert advice, dedicated policy research activity and increased cooperation with Member States and Associated States.

The Evaluation System

A General Evaluation Infrastructure

Research Framework Programme (FP) evaluation and monitoring has been progressively developed since its introduction in the 1980s. It follows the principles laid down in the European Commission's Financial Regulation, which leads to a decentralised system of evaluation and monitoring in which each Commission DG and service involved in FP management is also responsible for its own evaluation and monitoring work. For the FP as a whole, an evaluation and monitoring procedure is implemented and managed by a unit in DG RTD.

The European RTD Evaluation Network supports RTD evaluation training, information exchange and the dissemination of best practice among evaluation experts in Member States and Associated States. The Interservices RTD network, which is internal to the European Commission, also serves to promote dialogue between national actors and all those involved with FP evaluation.

Evaluation under Horizon 2020

As of 2014, a new system of evaluation will be employed under Horizon 2020 based on a “comprehensive, harmonious” strategy.

As articulated in the Proposal for a Regulation of the European Parliament and of the Council establishing Horizon 2020, the Commission recognises sound financial and effective performance management as essential to the optimal procurement and dissemination of research activities and projects. The performance indicators, baselines and appropriate coordination mechanisms articulated in Annex 1 of the Proposal will be put in place for the implementation and monitoring of Horizon 2020 (Regulation; 35).

These indicators will inform multiple steps of the Horizon 2020 evaluation process:

  • European Technology Platforms, Joint Programming Initiatives and European Innovation Partnerships will inform the Commission’s annual monitoring of Horizon 2020. Specific programmes will extend to the activities of the European Institute of Innovation and Technology (EIT) and will include cross-cutting topics such as sustainable development and climate change;
  • No later than the end of 2017, the Commission will conduct an interim evaluation with the assistance of independent experts. The EIT, set up by Regulation (EC) No 294/2008, will monitor the Horizon 2020 goal of “integrating the knowledge triangle” of research, innovation and higher education;
  • The relevant EIT performance indicators are specified in Part IV of Annex I of the Regulation and will determine whether the EIT receives second-round funds;
  • No later than 2023, the Commission will carry out the ex-post evaluation of Horizon 2020 and its specific programmes. The ex-post evaluation will analyse in depth the rationale, implementation and impact of the programme's activities.

Member States

Member States will be expected to provide data and information directly to the Commission to permit the monitoring and evaluation of the measures concerned under Horizon 2020.

European Commission

The Commission will focus on strengthening simplification efforts and on steering, monitoring and evaluating programme implementation under Horizon 2020. This focus will be enhanced through collaboration between different Directorates-General in the implementation of programmes, policies and initiatives.

Management of Evaluation Conclusions

Horizon 2020's horizontal programme committee configuration for the monitoring and evaluation of the Union's cooperation with key partner countries and regions (e.g. through multi-annual roadmaps for programmes) will provide policy- and decision-makers with a more sophisticated understanding of Europe's overall international cooperation framework.

The Commission will be responsible for communicating the conclusions of all the above Horizon 2020 evaluations. These communications and observations, with input from independent experts, will be brought to the attention of the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions.

Dissemination of Outcomes

Horizon 2020 is expected to foster a robust, evidence-based system to support future Union policies. This evaluation system will be valorised through appropriate dissemination and reporting. The research and innovation activities funded under Horizon 2020 will be driven by citizen and consumer needs.

Supported by data archives and experts, Horizon 2020 should increase cooperation with Member States and Associated States in disseminating and reporting outcomes. For instance, E-Horizon 2020, an IT platform, will help externalise and extend access to the Union’s research and innovation funding.

Experts for Horizon 2020

To uphold a common set of standards for research and innovation activities, the European Commission is preparing a group of expert advisors drawn from a wide spectrum of research fields. This group will contribute to the Horizon 2020 agenda and, therefore, to future EU funding programmes for research and innovation. The experts of the advisory groups will provide high-quality and timely advice for the preparation of the Horizon 2020 calls for project proposals.

Horizon 2020 funding will continue to be responsive to the external advice of independent experts, specifically from the European Technology Platforms as well as Joint Programming initiatives.

The Commission is calling for expertise in the following areas:

-Future and Emerging Technologies (FET)

-Marie Skłodowska-Curie actions on skills, training and career development

-Research infrastructures (including e-Infrastructures)

-Leadership in enabling and industrial technologies within the following areas:

  • Information and Communication Technologies (ICT)
  • Nanotechnologies
  • Advanced materials
  • Biotechnology
  • Advanced manufacturing and processing
  • Space

-Access to risk finance (debt or equity financing)

-Innovation in small and medium-sized enterprises (SMEs)

-Health, demographic change and wellbeing

-Food security, sustainable agriculture, marine and maritime research and the bio-economy

-Secure, clean and efficient energy (including nuclear energy)

-Smart, green and integrated transport

-Climate action, resource efficiency and raw materials

-Inclusive, innovative and secure societies

ISC Involvement

STAR METRICS

In June 2011, in coordination with the US NSTC’s Science of Science Policy Interagency Group, ISC hosted the Rockefeller Foundation EU/US Science of Science Policy workshop in Bellagio, Italy. The goal of the workshop was to inform the development of EU-US collaboration on common theoretical and empirical infrastructures to describe and assess the outcomes of science investments, in order to inform science policymaking.

The backdrop for this event was the United States STAR METRICS program, “Science and Technology for America’s Reinvestment: Measuring the Effects of Research on Innovation, Competitiveness and Science." STAR METRICS, led by an agency consortium consisting of the NIH, the NSF and OSTP, aims to create a database that combines, in useful fashion, scientific investment data, initially from the NSF and the NIH, with data acquired from voluntarily participating research institutions. The consortium will use the data to generate various measures of the social, economic and scientific value of the analyzed investments, using some of the metrics the NSF and the NIH established during the prior STAR pilot program.
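
The mechanics of such a linkage can be sketched in a few lines: join award-level funding records to institution-reported personnel effort and aggregate jobs supported per award. The Python sketch below is a hypothetical simplification; all identifiers, field names and figures are invented and do not reflect the actual STAR METRICS schema.

```python
from collections import defaultdict

# Hypothetical sketch of the STAR METRICS idea: link agency award records
# with institution-reported payroll data to estimate jobs (FTEs) supported
# per award. All identifiers and figures are invented.

awards = {
    "NSF-001": {"agency": "NSF", "amount_usd": 900_000},
    "NIH-042": {"agency": "NIH", "amount_usd": 1_400_000},
}

# Institution-reported records: the fraction of each person's effort
# charged to each award (the voluntary contribution from institutions).
payroll = [
    {"award": "NSF-001", "person": "p1", "fte_fraction": 0.50},
    {"award": "NSF-001", "person": "p2", "fte_fraction": 0.25},
    {"award": "NIH-042", "person": "p2", "fte_fraction": 0.75},
    {"award": "NIH-042", "person": "p3", "fte_fraction": 1.00},
]

# Link the two sources: aggregate FTEs supported by each award.
fte_by_award = defaultdict(float)
for row in payroll:
    fte_by_award[row["award"]] += row["fte_fraction"]

for award_id, info in awards.items():
    print(f"{award_id} ({info['agency']}): "
          f"{fte_by_award[award_id]:.2f} FTEs supported")
```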

Global Challenges

The EU Science: Global Challenges, Global Collaboration (ES:GC2) conference took place from 4 to 8 March 2013 at the European Parliament in Brussels, Belgium. Attendees included members of industry, parliamentarians, policymakers and academics. A major theme of the event was new and innovative methods of using scientific information to inform the highest levels of policy decision-making.

A standardised stream of communication between the policy, industry, invention and academic sectors has yet to materialise. On Thursday 7 March 2013, in conjunction with the ES:GC2 conference, Ms. Marlit Hayslett conducted a workshop on Communicating Science – How to Translate Science and Technical Issues for Policymakers. Ms. Hayslett is the founding director of the Georgia Tech Research Institute’s Office of Policy Analysis and Research (OPAR) in Atlanta, Georgia (USA), which is recognised as the only applied research programme focused on state-level science, technology and innovation (STI) policy. Through her work at OPAR, Ms. Hayslett was able to enlighten the audience on the philosophy of communicating science and on the tools and vocabulary required to translate academic and industry-level research in emerging science and technology for state and federal-level policymakers.

The second half of the session was complemented by a panel discussion led by Ms. Agnes Kontar, Parliamentary Assistant to Bill Newton Dunn, MEP and Ms. Hanna Stenegren, Policy Adviser to Kent Johansson, MEP.

The rest of the five-day conference continued to explore science policy evaluation, with a specific focus on the adoption of the Horizon 2020 Framework Programme for Research and Innovation (2014-2020).

Measuring Innovation in Europe

Establishing a uniform set of evaluation and monitoring standards is crucial to accomplishing the Europe 2020 initiatives. Since 2008, under the current programme’s (FP7) mandate, the European Commission has worked to develop a strengthened science base for policymaking through the realisation of the European Research Area (ERA). In addition to the monitoring activities of its JRC, the Monitoring Research and Policy Activities of Science in Society (MASIS) initiative has been put in place to forge structural links between scientists, policymakers and the Union as a whole in the implementation of the ERA. This interaction between agents of EU RTD development is founded on the exchange of data, reports and evaluations.

Efforts to publicly address the absence of a standard measurement and vocabulary for innovation activities began in the 1990s, when Eurostat, the European statistical office, began to manage the EU Community Innovation Survey. Since then, the Oslo Manual, now in its third edition, has served as a common dictionary providing operational definitions of innovation and of the actors and activities involved, as well as guidelines for the implementation of the Union's innovation surveys.

Comparisons between countries with regard to innovation are made on the basis of the most recent statistics and indicators from Eurostat. According to the European Parliament's Science and Technology Options Assessment (STOA), in addition to ERAWATCH, the EU has a wide spectrum of national and EU-level studies and surveys at its disposal.

One such instrument, employed by the Commission, is the Innovation Union Scoreboard (IUS). Formerly known as the European Innovation Scoreboard (EIS) under the Lisbon Strategy, the IUS was introduced in its current form following the adoption of the Innovation Union Communication in October 2010.

The IUS is used to conduct annual comparative assessments of the innovation performance of Member States. The scoreboard distinguishes “enablers” (the drivers of innovation performance), “firm activities” (R&D and non-R&D investments) and “outputs” (the effects of firms’ activities). Headline indicators include R&D investment targets and a new “innovation indicator” requested by the Council. Under the IUS, countries are grouped into innovation leaders, innovation followers and moderate innovators. The annual assessment also takes into consideration the rising RTD activities of the BRIC countries, to gauge the EU’s industrial leadership in science and technology.
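
To illustrate the arithmetic behind a composite scoreboard of this kind, the Python sketch below min-max normalises a few invented indicators, averages them into a summary index and bins countries into the three performance groups. The indicator names, data, weights and thresholds are assumptions for illustration only, not the IUS methodology.

```python
# Toy sketch of a composite innovation index: min-max normalise each
# indicator across countries, average into a summary score, then bin
# countries into performance groups. Data and thresholds are invented.

countries = {
    "Alpha": {"rd_intensity": 3.1, "patents_per_capita": 110, "tertiary_rate": 0.45},
    "Beta":  {"rd_intensity": 1.8, "patents_per_capita": 40,  "tertiary_rate": 0.38},
    "Gamma": {"rd_intensity": 0.9, "patents_per_capita": 12,  "tertiary_rate": 0.25},
}

indicators = ["rd_intensity", "patents_per_capita", "tertiary_rate"]

# Min-max bounds for each indicator across all countries.
bounds = {
    ind: (min(c[ind] for c in countries.values()),
          max(c[ind] for c in countries.values()))
    for ind in indicators
}

def summary_index(values: dict) -> float:
    """Unweighted average of min-max normalised indicator scores."""
    scores = [
        (values[ind] - lo) / (hi - lo)
        for ind, (lo, hi) in bounds.items()
    ]
    return sum(scores) / len(scores)

for name, vals in countries.items():
    s = summary_index(vals)
    group = ("innovation leader" if s >= 0.67
             else "innovation follower" if s >= 0.33
             else "moderate innovator")
    print(f"{name}: index={s:.2f} -> {group}")
```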

Other instruments used to track and compare innovation at the EU level include INNO-Metrics, the Regional Innovation Scoreboard and the Innovation TrendChart.