In partnership with CultureHive, the AMA's knowledge hub

People and processes: who’s behind evaluations of UK Cities and Capitals of Culture?

By Emma McDowell, Centre for Cultural Value

At the start of any evaluation project it’s important to recognise the skills and teams you will need. In this rapid research review, we take a close look at who is involved in the complex, large-scale evaluations of UK Cities and Capitals of Culture.

How were teams structured, who did they collaborate with and how did they approach the challenge of objectivity whilst balancing the needs of multiple stakeholders?

Musician wearing a coat plays trumpet on a pebbly beach with the Humber Bridge in the background.
The Height of the Reeds, Opera North (Hull 2017) Photo: Tom Arber

Introduction

Since 1990, the UK has hosted two European Capitals of Culture and three UK Cities of Culture. The evaluation activity from these events contains valuable knowledge to help us explore who conducts evaluation, the principles that underpin it, and how this can inform future policy and practice.

Evaluation teams for any large, city-wide event inevitably need to respond to the multiple needs and expectations of a range of stakeholders. The breadth of activity requires a broad range of evaluation and research skills and experience. Ideally, evaluation activity will build on and develop existing structures of institutions, organisations and practitioners across the city. This can then become embedded in ways of working long after the events are over.

Of course, it is difficult to directly compare the Cities and Capitals of Culture programmes and their evaluations. They occurred at different times, within different socio-economic contexts, and with varying levels of resource. Nevertheless, given their scale and breadth of ambition, they can offer rich learning on the specific opportunities and challenges for those considering evaluation projects of all shapes and sizes.

This resource considers how evaluation activity sits within the wider governance and delivery of the three UK Cities and two European Capitals of Culture programmes that took place in UK cities between 1990 and 2021. We refer to these programmes jointly as ‘UK Cities and Capitals of Culture’.

We explore who makes up the evaluation teams, and how they have used different ways of working to balance the needs of all those who have an interest in the outcomes.

We examine evaluation reports from the following events:

  • Glasgow European Capital of Culture, 1990
  • Liverpool European Capital of Culture, 2008
  • Derry-Londonderry UK City of Culture, 2013
  • Hull UK City of Culture, 2017
  • Coventry UK City of Culture, 2021
Map of the UK in block blue with white text. The following places are pinpointed on the map with a small white square: Glasgow European Capital of Culture, 1990; Liverpool European Capital of Culture, 2008; Derry-Londonderry UK City of Culture, 2013; Hull UK City of Culture, 2017; Coventry UK City of Culture, 2021.
Map of UK Cities and Capitals of Culture 1990-2021

Developing the capacity of delivery and evaluation teams is crucial, but this resource also explores how evaluations can use additional expertise from independent researchers, boards and steering groups. This can help to make sure that evaluations adhere to the Evaluation Principles of ‘many-voiced’ and ‘aware’.

You may wish to read this resource alongside our How are UK Cities and Capitals of Culture evaluated? resource, where we also draw out learning for practitioners, researchers and policymakers. The learning from evaluations of these cultural mega-events is transferable not just to those working on other large-scale or place-based activity, but to those working on evaluation projects of all shapes and sizes.

You can find key info about Cities and Capitals of Culture programmes on our FAQs web page.

Evaluation Principles
You can also find out how evaluations of UK Cities and Capitals of Culture can be viewed through the lens of the Evaluation Principles, a set of co-created principles to inform and guide evaluation practice. In this resource, we consider how the people-centred principle ‘many-voiced’ relates to the reports we looked at, and how it might be used in practice.

Key insights

We have reviewed the main reports and outputs relating to evaluation of UK Cities and Capitals of Culture from 1990 to 2021, and can draw out the following learning:

  • The most common delivery model for the five UK Cities and Capitals of Culture explored in this resource was to set up a separate company or trust involving local institutions, overseen by the local authority or council. This governance model can help to avoid administrative delays for delivery, including evaluation, in the short term. However, the creation of temporary organisations – often seen as sitting outside the city’s cultural ecosystem – can bring its own challenges. For example, it can put a strain on these temporary organisations to establish, develop and manage multiple partnerships across a short space of time.
  • Evaluations of UK Cities and Capitals of Culture are typically written by multiple authors (co-authors) and commissioned by groups that have an interest in the project in some way (stakeholders). They are predominantly impact evaluations, aiming to capture the performance of particular programmes and activities against specific performance indicators. As such, executive summaries focus on positive narratives of success, attention-grabbing headlines and how programmes have achieved the strategic aims of funders.
  • Evaluations of UK Cities and Capitals of Culture are often led by universities within the cities, which can help attract additional funding for evaluation and research programmes. It also enables organisations to bring together evaluation teams to draw on existing knowledge and partnerships with local communities that universities may already have. Employing a collaborative approach including advisory groups and independent researchers and practitioners can further help to enable a level of objectivity and rigour.
  • It is important to have multiple skills and experience within evaluation teams. Even with dedicated evaluation teams, many of the cities brought in specialist, independent consultants to focus on specific areas and methodologies, such as economic impact. Other arts and culture projects may similarly benefit from additional evaluation resource, which could be built into funding bids. If following this model, it is essential to consider how the different strands of evaluation will be brought together to capture the overall impact, and by whom.
  • Overall, there are many challenges to evaluations taking place in such a broad arena with so many stakeholders, as a result of increased expectations on evaluation, impact reporting and learning. Evaluators need to take into account the needs of all these audiences, while also balancing the strategic objectives of their funders. However, a priority audience for evaluations such as those explored in this resource is the key people who have been involved in the work, such as local councillors and policymakers, cultural sector organisations and practitioners, and the delivery team, whose voices are most effectively captured in process evaluations.

Governance models

“It’s a mindset thing. Embedding evaluation at the start challenges stakeholders to think hard – and think once more – about what they are doing, to what or who, and why.”

Reflections on Evaluating the UK City of Culture animation, Coventry 2021 monitoring and evaluation team

As part of the bidding process for UK Cities and Capitals of Culture, partnerships are set up between local and community organisations and local authorities. While local authorities do not have to be the lead organiser in this partnership, they often play a leading role in governance.

Establishing a separate company

A separate company (sometimes known as a culture company or trust) is usually set up with responsibility for delivery. This company reports directly to a board consisting of representatives from local institutions and stakeholders. This is the most common delivery model (including beyond the UK context, for European Capitals of Culture more generally), as it allows for greater efficiency and bypasses slower local government processes.

For Glasgow 1990, the City of Glasgow, in partnership with Strathclyde Regional Council and the Scottish Development Agency, directly commissioned independent consultant John Myerscough to carry out an evaluation. Since the turn of the century, monitoring and evaluation teams have typically been set up by culture companies or trusts to carry out activity. Glasgow was the first European Capital of Culture to commission a 10-year legacy evaluation programme, led by the Centre for Cultural Policy Research at the University of Glasgow, which informed the evaluation approach undertaken by Liverpool 2008, below.

For Liverpool 2008, Liverpool City Council commissioned the Impacts ‘08 research programme from 2005-2010, which was delivered by a partnership between the University of Liverpool and Liverpool John Moores University, and overseen by a cultural research steering group. This steering group included individuals from Liverpool City Council, Liverpool Culture Company and Northwest Culture Observatory, Arts Council England, and the Northwest Regional Development Agency. A longitudinal research programme called Impacts 18 was later established in 2010. It was commissioned to build on the Impacts 08 work, and was run by the Institute of Cultural Capital at the University of Liverpool and Liverpool John Moores University.

For Derry 2013, the first UK City of Culture, the monitoring report explains that “a small monitoring group was established to support the process” including representatives from Ilex, Derry City Council, the Culture Company, Visit Derry, the Northern Ireland Tourist Board and marketing agency Velocity Worldwide (Monitoring Report on City of Culture 2013, p.6).

For Hull 2017, Hull’s Culture Company (renamed Absolutely Cultured in 2019) was set up in 2015 as a charity and governed by a board of trustees. The partnership between the Culture Company and the Council is described in their preliminary outcomes evaluation as ‘close and successful’ as it “fulfilled the principle of avoiding political interference in cultural activities, which was a pillar of the governance model designed in the bid” (Cultural Transformations: The Impact of Hull UK City of Culture 2017: Preliminary Outcomes Evaluation, p.190). While their monitoring and evaluation team was made up of individuals from the Culture Company, evaluation activity was primarily delivered by the Culture, Place and Policy Institute at the University of Hull. This work was overseen by an evaluation steering group (also known as the Hull City of Culture Evaluation and Research Group), which was chaired by the University of Hull, included representatives from a number of funders and stakeholders, and was responsible for providing guidance and direction to the evaluation programme.

Coventry 2021 followed a similar model to Hull. Coventry City of Culture Trust was set up as the accountable body to the Department for Culture, Media and Sport (DCMS) and managed the core monitoring and evaluation group. This was supported by a technical reference group which reviewed, commented and advised on methods. The final evaluation report describes the University of Warwick, Coventry University and Coventry City Council as providing “anchor institution oversight” (Impact Evaluation Report, p.48).

This report states that Coventry City of Culture Trust was responsible for “capturing and releasing monitoring data, commissioning external contractors, reporting to funders, sign-off on interim and final reports, submission of final report” (p.48). The core monitoring and evaluation group was responsible for evaluation activity, with oversight from the University of Warwick, Coventry University and Coventry City Council.

It is important to note that these programmes are delivered by organisations coming together and building on existing teams in institutions. The partnerships and practices may predate the City or Capital of Culture activity, or continue long after it. But the structures themselves are temporary, even if they change form (as is the case with the Hull company Absolutely Cultured), or are dissolved early (as was the case with Coventry 2021 City of Culture Trust, which went into administration in February 2023).

Challenges with delivery models

The most common delivery model for UK Cities and Capitals of Culture is to set up a separate company, or trust, overseen by the local council. This governance model can help to avoid administrative delays for delivery, including evaluation, in the short term. While the delivery model of establishing an independent company seems to be a positive and effective approach, the creation of temporary organisations – often seen as sitting outside the city’s cultural ecosystem – can bring its own challenges.

Listen to audio

In this audio clip, hear from Jonothan Neelands and Mark Scott, from Warwick Business School and the Coventry City of Culture 2021 evaluation team. They discuss the challenge of positioning evaluations effectively, and getting narratives right at a local level.

Download transcript

Structure of evaluation teams

Due to the scale of the UK Cities and Capitals of Culture programmes, evaluations typically involve the work of broad teams of researchers, practitioners and consultants, who work closely and alongside the delivery teams in the culture companies or trusts.

These evaluations often rely on the delivery teams managing large amounts of data and information from various activities. Evaluation teams are then responsible for reporting on data from a wide range of sources, including:

  • councils
  • tourist boards
  • public agencies
  • organisations in the business communities
  • cultural institutions; and
  • population surveys.

Working with universities

Evaluations of later UK Cities and Capitals of Culture were often led by research teams based at the universities within the city.

While the University of Glasgow was not involved in the evaluation of Glasgow 1990, the Centre for Cultural Policy Research (CCPR) at the university led the research exploring its 10-year legacy.

For Liverpool 2008, the Impacts 08 team, based at the University of Liverpool and Liverpool John Moores University, included a director, a programme manager and a programme assistant (all full-time posts), plus over 20 project collaborators across universities, consultancies and data providers. The Impacts 08 team produced all main evaluation outputs, including the main evaluation report, in-progress reports and many of the themed reports.

The main evaluation reports for Hull 2017 were researched and written by the Culture, Place and Policy Institute (CPPI) at the University of Hull, although the final report noted the “collective contribution” of many individuals and teams (p.97).

The core evaluation team for Coventry 2021’s evaluation included researchers from the University of Warwick, Coventry University and Warwick Business School.

The exceptions to this model were Glasgow 1990, and Derry-Londonderry 2013. Instead of university partnerships, they commissioned teams of evaluators from independent consultancies and research agencies, who authored the final evaluation reports directly.

Learning point

One of the strengths of the university model is that teams can attract additional sources of funding to bolster resources for research and evaluation activities.

For example, core funding for the activities of Liverpool’s Impacts 08 programme was complemented by parallel funding streams for additional activity from research councils, arts councils, funding for PhDs and European Commission/Cultural Policy Grouping funding. Coventry attracted AHRC funding for its work. While not part of a formal evaluation partnership with Derry 2013, Queen’s University Belfast’s ‘From Plantation to Peace: Derry/Londonderry as the UK’s first City of Culture’ project was funded by the Leverhulme Trust.

Outsourcing and working with specialists

All of the evaluation teams outsourced particular streams of activity to specialist teams that required particular skill sets and experience to deliver. Economic impact was an area that was outsourced to independent consultancy firms in all cases, with Coventry also outsourcing a social impact assessment, as well as some other independent analysis and data validation.

While the reports and outputs by these specialist teams were sometimes published, it was often the core monitoring and evaluation teams who retained editorial control, and who were responsible for managing these relationships with external contractors. Some reports also cited existing research from unpublished reports (for example, internal council reports or specific project evaluations).

These core evaluation programmes were also complemented by research by specialists in other universities, especially where teams came from a consortium of universities (for example in Liverpool and Coventry). Although not a formal output from the evaluation activity, Queen’s University Belfast’s research project ‘From Plantation to Peace’ published several articles on Derry 2013 in academic journals, which were cited in the final evaluation report. For Coventry 2021’s evaluation, a series of focus studies were also produced by different university teams.

Learning point

Evaluations of large, place-based events and programmes of activity rely on the management of evaluation teams with specialist expertise and skills, and large amounts of different types of data. Partnerships with universities can help teams to access additional funding and resources to develop complementary research and evaluation activity.

Working with teams from local universities can be useful for cultural organisations who are looking to do similar evaluation projects, even on a smaller scale. It enables organisations to draw on the existing knowledge and partnerships with local communities that universities may already have. It can also bring some level of objectivity to the evaluation. Outsourcing specific expertise might also be necessary to capture social or economic impact, or to work with particular groups or communities.

However, when outsourcing aspects of evaluation there is a risk that the overall narrative will become fragmented. It is therefore important to consider how different parts of the evaluation will cross-reference each other to capture a holistic overview of impact.

 

Listen to audio

In this audio clip, Centre for Cultural Value Associate Director Dr Beatriz Garcia reflects on the importance of balancing evaluation and research within the Liverpool European Capital of Culture model, and how evaluation teams need to include people with a range of skills.

Download transcript

Types of evaluation

There are different types of evaluation processes typically present in large-scale, mixed methods evaluations such as those used by the UK Capitals and Cities of Culture evaluations explored in this resource. These include:

  • monitoring activity, which is used to assess how project activity is performing, or has performed, against specific key performance indicators (KPIs) (e.g. ticket sale targets, target demographics);
  • process evaluations (sometimes called ‘formative evaluations’), which occur during the timeline of the project with the aim of feeding back insight to delivery teams;
  • outcome or impact evaluations (sometimes called ‘summative evaluations’), which typically take place at the end of a project or programme to provide insight into how and whether the activity has met its main objectives. These evaluations typically focus on the impacts of activity on participants and outcomes for stakeholders, such as funders.

Evaluations of UK Cities and Capitals of Culture are typically written by multiple authors (co-authors) and commissioned by groups that have an interest in the project in some way (stakeholders). Due to the flow of funding and the fact that evaluations are commissioned by those responsible for delivery of activity, they are not wholly ‘independent’ evaluations: the aims of the evaluations have to respond to, and be led by, the strategic aims and objectives set by the funders and wider stakeholders.

For example, Hull 2017 set as its first ‘arts and culture’ objective ‘to produce a high-quality programme of arts, culture and heritage, helping to position the UK City of Culture as the quadrennial UK cultural festival’ (Preliminary Outcomes Evaluation, p.7). This is a clear example of how the aims of an individual City of Culture’s activity are directly informed by the strategic goals of the national City of Culture programme.

In the Derry-Londonderry 2013 final report, the city council (as authors) noted their potential conflict of interest in undertaking an evaluation. The report stated that it was “not feasible within the scope of this evaluation to interview stakeholders and particularly board members and staff of the Culture Company which has now been wound up” but instead aimed to provide a “subjective reflection on the project management arrangements” (Final Report, p.67). The report makes a number of reflections on learning, for instance on project management arrangements, challenges around ‘delivery of cultural programme, securing funding and sponsorship for cultural programme, financial management’ and ‘monitoring arrangements’ (pp.67-68). While these challenges are acknowledged, the reasons behind them are neither given nor explored in any great depth.

Measuring impact or learning for the future?

Teams also needed to strike a balance between producing insight into impacts and learning about processes. For Hull, the initial ‘preliminary outcomes’ evaluation published in March 2018 focused on outcomes and impacts, as did many of the other evaluation reports of the UK Capitals and Cities of Culture. However, in Hull’s final report published in November 2019, and then revisited in April 2021, the aim was to provide additional process evaluation research, “to identify key learning points from different aspects of the implementation of the Hull UK City of Culture 2017 project and to provide recommendations for future Cities of Culture (CoCs) and for the legacy of Hull UK City of Culture 2017” (Main Evaluation Findings and Reflections, p.8).

The Coventry team made it clear in their final report that “the responsibility for delivering an evaluation was held by the Trust and the evaluation presented here has a primary focus on the impacts of Trust activity. The evaluation is an impact study of the progress made towards agreed expected outcomes and impacts for the year. It is not a process evaluation of operational effectiveness” (p.7). However, the final report does contain some insights into the process and delivery of the programme.

Ultimately, evaluations of cultural mega-events are predominantly impact evaluations. They aim to capture the value and impact of particular programmes and activities against specific performance indicators and this results in occasional ‘editing out’ of findings or insight that might be seen as irrelevant or inconvenient to the primary narratives of success. Many executive summaries or key insights from evaluations focus on the attention-grabbing headlines, in an attempt to push the message of success. In addition, given the ambition of the delivery of the UK Capitals and Cities of Culture, and the competing demands on limited resource, it is not always possible to bring in additional detail about the context.

Transparency about who is evaluating

Being clear about authorship and how reports and other outputs are created can be just as important as explaining whose experiences are captured in the activity. When researchers are able to acknowledge and articulate how their own lived experience, professional role and personal position affects the contexts in which they work, the findings and insights from that work are more valid, easily accessible and relevant to those hoping to apply their findings to other contexts.

All of the evaluation reports listed details of the authors at the very least, with their institutional affiliations where appropriate. Providing some coverage of the processes of evaluation, such as the methods used and by whom, offered a certain level of transparency about whose voices were included and how they informed the findings. Coventry 2021 went further: it not only provided details of the methodologies and authorship of all outputs but also produced additional outputs relating to the evaluation team’s own experiences of the process.

Learning point

Providing this type of context is a tricky balancing act. Too much irrelevant methodological detail can be off-putting and might dilute the effective communication of the message or learning. When authoring evaluation reports that cover broad topics and areas of impact, it might be as simple as being specific about what you are aiming to cover, and what you are not able to cover.

Where possible, it is crucial to answer the main questions that were driving your evaluation activity in the first place. This provides the reader with a good understanding of whose questions are being prioritised within evaluations, and how they have driven the evaluation processes to arrive at the answers.

By pulling together the evidence into a response to questions about whether or not objectives have been met, there is an opportunity to also pinpoint limitations to data and provide additional context to findings (e.g. see ‘Evaluation Questions: Mission Accomplished’ in the Coventry City of Culture 2021 Impact Evaluation Report, pp.158-160). This is an effective way of taking into account broader societal and environmental factors, as well as providing the reader with a direct and clear positionality on certain important topics.

 

Independence of evaluations

Independence in evaluation is a complex issue. An evaluation that is independent can be seen as adding objectivity and rigour. It is often seen as producing more ‘valid’ results, particularly if it uses methodologies such as triangulation or randomised controlled trials. However, being closer to the activity in some way can also be beneficial, because we understand the professional and community contexts in which learning and change can occur and our evaluations can take a more people-centred approach.

Ultimately, it is important that our evaluation outputs clearly communicate our aims – what we hope to gain from the evaluations – as well as interrogating our processes.

Read about how UK Cities and Capitals of Culture have done this in more detail in the How are Cities of Culture evaluated? resource.

Listen to audio

In this audio clip, Mark Scott, Research Fellow at Warwick Business School, and Jonothan Neelands, Professor at Warwick Business School, consider the challenge of conducting an impact evaluation in line with the expectations of the wider city.

Download transcript

 

Audiences for evaluation

“‘Partnership’ is a concept that in many cases masks the fact that some collaborations are more valued than – and are thus prioritised over – others.”

Hull City of Culture 2017 Main Evaluation Findings and Reflections

The governance and delivery models of UK Cities and Capitals of Culture directly inform the aims, activity and output of their evaluations. This can also be true for evaluations of smaller-scale activity within the arts, cultural and heritage sectors. Like many organisations and projects that receive public funding, UK Cities and European Capitals of Culture programmes are often expected to deliver impacts across cultural, social, and economic policy areas. Furthermore, these impacts are often expected to last beyond the programme, as legacy activity.

Derry-Londonderry 2013’s final evaluation report, or post-project evaluation, is structured primarily to detail the extent to which the City of Culture programme delivered on the strategies of their key stakeholders or what they termed ‘the business case’. The stakeholders are listed as “Programme for Government (PfG); Department for Culture, Arts & Leisure (DCAL) Aims & Objectives; Office of First Minister and Deputy First Minister (OFMDFM); Department for Social Development (DSD); Derry-Londonderry Urban Regeneration; Culture Company Aims & Objectives” (Final Report, p.18).

Coventry also made this clear in their report, citing their evaluation’s “alignment with HM Treasury’s Green Book guidance on appraisal and evaluation and HM Treasury’s Magenta Book central government guidance on evaluation” (Impact Evaluation Report, p.41).

Teams also found they needed to provide different evaluation insights for different funders. Additional streams of income need to be raised alongside the core funding from a City or Capital of Culture award. These funders have their own stipulations, and this often results in additional evaluation reports. For example, Hull 2017 produced two specific reports for the Heritage Lottery Fund, and Coventry 2021 produced a number of reports for the environmental, sustainable and social aspects of their activity: Green Futures Programme report (released January 2023); Love Coventry Programme report (February 2023); and Caring City Programme report (March 2023).

Evaluations for funders

Funders are a key audience for evaluation activity, particularly when demonstrating how project or programme activity has contributed to their strategic goals, and provided ‘value for money’ or ‘return on investment’. Recent research from the Centre for Cultural Value found that 78% of those surveyed felt that funders and policymakers had ‘the most’ or ‘a lot’ of influence on evaluation aims. Explaining activity to funders and policymakers was seen as a top priority for the sector (The role of evaluation and research in arts, cultural and heritage organisations, p.4).

However, evaluation reports that are publicly available often need to be accessible to, and understood by, multiple stakeholder groups, including:

  • local communities and institutions
  • practitioners and artists working within the cultural sector
  • local and national policymakers
  • participants and audiences; and
  • members of the public.

The information and insight contained within evaluation reports can also be a great source of learning and insight for the wider cultural sector, policymakers and researchers.

Due to the multiple stakeholders for these Cities and Capitals of Culture, the evaluations of the later events in particular are authored by a number of different researchers and stakeholders. The content is dominated by findings that address how cultural programmes meet the strategic aims of the funders and commissioners.

Who produces final reports?

Many different groups of people produce the reports and other outputs. Details of individuals within these groups are usually provided, but the authoring and editing credits are often attributed to the organisation or group – for example, the culture company, the council or the monitoring and evaluation team. This can create an impression that the authors have agreed on all views or opinions, when in fact a view may have been expressed by only a small and select group of people. Many of the reports do reproduce quotes or excerpts from interviews or focus group activity, but it is often difficult to assess the context in which they were produced. Most of the time they are used to provide some additional detail to existing quantitative measures or statements of impacts.

Listen to audio

In this audio clip, Centre for Cultural Value Associate Director Dr Beatriz Garcia explains how evaluations need to balance exploring different narratives of values in themed reports and outputs, in relation to the Liverpool European Capital of Culture 2008.


Evaluation Principles in practice: Many-voiced

The Evaluation Principles are a set of co-created principles that can be used to inform and guide evaluation practice. The many-voiced principle directly relates to the examples we have looked at so far in relation to who evaluates UK Cities and Capitals of Culture.

“People-centred evaluation needs to be empathetic, many-voiced and socially engaged. It is hard for large-scale evaluations to lean into individuals’ specific needs and interests, and usually impossible for them to co-create explorations of cultural value with local and sometimes marginalised groups. However, smaller-scale and more localised evaluation can explore the impacts of cultural policy effectively with local communities. In turn, this promotes direct representation and citizen engagement.”

Ben Walmsley and Anna Kime, Putting people and communities at the heart of cultural policymaking

Taking a many-voiced approach to evaluation ensures that evaluation activity considers a diversity of viewpoints and experiences, which leads to better insights.

You might find it useful to think about:

  • Who are the dominant voices in the UK Capitals and Cities of Culture evaluations? How does this inform what is treated as valuable and worthy of inclusion?
  • How do the evaluations include a broader range of voices and perspectives?
  • How are people’s voices and perspectives represented in reporting? Are outliers and exceptions included? Are people’s own words and source data made available?

Summary

Evaluations of the UK Cities and Capitals of Culture explored in this resource cannot be considered completely independent: many of those who evaluate activity are also beneficiaries or stakeholders of that activity, and are directly commissioned by those responsible for delivering it. Instead, evaluation activity is typically characterised by co-authorship across a number of different stakeholders, involving varied teams of researchers, practitioners and consultants, with specialist evaluation activity outsourced to independent agencies.

Many of the more recent UK Capitals and Cities of Culture commissioned interdisciplinary research teams from local universities to manage the evaluation activity, which often brought in additional resource and funding. All of the evaluation teams outsourced specific streams of activity that required specialist skills and experience.

This resource has explored the ‘who’ of evaluations, such as the teams, structures, aims and audiences behind the UK Cities and Capitals of Culture evaluation activity. Take a look at How are UK Cities and Capitals of Culture evaluated? for more detail on the methodologies and methods used.

Glossary

Anchor institution A large organisation which has a key role to play in the local area and the local economy, for example a council, college, university, hospital or large business.
Culture Company (see also Trust) A separate company or group which is set up to be responsible, and accountable, for delivering a City or Capital of Culture.
European Capitals of Culture An initiative developed in 1985 with support from the European Union (EU). This resource refers only to those hosted by UK cities: Glasgow 1990 and Liverpool 2008. After Brexit, the UK was no longer eligible for this competition. Read more on the European Capitals of Culture website
Evaluation activity The tasks that are required in order to evaluate, for example designing an approach, conducting interviews, analysing data, or running focus groups.
Evaluation output How evaluation findings are presented and shared, for example, the production of reports.
Formative evaluation An evaluation that takes place before or during the timeline of a project with the aim of feeding back insight to delivery teams and/or into delivery itself.
Governance A term that refers to the accountable body providing oversight of the company or organisation that is responsible for delivering the project or activity; for example, a board of trustees of a registered charity.
Impact/outcomes evaluation These typically take place at the end of a project or programme to provide insight into how and whether the activity has met its main objectives. They typically focus on the impacts of activity on participants and the outcomes for stakeholders, such as funders.
Key performance indicator (KPI) These are metrics that are used to track progress towards a specific objective. They tend to be quantifiable to allow for easy measurement across time, and are often agreed as a requirement for funding in the early stages.
Methodology The overall approach researchers take to study something. It includes the plan for carrying out the research and the methods used (for example surveys, interviews or analysing data).
Mega-events A term that is often used to describe special events of a very large scale (in terms of budgeting, audiences and participants) that take place infrequently and which often have national or international media reach. The Olympic Games, FIFA World Cup or Universal Expo are classic examples of mega-events. Since 2015, the term ‘cultural mega-event’ has increasingly been used to describe one-off cultural events such as European Capitals of Culture or national cities of culture programmes, such as the UK City of Culture programmes explored in this resource. However, it remains a contested term.
Process evaluation (see also Formative evaluations) These typically occur before or during the timeline of a project with the aim of feeding back insight to delivery teams and/or into delivery itself.
Randomised controlled trials A method usually used in scientific or experimental studies to examine cause-effect relationships between a particular intervention and an outcome, by controlling as many variables as possible.
Stakeholders A broad term to describe groups of people who are invested in the project directly (such as funders), or who will be influenced or impacted by it in some way (such as local community groups).
Summative evaluation See Impact/Outcomes evaluations
Triangulation The use of more than one type of method to study the same phenomenon, for example combining survey data with interviews (methodological triangulation).
Trust See Culture Company
UK Cities of Culture Category referring to the cities which were successful in the UK-wide City of Culture programme run by the Department for Culture, Media and Sport (DCMS). Read more on the City of Culture pages of gov.uk

References

We have reviewed a range of evaluation reports from UK Cities and Capitals of Culture to inform this resource. Take a look at our Cities of Culture evaluations: quick guide for a summary of the reports that are available.

All links are correct at time of publishing. If you spot a broken link, please let us know.



You may also like


Photo of a boat on a body of water with fireworks behind it, at night. People in silhouette watching from the edge of the water.

How are UK Cities and Capitals of Culture evaluated?

In this resource we look at how UK Cities and Capitals of Culture have approached their evaluations. Which methods and frameworks did they use, what was successful and where were the challenges? While the focus is on large-scale events, this resource will be helpful for anyone who has an interest in measuring the impact of cultural projects, big and small.

 
A person wearing headphones and holding up a mobile phone, in an ornate building. They are listening to an audio tour.

UK Cities and Capitals of Culture evaluation reports: a quick guide

Are you interested in the evaluations of large-scale projects, and want to delve into the detail of specific events or topics?

This quick reference guide signposts the key evaluation reports from recent UK Cities and Capitals of Culture. It will be of interest for academics, researchers and those exploring the evaluations of place-based projects in more depth.

 
Lit up castle with crowds

Sharing learning: Cauldrons and Furnaces – Managing multiple agendas and ownership in large-scale projects

Freelance project director Clare Williams reflects on managing multiple agendas and questions of ownership in complex, large-scale projects such as Cauldrons and Furnaces / Crochan a Ffwrnais (part of the Wales-wide project, ‘Power of the Flame’).

 
Published: 2024


Creative Commons Licence Except where noted, and excluding company and organisation logos, this work is shared under a Creative Commons Attribution 4.0 (CC BY 4.0) Licence





 
 


The Evaluation Learning Space is supported by the Esmée Fairbairn Foundation and led by the Centre for Cultural Value in partnership with CultureHive, the Arts Marketing Association's knowledge hub.



Interested in evaluation? Join the Cultural Evaluation Network group on LinkedIn.