In partnership with CultureHive, the AMA's knowledge hub

How are UK Cities and Capitals of Culture evaluated?

By Emma McDowell, Centre for Cultural Value

In this resource we look at how UK Cities and Capitals of Culture have approached their evaluations. Which methods and frameworks did they use, what was successful and where were the challenges? While the focus is on large-scale events, this resource will be helpful for anyone who has an interest in measuring the impact of cultural projects, big and small.

Photo of a boat on a body of water with fireworks behind it, at night. People in silhouette watching from the edge of the water.
Image: Derry-Londonderry 2013 (Derry City & Strabane District Council)


Introduction

Evaluating a large-scale cultural project such as a City or Capital of Culture is not straightforward. There is a wide range of activities to evaluate, multiple stakeholders to involve and consult, and the evaluation is often expected to demonstrate numerous and often highly ambitious outcomes and impacts.

Although cultural mega-events may be unusual in terms of scale and budget, their size and complexity offer rich learning and can provide a useful starting point for exploring how evaluations of place-based programmes can work in practice. This investigation can also help us think through the purpose of evaluation, what principles should underpin it, and whether the focus is on meeting funder or stakeholder requirements or embedding a culture of learning.

How then have the evaluation teams for these large-scale and mega-events approached the task? Which methods and models have they used and why? And what can we learn from these programmes when planning our own evaluations?

This resource uncovers key insights from the evaluation outputs of the following three UK Cities and two European Capitals of Culture programmes that took place in UK cities between 1990 and 2021, referred to throughout as ‘UK Cities and Capitals of Culture’:

  • Glasgow European Capital of Culture, 1990
  • Liverpool European Capital of Culture, 2008
  • Derry-Londonderry UK City of Culture, 2013
  • Hull UK City of Culture, 2017
  • Coventry UK City of Culture, 2021
Map of the UK in block blue with white text. The following places are pinpointed on the map with a small white square: Glasgow European Capital of Culture, 1990; Liverpool European Capital of Culture, 2008; Derry-Londonderry UK City of Culture, 2013; Hull UK City of Culture, 2017; Coventry UK City of Culture, 2021.
Map of UK Cities and Capitals of Culture 1990-2021

This resource is not just for those who are focusing on large-scale events such as Cities or Capitals of Culture. Throughout this resource we share insights that can support cultural practitioners, researchers and policymakers to enhance their evaluation practices for projects of all shapes and sizes, from deciding on evaluation approaches and methods to communicating findings effectively with audiences and stakeholders.

By reviewing the evidence, we provide a summary of the evaluation programmes’ duration, which methodologies (research approaches) were used and how the findings were shared.

If you’d like to know more about the Cities and Capitals of Culture programmes, take a look at our Cities of Culture FAQs.

Evaluation Principles

You can also find out how evaluations of UK Cities and Capitals of Culture can be viewed through the lens of the Evaluation Principles, a set of co-created principles to inform and guide evaluation practice. In this resource we consider how the principle ‘rigorous’ relates to the reports we looked at, and how it might be used in practice.

Key insights

We have reviewed the main reports and outputs relating to evaluation of UK Cities and Capitals of Culture from 1990-2021, drawing out the following learning:

  • All evaluation teams identified that establishing a baseline, or a fixed starting point, for data and metrics was essential. There is a constant demand for data during evaluations of this nature, which puts pressure on teams to balance short-term and long-term narratives of value and insight.
  • It is important for funders to build in time and resource for evaluation before, during and after an event programme, to capture a more longitudinal view of impact. It was evident that as financial costs and the impacts of cultural programmes come under greater scrutiny from politicians, the media and the public, there are significant time pressures on evaluation teams to produce reports and findings as soon as possible after the event has finished.
  • Evaluations of large-scale programmes involve producing and circulating large volumes of data, to serve the different needs and agendas of wider teams and stakeholders. When stakeholders acknowledge this, and evaluation teams are able to be transparent about the challenges they face, this results in more effective partnerships. A key part of this work lies in stakeholders supporting evaluators to be flexible to the demands of the process.
  • All the evaluations used a mixed-methods approach, with quantitative and qualitative methods, generating both new data and using existing research to inform findings. This was important for capturing both breadth and depth of impact across a range of different impact areas.
  • Cultural organisations and practitioners who are working on longer projects have the opportunity to pace evaluation activity and benefit from iterative evaluation and learning. Evaluation frameworks such as logic models, theories of change and the Wheel of Change can be effective tools to demonstrate how activities, outputs, outcomes and impacts relate to one another.
  • There are multiple audiences and stakeholders for UK City and Capital of Culture evaluations, who have different vested interests. In this context, evaluation teams needed to plan how best to communicate their findings using a variety of approaches.
  • Creative formats that move beyond the traditional report were particularly successful in telling powerful stories of impact and capturing a strong sense of place, as well as bringing in underrepresented voices. Some examples also showed that creative research methods can be useful throughout the evaluation process, not just at the end of a project. These approaches help people understand their own role in participation, as well as highlight different perspectives, and even disagreements, about what is valuable.

Timelines of activity

Establishing a baseline

“[W]hatever form the evidence takes it must describe an evidenced change from a baseline position to a post-project change.”

Evaluation for change framework, Centre for Cultural Value

The UK Cities of Culture and European Capitals of Culture in the UK were funded year-long programmes of activity, but activity needed to happen before and after these years too. The evaluation teams recognised that to demonstrate impact or change, they needed to first establish a ‘baseline’, or starting point, in order to demonstrate growth, or change, of particular metrics and measures.

For Glasgow 1990, the evaluation was commissioned halfway through the year (in May 1990) which only allowed surveys to take place in the second half of the year (July 1990 – January 1991). However, the evaluation also drew on data from 1989, plus ‘preliminary figures on the first six months of 1990’ to ‘establish baseline attendance and activity data’ (Monitoring report, p.5).

The Liverpool European Capital of Culture 2008 team had the opportunity to design their evaluation activity across a longer period, known as a ‘longitudinal model’. This was because Liverpool had five years of funding to cover “Liverpool’s pre-bid period (2000), through the bidding and nomination stages (2002-2003), event lead-up (2004-2007), the event year itself (2008) and beyond (early 2009)” (Impact report, p.4).

It is not unusual for European Capitals of Culture to attract investment of £60 million, whereas UK Cities of Culture often have budgets of less than half (around £20-£30 million). Despite the lower budgets of the subsequent UK Cities of Culture (Derry-Londonderry, Hull and Coventry), Liverpool’s longitudinal model nevertheless influenced the evaluations for both Hull and Coventry.

Hull’s evaluation framework included baseline research and evidence, collected through the bid process, and by evaluating ‘curtain raiser’ or teaser events that took place in the year before the City of Culture programme started in 2017. Hull used techniques known as formative evaluation (evaluation completed in the very early stages of a programme of activity) to capture discussions on the process of securing the UK City of Culture title in 2013, and the preparations which took place before the end of 2016.

The demand for data

“At times, there were tensions between immediate needs for data … and the Evaluators’ responsibility to provide an evidence-led narrative of progress within an environment of change and uncertainty.”

Coventry 2021 Impact evaluation, p.61.

The evaluation teams behind Liverpool 2008, Hull 2017 and Coventry 2021 produced regular reports to inform activity.

Liverpool 2008 produced a baseline report in 2006, and interim findings in 2006, 2007 and 2008, as well as quarterly monitoring reports for the funders and key stakeholders.

Hull 2017 produced interim findings in May 2017, reflecting on the first season of the programme. They also compiled quarterly monitoring reports produced for their funders.

Coventry 2021 took a similar approach. Their evaluation strategy covered a five-year period from 2019 to 2024. In the lead-up to and during the year of culture itself, they published progress and supplementary reports between May 2020 and October 2021, before a final evaluation report in October 2023.

Learning point

Evaluations of Cities and Capitals of Culture involve managing the expectations and agendas of a large and complex range of stakeholders, which increases the volume of data and insights that need to be produced and circulated. As the final Coventry report outlines, this immediate need for data and commentary to feed into delivery processes was a particular challenge, as it relied on the production of quality data and lengthy sign-off processes involving a number of stakeholders (Impact evaluation, p.61).

The administration of data and reporting needs to be managed carefully, especially for cultural programmes that include intense periods of delivery.

 

Capturing impact and value after the event

All UK Cities and Capitals of Culture published reports drawing together the final evaluations at least one year after the event programme. The exception was Hull 2017, which published a ‘Preliminary Outcomes report’ and a summary report three months after the end of the programme, in March 2018. The final evaluation report followed in November 2019, and was subsequently updated in April 2021.

Learning point

As the financial costs, the expected and actual impacts of cultural programmes and their legacy come under greater scrutiny from politicians, the media and the general public, there are significant time pressures on evaluation teams to produce reports and findings as soon as possible after the event has finished. Collating, analysing, reporting and getting sign-off from governing bodies and delivery teams takes time, as does carrying out ‘primary research’ and fieldwork, such as interviews and post-event surveys. Mistakes in analysing and reporting on data can happen for many different reasons.

For example, in Coventry, ticketing and programming data supplied by the Coventry City of Culture Trust required “extensive work” to correct “significant variances” and mitigate “human error”. This created significant challenges for the evaluators to be able “to report almost in real time and in a transparent manner.” (Impact evaluation, p.64).

Evaluations like these involve producing and circulating large volumes of data, to serve the different needs and agendas of wider teams and stakeholders. When stakeholders acknowledge and understand this, and evaluation teams are able to be transparent about the challenges they face and supported to be flexible to the demands of the process, expectations are easier to manage, resulting in a more effective partnership. While corrections can be published and reports republished, doing so may result in reputational damage and mistrust of the evaluation and delivery processes, as well as negatively impacting stakeholder interest.

 

Listen to audio

In this audio clip, Centre for Cultural Value Associate Director Dr Beatriz Garcia discusses the timing of reporting outputs and the importance of building in time for reflection and analysis.

Download transcript

Evaluation frameworks

Broadly speaking, an evaluation framework is used to map out the planning and delivery process of evaluation activity. It will outline the strategy – or narrative – of how you will work towards a series of changes and how you know what impact your activity is having on those involved.

This section looks at a few examples of evaluation frameworks used by Cities and Capitals of Culture. Funders may request such frameworks at the initial stage of applying for funding, particularly for large-scale events.

Due to the diversity and range of stakeholders with interests in UK Cities and Capitals of Culture, the evaluations were often based on evaluation frameworks that helped structure activity across longer timeframes and different evaluation teams.

The frameworks explored here include logic models and theories or stories of change. These models provide a way of connecting programme aims with activity, output and anticipated outcomes and impacts, helping to articulate and communicate how your evaluation activity is going to monitor the changes that your work will make.

A theory of change, story of change, or logic model, illustrates how change is expected to happen in a particular context. These models can be particularly useful in planning activity to make sure that activity is aligned with the desired impacts. Evaluators can use them to communicate how they will be evaluating particular strands of activity within a programme, and how these might relate to the expected outputs and impacts.

Monitoring frameworks

In any evaluation, it is important to understand what data you are capturing or generating, and how you are monitoring your progress. Monitoring frameworks or diagrams can be useful to show the different data sources that inform your evaluations.

Monitoring frameworks do not necessarily tell the ‘story’ of an evaluation, so much as illustrate the different data sources and reporting activity that might take place across a large-scale evaluation. They can offer a useful way of explaining how data will be managed and shared across different teams, as well as outlining what data will be monitored and how. An example from Coventry’s final impact report is given below. It shows how different data sources, contractors or partners feed into the evaluation processes.

Graphic illustrating the teams and evaluation focus areas involved in the evaluation of Coventry City of Culture 2021. This is a complex image; a full description can be downloaded below the image.
Graphic illustrating the teams and evaluation focus areas involved in the evaluation of Coventry City of Culture 2021. Download image description.

Logic chain or logic model

Liverpool 2008 and Hull 2017 both developed themes, or ‘thematic clusters’, to demarcate the wide range of different anticipated outcomes and impacts of their respective City of Culture programmes.

Example of a logic chain model – Hull 2017

This section will use one of the Hull 2017 themes, ‘Arts and Culture’, to explore the logic chain model in more detail. Hull 2017 reported across five individual themes: Arts and Culture; Place Making; Economy; Society and Wellbeing; and Partnerships and Development.

In the evaluation plans, each of these themes had its own evaluation framework, which outlined the theme, aims and objectives, followed by measures and a number of indicators. The main evaluation report then provided a ‘logic chain model’ summarising the outcomes associated with each aim: ‘intermediate’ outcomes and ‘ultimate’ outcomes. It also detailed how the activities delivered related to outputs and then outcomes.

The Hull 2017 example below is taken from the Arts and Culture logic chain model (Main evaluation report, p.24).

1. Aim

The logic chain begins with the broader aims for the activity. These might be connected to your overarching vision or mission statement for your organisation or event.

Hull 2017 example: To develop (new and existing) audiences for Hull and East Riding’s cultural offer locally, regionally, nationally and internationally.

2. Activities delivered

The next level on the logic chain details the activities delivered in order to meet this aim.

Hull 2017 example: 365-day cultural programme, public engagement programme, and sector development and capacity building initiatives.

3. Outputs
After the activities come the ‘outputs’ of the delivered activity. Outputs describe the ‘products of the activity’, for example, an event, an experience, an exhibition or a marketing campaign.

Hull 2017 example: ‘Audience development activity’, or activity that is used to attract larger and often more diverse groups of people to take part in cultural activity.
4. Intermediate outcomes

Outcomes can be understood as the short-term, measurable difference that the activity has made. The logic chain model for Hull 2017’s evaluation split the outcomes into two categories: ‘intermediate outcomes’ and ‘ultimate outcomes’.

Hull 2017 example:

  • More diverse audiences inspired to attend cultural events.
  • Audiences/participants have a positive experience and want to try other events/activities.
5. Ultimate outcomes (or “impacts”)

The final stage of Hull 2017’s logic chain model referred to the ‘ultimate outcomes’. While outcomes usually refer to more short-term change, longer-term outcomes or impacts may not be fully realised within the evaluation’s timeframe. In this sense, they are supposed to be ambitious and aspirational and might focus on the legacy or longer-term difference you aimed to deliver.

Hull 2017 example: Audience groups attend cultural activity in Hull and East Riding more frequently in the future.
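For planning purposes, a logic chain can also be expressed as a simple data structure. Below is a minimal, hypothetical sketch in Python, populated with the Hull 2017 ‘Arts and Culture’ example above; the field names and structure are our own illustration, not taken from Hull’s evaluation tooling.

```python
from dataclasses import dataclass, field

@dataclass
class LogicChain:
    """One themed logic chain: aim -> activities -> outputs -> outcomes."""
    theme: str
    aim: str
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    intermediate_outcomes: list[str] = field(default_factory=list)
    ultimate_outcomes: list[str] = field(default_factory=list)

# Populated with the Hull 2017 'Arts and Culture' example above.
arts_and_culture = LogicChain(
    theme="Arts and Culture",
    aim=("Develop (new and existing) audiences for Hull and East Riding's "
         "cultural offer locally, regionally, nationally and internationally"),
    activities=["365-day cultural programme",
                "Public engagement programme",
                "Sector development and capacity building initiatives"],
    outputs=["Audience development activity"],
    intermediate_outcomes=["More diverse audiences inspired to attend cultural events",
                           "Audiences/participants have a positive experience "
                           "and want to try other events/activities"],
    ultimate_outcomes=["Audience groups attend cultural activity in Hull and "
                       "East Riding more frequently in the future"],
)
```

Holding the chain in one structure like this makes it easier to check, when planning, that every aim has activities and every outcome traces back to an output.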

Example of a story of change model – Coventry 2021

The Coventry 2021 team used the story of change model (also known as a theory of change model) to pull together the overarching narrative of their evaluation strategy, from the vision of the UK City of Culture programme through to the anticipated impacts on wider society. It was published and further detailed in a 24-page evaluation strategy report, and used in many of the reporting outputs, such as the final impact report.

The headings in Coventry’s story of change were:

  • Investments
  • Activities
  • Outputs
  • Outcomes
  • Impacts

It was different from the logic chain model used by Hull 2017 (see above) because it did not show the causality from activities to outputs, but instead listed them side by side. This approach allowed a certain flexibility and crossover of the different elements in the model. It is not always easy to pinpoint which activities result in which outputs, as they may result in more than one. While Hull 2017’s logic chain model aimed to show which outcomes and outputs were related to which activities and aims, Coventry’s story of change simply listed the various activities, outputs and outcomes.

However, both Hull 2017’s logic chains and Coventry’s story of change were clear about what outcomes related to which anticipated impacts. They were both used to further inform and structure the narratives of the final reports. Hull 2017 and Coventry 2021 also provided additional information in their appendices. For example, in Coventry’s impact evaluation report, Appendices 1–4 covered Output and Outcome Indicator tables. Each output was given an ‘output indicator’ and then a ‘final qualitative and quantitative evidence of position’ outlined in a table. Each outcome similarly corresponded with an ‘outcome indicator’, a ‘baseline’ and ‘endline’ figure and then a narrative outlining whether this had been achieved (Impact evaluation, pp. 164-199).

There are pros and cons of these linear models, which present progress from one stage to the next in single steps. They can be really useful to understand how something directly causes an effect or impact, and can be helpful to plan and check if activity has made a difference. However, they can sometimes make things seem simpler than they really are, or miss the full picture. The next section will explore models which can be helpful if you need to demonstrate more complex ideas about value.

Wheel of change model

The logic chain and story of change models are fairly linear models that begin at one end and end at the other. However, we also know that monitoring progress towards anticipated outcomes and impacts was a constant ask, with (ideally, but not always) regular evaluation feeding back into the delivery processes when possible.

Cultural organisations and practitioners who are not working on short, intense programmes of work have more opportunity to pace evaluation activity and benefit from iterative evaluation and learning.

Diagram of the wheel of change framework showing an outer circle and inner circle. Outer circle is four connecting arrows in green, purple, red and blue, containing the words ‘OUTPUTS’, ‘ACTIVITIES’, ‘INVESTMENTS’ and ‘OUTCOMES’. Inner circle is orange, containing the words ‘AMBITIONS AND IMPACTS’.
Wheel of change evaluation framework, Professor Jonothan Neelands and Dr Beatriz Garcia

The Centre’s recent guide to the wheel of change evaluation framework explores this model in more detail. It is better suited to organisations that can approach evaluation as a process embedded in organisational practice, rather than temporary delivery teams delivering festivals or events. The guide outlines the process of moving from defining your ambition and desired impact to actual results, and includes examples of evaluation questions to consider and a case study of the framework in practice for a small-scale cultural organisation.

Challenges and limitations of the frameworks

“[O]ne of the key challenges throughout the outcomes evaluation has been understanding the extent to which any changes recorded can be directly attributed to Hull being awarded UK City of Culture status for 2017”

Main evaluation report, p.9.

These evaluation frameworks and tools can help you to plan, manage and communicate how your data and research capture your activity’s goals. However, some reports described the limitations of this activity and the challenges of putting plans into action.

Liverpool’s report described how the Impacts 08 research programme aimed to overcome “the traditional limitations of short-term impact research”. They created a model with connected themes, “exploring processes as well as outcomes” which, similar to Hull, allowed them to examine both what happened and how things changed. Like Coventry, they made sure they considered the context surrounding the data (Impact evaluation, p.5).

Providing baseline figures and recording change over time can help to show if an activity has had an impact, but attributing change to the activity, and breaking results down into smaller, measurable parts (known as disaggregation), can still be difficult.

There are various techniques that can be used to tackle this challenge. For example, the Hull evaluation team stated that “[w]here relevant and possible, we have tested attribution, through the analysis of wider contributory factors, exploratory stakeholder consultations, and the use of benchmarking against past trends in Hull and against national performance data” (Main evaluation report, p.9).
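To make the benchmarking idea concrete, here is a minimal sketch, with invented figures, of how change in a local metric might be compared against the national trend for the same metric. The gap is only suggestive of attribution, which is why the Hull team triangulated it with contributory factors and stakeholder consultations.

```python
# Illustrative sketch of benchmarking change against a national trend.
# All figures here are invented for the example, not taken from any report.

def percent_change(baseline: float, endline: float) -> float:
    """Change from baseline to endline, as a percentage of baseline."""
    return (endline - baseline) / baseline * 100

local_change = percent_change(baseline=62.0, endline=71.0)     # e.g. % of residents engaging
national_change = percent_change(baseline=64.0, endline=66.0)  # same metric nationally

# The gap between local and national movement gives a rough indication of
# change over and above the wider trend - suggestive, not proof of attribution.
excess_change = local_change - national_change
print(f"Local: {local_change:+.1f}%, national: {national_change:+.1f}%, "
      f"excess: {excess_change:+.1f} percentage points")
```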

Learning point

The evaluation models can help to provide information on the focus and strategies of each cultural programme. However, as Coventry found, they can also be complex and difficult to navigate, especially if you’re not used to using them.

While the theory of change model “positively aligned activities and outputs with outcomes and impacts” there was nevertheless “some front-line confusion about the model and a lack of clarity about which outcomes were relevant to planned activities” (Impact evaluation, p. 11). It is important to position the models in wider communications about vision and strategy. In particular, it is important to consider what areas may not fall under the remit of your programme and therefore what will be impossible to deliver.

 

Listen to audio

In this audio clip, Centre for Cultural Value Associate Director Dr Beatriz Garcia considers how the processes of evaluation can be as rich with learning as the final results.

Download transcript

Methodologies and methods

Methodology is a term used to describe the overall approach researchers take to study something. It includes the plan for carrying out the research, the methods used (for example, surveys or interviews), and how data is analysed and reported. It is important to consider the ways in which we study and learn about something, because it affects what we believe to be true about it.

For example, a quantitative methodology (using numerical data) might track and measure specific metrics related to key performance indicators (KPIs) agreed for outputs and outcomes. This data might be required or requested by stakeholders and funders, to hold delivery partners to account. In the evaluation of large-scale events and programmes of activity, quantitative methodologies are dominant, as in principle they allow for comparison with other events and policy areas at a government level. Headline numbers and percentages can also be useful for publicity and reporting, for example, in press releases and infographics.

Qualitative methodologies, on the other hand, are able to provide a deeper understanding of the ‘how’ and ‘why’ of particular phenomena, within their original setting or context. Qualitative data is more people-centred and can be rich in detail, which is particularly helpful in cultural evaluations for describing how different stakeholders have experienced the UK Cities and Capitals of Culture programmes. It can also be used to understand why particular change and impact occurred, and for whom, which is crucial to inform learning at all levels. However, due to its richness and depth, qualitative data can often be harder to generalise across wider populations, and can be more time and resource intensive to collect.

UK Cities and Capitals of Culture combined many different methods from quantitative and qualitative traditions, depending on which tools were needed to capture and generate the data. The methods chosen depended on what aspect of the programme was being evaluated, and what research questions were being asked.

Here are some of the most common methods used in the evaluations.

Surveys of audiences, visitors and participants

Many teams sent surveys to audiences, visitors and participants in cultural events and activities. Surveys collect information in a systematic and standardised format, allowing for easy collation and comparison across data sets. The questions included were designed to capture the experiences, impact and levels of satisfaction of those who engaged directly with activities. Surveys can be conducted across multiple media – online, offline or over the phone – though some options, such as phone surveys, require you to have captured contact information, which may need consideration for free events.

For Derry-Londonderry 2013, surveys of attendees were a key source of insight. Event surveys were carried out by research consultancy Ilex, the Northern Ireland Tourist Board, Derry City Council and the Culture Company, sampling a number of key events, “chosen using a number of criteria including their scale, age profile of attendees, gender, geographic spread of expected audience” and “whether it was a free or paid event”.

Sampling events in this way helped to ensure a good spread across the event programme, with the aim of providing a “robust evidence base for monitoring the impacts”. Surveys were conducted in person by volunteers trained in survey methods. People were asked a number of questions around their geographical origin (postcode), demographics (such as age and gender), levels of satisfaction and mode of transport to the event (Monitoring report, p.6).

 

Population surveys and market research

Many of the UK Cities and Capitals of Culture evaluations also included surveys for those who might not have attended or directly engaged with the events and activities as part of the programme. These were used for a variety of different reasons:

  • to provide baseline levels of information before activity began, as a starting point from which to measure change. This enabled evaluations to demonstrate impact over time.
  • to provide useful information of a local population to compare to the same metrics from audience and visitor surveys. This enabled evaluations to show how engagement might have had a direct impact, compared with non-attenders.
  • to build on existing population-level surveys that are already in circulation and are managed by local authorities. This pooling of resources is effective partnership working; by adding questions about cultural activity to existing surveys, evaluations can get a sense of how cultural participation fits in with wider policy priorities (such as education, health and social provision).

Coventry 2021’s evaluation included a series of ‘sentiment surveys’ that were designed to ‘capture the feelings and sentiment of residents who were possibly not engaging with the UK CoC 2021 programme’. This was in addition to the population survey (the Coventry Household Survey), which fed into the baseline evaluation of the City of Culture and into the KPIs on outputs and outcomes indicators.

A representative sample of citizens from across the city was surveyed in three waves: January 2021 (prior to major programme announcements); December 2021 (at the end of the year of culture programme); and July/August 2022, the following year. This evaluation strand was managed by M-E-L Research, who surveyed 2,700 people in total through telephone interviews and door-to-door surveying. Calls were made to randomly generated phone numbers, and door-to-door surveying was used in areas with a low response rate to the telephone approach (Impact evaluation report, p.54).

 

Surveys to target groups

The UK City and Capital of Culture evaluations often sent additional surveys to particular groups, such as schools and higher education institutions, local businesses, cultural institutions, arts workers and artists. This method can be used successfully in combination with other methods to provide a fuller picture not just of impacts, but also of those groups who may be significantly involved in programme activity.

Glasgow 1990’s evaluation involved comprehensive surveys of arts organisations and institutions across two phases. The first phase (July 1990) asked arts organisations to provide their baseline attendance (for example, visitor numbers) and activity data (for example, number of exhibitions) for 1989 and the first six months of 1990.

The second phase aimed to complete the picture on attendance and activities for 1990, and also asked for information on employment and finance for 1990 as a whole. The team then followed up over the telephone for more supplementary information (as well as chasing data from a few non-respondents). They also used data from other sources, such as published accounts and annual reports (Monitoring report, p.5).

 

Methods for measuring economic impact and social ‘return on investment’ (ROI)

Economic impact analysis tends to be a specialist area, and was always outsourced to consultancies and agencies by City and Capital of Culture evaluation teams. These types of impact methodologies often use surveys, interviews and discussions with stakeholders and analysis of a variety of data sources. They are used to explore metrics such as levels of direct spending within the local economy as a result of the City of Culture, or perceptions of impact on local employment levels.

The Liverpool 08 Economic Impact Methodology report details how England’s Northwest Research Service (ENRS) aimed to measure the economic impact of visits “influenced” by the Liverpool European Capital of Culture, with a focus on “the number of additional visits created by ‘08, the estimated spend from these visits” and “the jobs created or supported by the year’s programme” (p.2).

This short report explains the data sources that were to be used (and how they “interact”), inclusions and exclusions, how both direct and indirect spend was calculated and the timescales for this work. There are a number of additional reports reviewing economic impact methodologies that can be accessed on the Impacts 08 Economy & Tourism web page.
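As a rough illustration of the arithmetic that sits behind visitor economic impact estimates of this kind, the sketch below multiplies additional visits by average spend and applies a multiplier for indirect spend. Every figure, including the multiplier and the jobs conversion, is a hypothetical placeholder rather than a value from the ENRS methodology.

```python
# A minimal sketch of visitor economic impact arithmetic: additional visits,
# the spend they bring, and the jobs that spend supports. All values are
# hypothetical placeholders for illustration only.

additional_visits = 250_000        # visits judged 'additional' (influenced by the event)
avg_spend_per_visit = 45.00        # average spend per additional visit (GBP)
indirect_multiplier = 1.3          # uplift for indirect/induced spend in the local economy
spend_per_fte_job = 55_000.00      # local spend needed to support one full-time job (GBP)

direct_spend = additional_visits * avg_spend_per_visit
total_spend = direct_spend * indirect_multiplier
jobs_supported = total_spend / spend_per_fte_job

print(f"Direct spend: £{direct_spend:,.0f}")
print(f"Total spend (incl. indirect): £{total_spend:,.0f}")
print(f"FTE jobs supported: {jobs_supported:,.1f}")
```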

 

These methods are also used to provide a summary assessment of the ‘value for money’ of the City or Capital of Culture programme. This looks at levels of expenditure and income, but also involves the monetisation of outcomes that do not otherwise have a financial value or market effects, such as the welfare and wellbeing of a population.

Social impact analysis is becoming increasingly common as a form of evaluation. In addition to an Economic Impact Assessment, Coventry 2021 commissioned MB Associates to carry out a Social Return on Investment (SROI) study. These types of studies can take a ‘top-down approach’ which adopts standardised models and measures to determine value, and assume a shared value across stakeholder groups. However, the approach adopted by MB Associates was described as “stakeholder-oriented”, which means that it assessed the value created from the perspective of stakeholders involved in the City of Culture programme.

The stakeholders included communities, trust employees, funders and other partner organisations. Return on investment is calculated by describing the monetary amount of social value created for every pound spent on a project. Evaluation activity included workshops, surveys, interviews, social media analysis and monitoring reports from organisers, which were used to report on a range of outcomes and impacts (self-esteem, local cultural opportunities, skills, and personal and professional relationships) across a number of projects. You can read more about the SROI study in Coventry’s Social Impact Analysis report.
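The headline SROI figure itself is simple arithmetic once outcomes have been monetised; the hard, method-laden work is the monetisation, which is only stubbed in the minimal sketch below. All values are hypothetical, not drawn from the MB Associates study.

```python
# A minimal sketch of the headline SROI calculation: the monetised social
# value created for every pound invested. Monetising outcomes (self-esteem,
# skills, relationships) is the difficult part and is only stubbed here;
# all figures are hypothetical.

monetised_outcomes = {
    "improved self-esteem": 1_200_000.00,
    "local cultural opportunities": 800_000.00,
    "skills development": 650_000.00,
}

investment = 1_500_000.00  # total spend on the projects being assessed (GBP)

total_social_value = sum(monetised_outcomes.values())
sroi_ratio = total_social_value / investment

print(f"Every £1 invested generated an estimated £{sroi_ratio:.2f} of social value")
```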

 

Stakeholder interviews and consultations

Every evaluation of the UK Cities and Capitals of Culture used consultation interviews with key stakeholders. These might be public agencies, organisations in the business community, cultural institutions in the area or policymakers. Interviewing policymakers was an effective way of capturing the broader business and policy impacts.

Transcripts and/or notes from these interviews are analysed, so that evaluators can identify common themes and patterns in the data and cross-reference these findings with other methods. They can be particularly helpful in getting a deeper understanding of:

  • how particular groups and individuals conceptualised the goals of the UK City and Capital of Culture programmes
  • how they experienced aspects of planning and delivery and their reflections on the successes and the challenges of the programme
  • potential legacy impacts and activities that might follow in years to come.

For Hull 2017, stakeholder interviews were particularly crucial in informing the evaluation.

At various stages before, during and after Hull 2017 City of Culture, members of the evaluation team conducted interviews with:

  • representatives of the cultural sector, core creative teams from projects, and artists commissioned to make work in 2017
  • members of an independent arts and cultural expert panel
  • Hull UK City of Culture 2017 Ltd and Absolutely Cultured staff
  • key stakeholders including representatives from the city’s cultural sector, Hull City Council and Hull Culture and Leisure (HCAL)

This allowed the evaluation to pull out key learning related to broader strategic policy and legacy issues that might not otherwise have been captured. This included how the artistic programme engaged across different communities, the success of governance and delivery partnerships, capacity-building for the city’s cultural sector, and the role of partners in the strategic management of legacy.

 

Analysing existing datasets

The evaluations of UK Cities and Capitals of Culture all included primary research, which involves gathering or generating data that has not been collected before, for example, through interviews, surveys or observations. However, it is also effective to include secondary research methods, such as the analysis of data that has already been collected or generated by somebody else, including agencies and organisations in the local area.

UK Cities and Capitals of Culture evaluations drew insights from a range of different sources including local and regional councils, national government (for example, through census data), tourist boards and agencies, national and international data sets on tourism, employment, job creation and businesses, and public transport providers. This type of data provides useful context for benchmarking and comparing data from primary research methods.

The Coventry 2021 evaluation analysed data that was produced by the trust as part of their monitoring processes. This included data that was collected annually, such as equality and diversity data on staff and trustees, as well as data that was collected quarterly, such as number of hours of engagement activity undertaken, stakeholder mapping and geographic location of stakeholders. Data that was generated continuously throughout the year included participation data and tickets to events and activities, engagement at non-ticketed events and digital engagement, and demographic and location data on audiences and freelancers.

A prominent strand of work in the Liverpool 2008 evaluation programme was the analysis of local, regional, national and digital media coverage. The team conducted a longitudinal media impact analysis that explored the change in reporting on Liverpool from 1996 to 2009. The study used “established quantitative techniques for media analysis”, which rely on coding “objective states” relating to media clippings, such as date of publication or broadcast, title and type of media source, geographic remit, length and story format, as well as drawing on calculations that aim to provide a monetary figure for media coverage (known as advertising value equivalent). These statistics were combined with a qualitative analysis of media content which focused on themes and attitudes across a number of categories: city image; bringing business to Liverpool; physical developments; arts and cultural offer; social capital, inclusion and access; and management and policy (see the Media Impact Analysis report and the Image and Perceptions theme on the Impacts 08 web page).
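A coding frame of this kind can be thought of as one structured record per clipping. The sketch below illustrates this with the fields the Liverpool study describes; the field names and example values are assumptions for illustration, not the Impacts 08 schema.

```python
# An illustrative record for coding the 'objective states' of a media
# clipping (date, source, geographic remit, length, format, plus an AVE
# figure - see glossary), alongside qualitative theme coding. Field names
# and values are assumptions for this sketch.

from dataclasses import dataclass
from datetime import date

@dataclass
class MediaClipping:
    published: date
    title: str
    source_type: str        # e.g. national press, regional broadcast, online
    geographic_remit: str   # e.g. local, regional, national, international
    length_words: int
    story_format: str       # e.g. news, feature, listing, comment
    ave_gbp: float          # advertising value equivalent (GBP)
    themes: list[str]       # qualitative coding, e.g. 'city image'

clip = MediaClipping(
    published=date(2008, 1, 12),
    title="Liverpool launches its year as Capital of Culture",
    source_type="national press",
    geographic_remit="national",
    length_words=850,
    story_format="news",
    ave_gbp=12_500.00,
    themes=["city image", "arts and cultural offer"],
)
```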

 

As well as analysing secondary data, the university-led evaluations (Liverpool 2008, Hull 2017 and Coventry 2021) also drew on insights from the growing body of academic and ‘grey literature’ on Cities and Capitals of Culture programmes.

Learning point

In our increasingly digitised world, there are more ways than ever to analyse digital data. Potential audience and stakeholder interests can be mapped using publicly available search data. Online social media platforms can also be a useful source of data on particular events or topics. Data-pooling (combining data sets from different sources) and the sale of non-personal data are already well-developed, including in the cultural sector.

Read more about the potential uses of this kind of data to inform cultural strategies in our Making Data Work report (pp.20-24).

Case studies

A case study is a good way of bringing together multiple narratives or stories of impact and demonstrating cases of significant change. Many of the evaluations used case studies in addition to reporting on key metrics and findings from primary research, to explore narratives of particular projects in more depth.

Shorter case studies were incorporated directly into reports (such as Derry-Londonderry’s 2013 final report), while others (such as Hull 2017 and Coventry 2021) took a more in-depth approach.

A couple of examples are given below.

The ‘Creating the Past’ evaluation report explored the cultural programme inspired by heritage within Hull 2017, summarising how the programme delivered against 10 Heritage Lottery Fund outcomes, which were split into three main categories: outcomes for heritage; outcomes for people; and outcomes for communities (p.4).

The full report considers these outcomes across 14 different case studies, using a variety of research methods including surveys, interviews, focus groups and ‘walk and talk groups’ with the delivery teams, artists, peer assessors, audiences, participants and delivery partners. The Creating the Past evaluation report explores these case studies in turn and “…focuses on both process and outcomes, measuring impact, capturing learning, and building understanding of what worked well and where improvements can be made” (p.9).

For Coventry’s evaluation, a series of focus studies was carried out by different university teams, resulting in a number of reports and events.

For example, the Integrating the Environment focus study report, webinar and series of events explored how successfully the environmental theme had been integrated into the Coventry 2021 programme. The study was carried out by Coventry University’s Centre for Business in Society and Centre for Agroecology, Water and Resilience. The team undertook a number of activities, such as:

  • analysing data from key documents provided by the Trust
  • conducting interviews with those planning and delivering the programme, external partners and members of the public
  • analysing social media outputs such as social media posts and blogs, to assess levels of engagement with environmental events and activities
  • capturing participants’ responses across activities such as “walking and talking methodologies to capture their stories, perspectives and experiences digitally” through audio recordings, pictures and videos, and by taking notes of observations and discussions (Integrating the Environment report, p.8).

Members of the public were also asked to “collate and discuss pictures of actions taken to minimise their impact on the environment, green spaces in the City that they considered important and their participation in Coventry City of Culture” (Integrating the Environment report, p.8).

Creative research methods

Creative research methods specifically employ arts-based techniques or creative tools to capture, generate, analyse and share data and findings from evaluations, often within a mixed-methods framing.

A project developing the use of ‘headphone verbatim theatre’ as an evaluation tool was carried out by a researcher from Coventry University to investigate citizen civic pride in Coventry 2021.

It addressed the challenge that “policy solutions are too often based on stakeholder opinion, and rarely are the voices and experiences of citizens used to evidence policy-making” (Verbatim Theatre report, p.2).

The method centres on a “live performance created solely from the words spoken by interview participants” and is designed to help policymakers to incorporate “citizen experience in policy design and evaluation”, with the aim of offering “an effective and emotive connection between citizens and policymakers” (p.1). It involves capturing the audio interviews (although details of how the citizens were sampled are not given), then editing and building the narrative into an ‘audio script’, which is rehearsed with and performed by actors wearing headphones: they listen to the words and repeat them in performance as exactly as possible.

The project interviewed 10 citizens and was performed in November 2021 “to a live audience of evaluation professionals, academics and those who worked with the City of Culture 21 delivery team” (p.4). A Q&A session followed where audience members described the performance as “emotional and engaging” and as a “powerful” evaluation technique.

 

Our How to guide by Martin Glynn on sharing research and evaluation findings through performance provides a step-by-step guide on performing data within research projects.

The Arts and Homelessness in Coventry report (2022) explores the key findings from two programme initiatives: the HOME festival and the Legislative Theatre project. This three-year study used a mixed-methods approach combining participant observation, interviews, vox pops, photo elicitation and “creative methods of documenting events including diary entries and photography”. It evaluates “the co-production methodologies [the projects] employed, and explores how multiple stakeholders narrate and understand their participation” (p.1).

The programme combined arts engagement activity and research to develop “a substantial data set of interviews, ethnography and images” which “tells the story of the profound impact that arts and creativity can have on the lives of people who are or have been homeless alongside the tangible impact that arts interventions have had on the capacity of civil society to co-produce policy with people who have this lived experience” (p.24).

 

If we are not careful, our evaluations can sometimes serve to conceal, excuse and perpetuate inequities within the cultural sector by paying attention only to the same dominant voices and experiences. Creative methods can be an effective way to platform the experiences and viewpoints of individuals and communities who might otherwise be excluded from more traditional surveys and stakeholder interviews.

Creative outputs such as performances and video design can be effective communication tools to share insight and learning with different audiences and stakeholders, but creative methods can be used in all stages of evaluation. They can be useful, for example, to explore how participants understand their own participation, as well as involving differing, evolving and even contested types of value, to gain better insights. This can be an especially valuable way to make sure evaluations are both empathetic and socially-engaged.

For more information on empathetic and socially-engaged evaluation approaches, explore our Evaluation Principles.

Communicating findings

“[N]umbers can tell many stories so the challenge in undertaking evaluation is to decide when a story is best told in numbers and when it needs a different approach.”

Katy Raines and Jonothan Neelands, Evaluating Large-Scale Cultural Events (2023)

There are multiple audiences and stakeholders for UK City and Capital of Culture evaluations, who inevitably have different vested interests. It is important to consider how, and in what formats, the findings and insights from these activities were communicated to stakeholders. Every city took a different approach to publishing its findings. This was partly due to budgets, timelines and researcher expertise, but also to what was deemed most valuable in terms of the narrative around impact.

Reports

Glasgow 1990, the first event of its kind, produced one monitoring report that is still accessible today. This report aimed “to provide an independent appraisal of the effects of Glasgow 1990” (Monitoring report, p.1).

Derry-Londonderry, the first UK City of Culture in 2013, also produced one final report in 2016, although a (draft) monitoring report authored by Ilex from October 2013 is also publicly available. Their final report, described as a post-project evaluation, was made public on the Future Trends pages for Coventry. Its aim was to “examine the extent to which the objectives of the City of Culture project were achieved and what have been the lessons learned” (Post-project evaluation, p.4). Both Derry-Londonderry and Glasgow pulled together all the different strands of evaluation activity into one report.

In contrast, Liverpool 2008 and Coventry 2021 produced a large number of reports, all of which are accessible on searchable web pages hosted by the respective university consortia. For Liverpool, the aim was to produce a “legacy of a replicable research framework, which can be used to explore the impacts of culture-led regeneration programmes beyond Liverpool and 2008” (Impact report, p.5). The Evaluating Coventry 2021 UK City of Culture website also “reflects a variety of UK CoC 2021 activities, funded by different funding bodies” and was created as part of an Arts and Humanities Research Council (AHRC) funded project called ‘City Change Through Culture: Securing the Place Legacy of Coventry City of Culture 2021’.

Hull adopted an approach which was somewhere in between. They produced several reports, including a revised version of the final report, which is available and archived on the University of Hull website. These reports pull together findings from the different impact areas, and make reference to a number of unpublished reports, academic and grey literature.

Creative communication methods

The outputs from the evaluations often included some degree of visual or audio material alongside written reports. The use of photography in reports was fairly common, as was the production of video content to document the events. Infographics were commonly used to synthesise key statistics and make them more visually appealing.

Promotional videos can be a powerful way of communicating particular narratives of impact and success and evoking emotion while capturing a strong sense of place. Derry-Londonderry 2013’s ‘What A Year’ video combines imagery and video footage of the city of culture, and uses a soundtrack of music from artists “who emerged” during the year.

‘What A Year’ – Derry-Londonderry video (YouTube)

Voicing Hull

An example of a written creative evaluation output was the Voicing Hull project. Poet and academic Kate Fox combined an ethnographic approach (interviews and observations) and vox pops with ‘poetic inquiry’, to capture perceptions of Hull 2017 amongst Hull residents, local school and university staff and students, volunteers, community groups and creative workers.

She carried out thematic analysis to identify common threads and recurring images, and used creative methods to generate further metaphors and similes directly from the respondents, which she knitted “into poems which otherwise use the words of respondents verbatim” (Hull 2017 Preliminary outcomes evaluation, p.20).

The poems were described in the preliminary outcomes evaluation as “beautiful works of art”, and also as “an additional layer of evaluation evidence, conveying the spirit of numerous individual stories of the impact that UK City of Culture has had on Hull and its people” (p.20).

You can read the poems in the Voicing Hull publication.

 

The Coventry 2021 monitoring and evaluation team developed an animation to communicate their reflections on their experiences of evaluating the City of Culture, “to make them available for others to consider when monitoring and evaluating large-scale events”.

Reflections on Evaluating the UK City of Culture – Coventry animation (Coventry 2021 evaluation website – includes transcript)

Events and webinars

Events were an effective way of communicating the findings of evaluations. It was common to hold launch events when the final report was published, bringing together key stakeholders to explore the findings and reflect on future legacies of activity. Events can also be an effective way for cultural sector practitioners who are planning similar programmes to hear about the processes and impacts first-hand, as well as raising the profile of the evaluation work and the wider programme.

Liverpool and Coventry also held a number of online webinars, some of which are still available to watch, such as the video of Coventry’s Walking Through Data digital exhibition, or the report and agenda from Coventry’s Policy and Evaluation summit held in June 2021.

These teams also published articles and delivered talks, giving insights throughout the evaluation processes. Sometimes this involved republishing existing content or recordings and presentations.

Sharing data

Including people’s own words in reporting and making source data available is another way of providing opportunities for further interrogation of data to inform future learning. Coventry 2021 have made particular progress in this area. The source data that informs the evaluation is publicly available (where possible) through the Coventry City Council Insights Team GitHub repository.

This could help to inform future research and evaluation that might bring in new perspectives and learnings on both process and impacts, and is crucial for future learning for events and cultural programmes of this type, and the cultural sector more broadly. By revisiting existing data, through meta-reviews and analysis of secondary data, teams can pull out perspectives that may have been missed.

Listen to audio

In this audio clip, Mark Scott, Research Fellow at Warwick Business School, reflects on how the Coventry 2021 evaluation team prioritised sharing the data from the evaluation publicly.

Download transcript

Reflecting on the Evaluation Principles in Practice: Rigorous

“Answering questions like: Was the year a success? What have been its impacts? What legacy has been built? What worked and why? What didn’t work and why? And was it value for money? in often strongly politicised and charged environments, presupposes a substantial infrastructure of (open) data, evidence, and comment to support assessment and judgement.”

‘Cities of Culture: A Model of Evaluation’ by Professor Nick Henry, from the Centre for Creative Economies at Coventry University

The Evaluation Principles are a set of co-created principles that can be used to inform and guide evaluation practice. One of the principles, rigorous, directly relates to the examples we have looked at so far in relation to how UK Cities and Capitals of Culture are evaluated.

Rigorous

Taking a robust approach to evaluation involves the appropriate and rigorous application of different methods, to not just find out ‘what’ happened, but understand in more depth ‘why’ and ‘how’.

You might find it useful to think about…

  • How did Cities of Culture demonstrate rigour through their methods and methodologies?
  • How did they decide which methods were most appropriate?
  • How did they make sure their methods could be replicated by others, for example, later Cities and Capitals of Culture?

Find out more about the rigorous principle in our Evaluation Principles.

Summary

UK Cities and Capitals of Culture programmes are ambitious and large in scale, and evaluation requirements are complex. Evaluation teams need to establish baseline levels of data before events, meet monitoring and reporting requirements during the events, and pull together learning and insight across multiple outputs and for multiple audiences after the events.

Because of their complexity, there is no single, ideal way to evaluate them. They often require a range of different methods and evaluation frameworks to capture the full picture and meet the requirements of different stakeholders. While all methods have benefits and challenges, there is valuable learning here that the cultural sector can draw on to inform its own practices.

The evaluations have also highlighted that it is important to consider what the learning is beyond the final impact of the event. There are a number of examples in this resource of how evaluation teams have chosen to include key learnings, for example on the processes of delivering the event itself, as well as identifying areas for embedding further learning in legacy activity. Of course, embedding quality learning and evaluation methods into a city’s organisational and institutional frameworks for the future is not straightforward, and remains a significant challenge.

This resource has explored ‘how’ the evaluations for UK Cities and Capitals of Culture were carried out, including the timelines of activity, the evaluation frameworks, methodologies and methods used, and how findings were communicated. Take a look at ‘People and processes: who’s behind evaluations of UK Cities and Capitals of Culture?’ for more detail on the teams behind this work.

Glossary

Advertising value equivalency (AVE) A calculation that estimates the value of media coverage by comparing it to the cost of an advertisement of similar size and prominence. It is often used to attach a monetary figure to the value of communications, press and PR activity (a worked sketch follows this glossary).
Baseline An initial measurement or assessment carried out before a particular event or project begins, providing a reference point for assessing changes and impact.
Formative evaluation An evaluation that takes place before or during the timeline of a project with the aim of feeding back insight to delivery teams and/or into delivery itself.
Longitudinal study Longitudinal research typically takes place over an extended time period, rather than studying one instance or event. In the context of cultural engagement, this research explores the “extended” or “cumulative” impact as defined by Carnwath and Brown (2014). However, no strict or consistent criteria are used to define longitudinal research: researchers may describe their approach as ‘longitudinal’ if their fieldwork takes place across a series of months, to differentiate these studies from research projects with shorter fieldwork timelines.
Mega-events A term often used to describe special events of a very large scale (in terms of budget, audiences and participants) that take place infrequently and often have national or international media reach. The Olympic Games, FIFA World Cup and Universal Expo are classic examples of mega-events. Since 2015, the term ‘cultural mega-event’ has increasingly been used to describe one-off cultural events such as European Capitals of Culture or national city of culture programmes, including the UK City of Culture programmes explored in this resource. However, it remains a contested term.
Methodology A term used to describe an approach to researching phenomena. It includes our research design and what methods we’re using, but also how these relate to the object of study, and how we might analyse data generated by our research activities.
Methods Research methods are tools to capture or generate data, such as surveys, observation, interviews or focus groups.
Metrics Metrics are a set of numbers used for comparing, monitoring and tracking a particular process or activity. In the cultural sector, metrics can be anything from number of ticket sales for a performance, to visitor dwell time at an exhibit, to perceived levels of satisfaction with a cultural experience.
Mixed methods research This type of research uses more than one method, quantitative or qualitative, within a single study’s design. Methods are often combined to provide a more complete understanding, or to generate a different understanding, of a phenomenon.
Primary research methods Methods used to generate or capture data that has not been generated or captured before, such as interviews or surveys.
Qualitative research A type of research that aims to provide a deeper understanding of the ‘how’ and ‘why’ of a particular phenomenon, within its original setting or context. This often means generating and analysing non-numerical data that’s rich in detail, typically through in-depth interviews, focus groups or observations.
Quantitative research A type of research that aims to find correlations or test hypotheses about a particular phenomenon by describing, predicting or controlling for particular variables of interest and assigning them numerical values. By doing so, the researcher can test causal relationships between variables, make predictions and generalise results to wider populations through statistical analysis.
Return on investment (ROI) A way of measuring the worth of an investment. It can be calculated by dividing profit or loss by cost to determine how profitable something is, but it can also refer to non-monetary value, such as social return on investment (see the worked sketch after this glossary).
Secondary research methods Methods that involve synthesising or analysing existing data. This data could be produced internally (e.g. the monitoring data produced by City of Culture trusts) or externally (e.g. population data from a census).
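The AVE and ROI entries above both describe simple calculations. For readers who find a worked example helpful, here is a minimal sketch in Python; all the figures are invented purely for illustration.

```python
# Hypothetical worked examples of two glossary calculations.
# All figures are invented for illustration only.

# Advertising value equivalency (AVE): price editorial coverage as if
# the same space had been bought as advertising.
ad_rate_per_column_cm = 40.0   # assumed cost of ad space, £ per column-cm
coverage_column_cm = 250       # assumed size of the press coverage
ave = ad_rate_per_column_cm * coverage_column_cm
print(f"AVE: £{ave:,.0f}")     # AVE: £10,000

# Return on investment (ROI): net gain divided by cost.
programme_cost = 500_000       # assumed total spend, £
attributed_income = 650_000    # assumed income attributed to the programme, £
roi = (attributed_income - programme_cost) / programme_cost
print(f"ROI: {roi:.0%}")       # ROI: 30%
```

Both figures are rough proxies for value, and AVE in particular is widely contested, which is one reason evaluations often pair such metrics with qualitative evidence.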

References

Glasgow 1990

Liverpool 2008

Derry-Londonderry 2013

Hull 2017

Coventry 2021

All links are correct at time of publishing. If you spot a broken link, please let us know.



You may also like


Musician wearing a coat plays trumpet on a pebbly beach with the Humber Bridge in the background.

People and processes: who’s behind evaluations of UK Cities and Capitals of Culture?

At the start of any evaluation project it’s important to recognise the skills and teams you will need. In this rapid research review, we take a close look at who is involved in the complex, large-scale evaluations of UK Cities and Capitals of Culture.

How were teams structured, who did they collaborate with and how did they approach the challenge of objectivity whilst balancing the needs of multiple stakeholders?

 
A person wearing headphones and holding up a mobile phone, in an ornate building. They are listening to an audio tour.

UK Cities and Capitals of Culture evaluation reports: a quick guide

Are you interested in the evaluations of large-scale projects, and want to delve into the detail of specific events or topics?

This quick reference guide signposts the key evaluation reports from recent UK Cities and Capitals of Culture. It will be of interest to academics, researchers and those exploring the evaluations of place-based projects in more depth.

 
Diagram of the Wheel of Change framework showing an outer circle and inner circle. The outer circle is four connecting arrows in green, purple, red and blue, containing the words ‘OUTPUTS’, ‘ACTIVITIES’, ‘INVESTMENTS’ and ‘OUTCOMES’. The inner circle is orange, containing the words ‘AMBITIONS AND IMPACTS’.

Evaluation for change: A guide to planning a mixed-methods framework for evaluation

Are you looking for a fresh approach to evaluation that is flexible and can be tailored to the scale and scope of your work? In this guide, discover more about the Wheel of Change evaluation framework, explaining the process of moving from defining your ambition and desired impact to actual results. The guide includes examples ...

 
Published: 2024


Creative Commons Licence: Except where noted, and excluding company and organisation logos, this work is shared under a Creative Commons Attribution 4.0 (CC BY 4.0) Licence.





 
 

Esmée Fairbairn Foundation

The Evaluation Learning Space is supported by the Esmée Fairbairn Foundation and led by the Centre for Cultural Value in partnership with CultureHive, the Arts Marketing Association's knowledge hub.



Interested in evaluation? Join the Cultural Evaluation Network on LinkedIn.