How London Museum’s new website is using AI to power content relationships

By Trish Thomas

SUMMARY

London Museum’s new website landed in July 2024 and also marks the launch of their new brand and another important milestone in the journey towards the opening of their new museum at Smithfield in 2026. Their Head of Digital Innovation, Trish Thomas, takes us through their digital transformation story.

London Museum’s new website landed in July 2024 and also marks the launch of our new brand and another important milestone in the exciting journey towards the opening of our new museum at Smithfield in 2026.
But the website is just the tip of the iceberg in our digital transformation story. Over an 18-month project we have also implemented a new DAMS (digital asset management system), a new ticket purchase path and a new CRM system, and we’re part way through an online shop upgrade. This is just the Foundation Phase!

I know that many GLAM organisations face the same issues when delivering these kinds of projects: from buy-in at Exec level, through funding, staffing structures and choosing the right technical partners, to managing projects through delivery. So I’ll be sharing a series of articles here to explain how we did it, the opportunities, pitfalls and approaches. I hope these will be useful for anyone planning similar projects.

But let’s start with the website and collections online…


Defining the problem

When I started this role at the end of 2022, I realised the museum knew lots about its physical audiences but very little about its digital audiences. So our project had to begin with robust research to understand who our current digital audience was, who it could be in the future, and what behaviours, motivations and blockers shaped how people engaged with us online.

The concept for this website began with digital audience research between 2022 and 2024. Across more than 4,000 participants taking part in surveys, focus groups and one-to-one interviews, we kept hearing three things back:

“I can’t find a way into your collections”

“I keep hitting dead ends”

“I can’t see how all this stuff is relevant to me”

So, we set out to build a website that would address these three problems. The good news is that we’re a social history museum: we’re all about the stories behind objects, and the London Museum’s collection is a content treasure chest. It’s large (7 million objects), it’s broad (everything from archaeology to fashion and photography) and it’s fascinating, spanning 450,000 years of London’s history and still growing today. You’ll stumble across objects that survived the Great Fire and the Great Plague, as well as Tom Daley’s swimming trunks and protest banners from the Suffragettes to Occupy London campaigners.


The solution

So we knew we had to focus on:

  • Creating easier ways into our collections online - ones that meant something to everyone including non-specialist audiences
  • Finding ways to signpost to other related objects and stories based on the page you’re looking at - and not just literal relationships but lateral ones too
  • Telling stories that felt relevant to non-specialist audiences that could capture their interest, sometimes unexpectedly

We realised early on that collections data would need to play a huge role in creating dynamic connections between related objects and stories. But like many of our peer museums, our collections data was patchy. Many different people over time had different ideas about what sort of information should go into object data fields, or described things in non-standard ways. Some fields had data but it had never been intended for public consumption, other fields had no data at all.

Our new topic-based taxonomy helps to create relationships between objects and stories

The challenges and how AI has helped to solve them

Digitising and standardising the data for a collection of 7 million objects was never going to happen within this project’s 18-month delivery timeline, so we knew we had to find an alternative solution. There are three parts to our approach:

  • We’ve created a new taxonomy of 24 broad topic-based tags like Immigration & Identity and Death & Disasters – the sorts of things non-specialists might search for – but that could be mapped to the much more granular object tags in our collections to create topic-based connections.
  • A keyword-extraction tool called YAKE, combined with artificial intelligence powered by OpenAI, has been used to generate topical and contextual relationships between stories and objects, supplementing editorially curated relationships. For example, a journey that starts with a page about Stormzy can signpost users to other stories and objects related to Grime subculture, Political Activism, Immigration or even Croydon: all topics relating to Stormzy that can take you on ‘lateral’ journeys across our collections and stories, a rabbit hole of discovery.
  • We have also used the natural language processing capabilities of IBM Watson to extract keywords from our collections data and surface lateral rather than literal connections that go beyond surface similarities. If you’re looking at a ‘Charles II five guinea coin minted in 1688’, we shouldn’t just show you other coins; you might also want contextually related content about, for example, Charles II, Guinea in West Africa and the Glorious Revolution of 1688.
OK so I'm looking at a gold coin, don't just signpost me to other gold coins...
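As a rough illustration of the topic-mapping idea described above, here is a minimal Python sketch. All tag names, topic names and records are invented for illustration; the museum’s actual pipeline uses YAKE, OpenAI and IBM Watson rather than a hand-written mapping like this.

```python
# Sketch: map granular object tags onto a small broad-topic taxonomy, then
# use shared broad topics to suggest lateral (not just literal) related pages.
# All tags, topics and catalogue entries below are invented for illustration.

GRANULAR_TO_TOPIC = {
    "grime music": "Music & Nightlife",
    "croydon": "London Places",
    "windrush generation": "Immigration & Identity",
    "protest banner": "Political Activism",
    "five guinea coin": "Money & Trade",
    "glorious revolution": "Royalty & Power",
}

def broad_topics(granular_tags):
    """Map an object's granular tags onto the broad topic taxonomy."""
    return {GRANULAR_TO_TOPIC[t] for t in granular_tags if t in GRANULAR_TO_TOPIC}

def lateral_matches(page_tags, catalogue):
    """Rank other pages by how many broad topics they share with this page."""
    topics = broad_topics(page_tags)
    scored = []
    for title, tags in catalogue.items():
        shared = topics & broad_topics(tags)
        if shared:
            scored.append((len(shared), title, sorted(shared)))
    # Most shared topics first; ties broken by title order from the sort.
    return [(title, shared) for _, title, shared in sorted(scored, reverse=True)]

catalogue = {
    "Stormzy at Glastonbury": ["grime music", "croydon"],
    "Empire Windrush photographs": ["windrush generation"],
    "Suffragette banner": ["protest banner"],
}
```

The point of the sketch is the two-step lookup: granular tags stay as they are in the collections data, and only the thin mapping layer needs curating to unlock topic-based connections.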

What else we’re doing with AI

We’ve also used AI to create alt text for all of the object images in our collections online, so that for the first time ever these images are accessible to users with screen readers. With over 130,000 objects online, this was not a task we could ever have achieved with humans alone. It’s not perfect of course, but OpenAI’s image-description capability is pretty amazing in how it generates descriptive text and contextualises it, applying inclusive language with intelligence.

We ran some very specific tests to see how it would cope with controversial images and difficult subject matter, and it came up trumps again and again. Choosing to use AI in this way has been a somewhat controversial decision, of course. Museums are academic places, fixated on accuracy, and risk appetite is often low. So we weighed up the pros and cons: 90,000+ object images with no alt text at all, or AI-generated alt text for all of them with an acceptance that there may be some imperfections.

We have chosen to apply the ‘matter-of-fact’ level of descriptiveness to our alt text, in order to minimise the likelihood of any supplementary or assumed details being added and to stick to the practical detail. Contextual information about the objects can be found in the object descriptions and details, and this is all generated by our expert humans!
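To make the ‘matter-of-fact’ idea concrete, here is a hedged sketch of how an alt-text request to a vision model might be assembled. The prompt wording, function name and field values are all hypothetical, not the museum’s actual pipeline; the payload follows the general shape of a chat-completions request with an image input, and no API call is made here.

```python
# Sketch: build a "matter-of-fact" alt-text request payload for a vision
# model. Prompt text and model name are illustrative assumptions; this only
# constructs the request dictionary, it does not call any API.

MATTER_OF_FACT_PROMPT = (
    "Describe this museum object image for screen-reader users. "
    "Stick to practical, observable detail; do not guess context, "
    "dates or people, and use inclusive language. One or two sentences."
)

def build_alt_text_request(image_url, model="gpt-4o"):
    """Assemble a chat-completions style payload asking for plain alt text."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": MATTER_OF_FACT_PROMPT},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 120,  # alt text should stay short
    }
```

The design choice worth noting is that the restraint lives in the prompt: the model is explicitly told not to add assumed context, mirroring the decision to keep contextual information in the human-written object descriptions.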

I'm interested to hear from anyone else using AI to manage high volume tasks and what conversations you've had around risk in your organisations. If that's you drop me a LinkedIn message.


What next?

OK, that’s Part 1 over. If you want to read more, there’s an overview of everything that’s new about the website in this blog.

In the coming weeks I'll share more about working with multi-agency teams, structuring digital teams, implementing a DAMS, ticket purchase path, CRM and SEO-focused content strategy.

We are very grateful that this project was developed with the support of Bloomberg Philanthropies’ Digital Accelerator for Arts and Culture.



Trish Thomas, Head of Digital Innovation, London Museum

Trish on LinkedIn

Resource type: Case studies | Published: 2024