Artificial Intelligence (AI) Example AI Policy

By Arts Marketing Association (AMA)

SUMMARY

This policy provides a governance framework and practical guidance for the use of artificial intelligence (AI) in cultural organisations. It aims to ensure responsible, ethical, and effective deployment of AI technologies while preserving our cultural values and mission. It can be used as a starting point for your organisation and then amended to fit your specific context. Produced by the Arts Marketing Association and Target Internet in collaboration with the AI Sector Support Group. Version 2, as at November 2024.

Introduction

This is version 2 of a policy created to support cultural organisations across the UK. Version 1 was written in July 2024 as AI use increased across the sector workforce. We recognised that support at a governance level was needed to ensure that AI is used effectively and with considered guidelines. Having shared version 1 with the sector, received feedback, and seen an increase in policy examples, we have developed version 2 (November 2024).

We have kept this policy at a governance level but have added questions that you might want to ask, leading towards the practical guidance that teams need.

Please use this policy as a starting point for your organisation and amend it to fit your specific context. We continue to gather working AI policies from across the cultural sector to see how these are taking shape in practice. Please share your policy with cath@a-m-a.co.uk.

This policy has been produced by the Arts Marketing Association and Target Internet in collaboration with the AI Sector Support Group. The organisations involved are: Arts Marketing Association; AIM — Association of Independent Museums; Black Lives in Music; Clore Leadership; Family Arts Campaign; Future Arts Centres; Independent Theatre Council; Jocelyn Burnham (aiforculture); Kids in Museums; Museums Association; Music Mark; One Dance UK; OutdoorArtsUK; The Audience Agency; The Space; UK Theatre.

Please note that version 1 of this policy was informed by the Codelabs sample AI integration and experimentation policy (https://codehopelabs.com/sample-llm-policy), Cambridge University Generative AI Guidelines, BBC Generative AI Guidelines, Civil Service Guidance on Generative AI, the National Lottery Heritage Fund Digital Heritage Leadership Briefing on AI, and Claude AI.

Version 2 of this policy is informed by feedback from sector professionals and further research into sample policies emerging across the cultural and commercial sectors.  


Using this policy

This policy has been designed as a jumping-off point for Boards/CEOs and their teams. You will want to make it relevant to your organisation, considering your purpose, stakeholders, and context.

You may want to set up an AI working group to develop this policy and keep things moving as you integrate AI into your team’s activity.  

You may want to consider where AI is already being used and start by addressing the policy areas that most impact this existing activity.  

Our draft policy has 14 policy areas for you to consider. We've minimised crossover between them where possible, but topics such as team training are relevant in a number of areas.

During the process you are likely to want to consider what AI technologies you want the policy to cover. This could include: 

  • Content generation tools 
  • Collections management systems 
  • Visitor experience technologies 
  • Administrative and operational systems 
  • Marketing and communication tools 

Core policy areas to consider

1. Responsible experimentation

Governance principle: 

We encourage staff to conduct responsible experiments aligned with our mission and values. 

Key questions to consider: 

  • How do we know the proposed AI use advances our mission? 
  • What risks need to be assessed before going ahead? 
  • Who needs to be consulted before AI tools can be used? 
  • When can team members go ahead without consulting others? 
  • Do we want to record all AI experiments? How? 
  • What success metrics do we want to establish?  

2. Data protection and privacy

Governance principle: 

When using AI systems that process personal data, we will comply with all relevant data protection legislation, including the UK GDPR. We will be transparent about our use of personal data in AI and obtain explicit consent where required. Sensitive, confidential, or personal information should not be input into third-party AI systems without appropriate data protection safeguards. 

Key questions to consider: 

  • What data will the AI system process? 
  • Where will data be stored and processed? 
  • What consent will we need? How will we get it? 
  • When might we need a data protection assessment? 
  • What data are we confident putting into different AI systems? For example, does it change if the system is paid for and has settings to keep all information within a temporary chat?
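
To make the idea of a safeguard concrete, below is a minimal, hypothetical Python sketch of one routine measure: stripping obvious personal identifiers from text before it is sent to a third-party AI system. The patterns and the example text are illustrative assumptions only; dependable anonymisation needs a proper data protection assessment, not a handful of regular expressions.

    import re

    # Illustrative patterns only: real anonymisation needs a data
    # protection review, not just regular expressions.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    UK_PHONE = re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b")

    def redact_personal_data(text: str) -> str:
        """Replace obvious personal identifiers before the text
        leaves the organisation for a third-party AI system."""
        text = EMAIL.sub("[EMAIL REMOVED]", text)
        text = UK_PHONE.sub("[PHONE REMOVED]", text)
        return text

    enquiry = "Please reply to jo.bloggs@example.org or 020 7946 0000."
    safe_text = redact_personal_data(enquiry)
    # safe_text is what gets passed to the third-party tool; the
    # original enquiry stays within the organisation's own systems.

A sketch like this does not replace consent or a data protection assessment; it simply illustrates the kind of practical safeguard the questions above should lead your team towards.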

3. Intellectual property and cultural ownership

Governance principle: 

We respect the intellectual property rights of artists, creators, and communities whose works or cultural heritage may be used in AI training data or outputs. We will obtain necessary permissions and give appropriate attribution where it is within the scope of our control. We will carefully consider the cultural and ethical implications of using AI in relation to objects or knowledge of cultural significance. 

Key questions to consider: 

  • How/when do we assess if our AI use impacts culturally sensitive material? 
  • When do we need to consult relevant community stakeholders? 
  • How do we ensure appropriate attribution? 
  • What safeguards are needed? 

4. Human oversight and editorial control

Governance principle: 

While we may use AI tools to generate ideas, content, or analysis, we will not publish or act on AI outputs without human review and editorial control. All AI-generated content will be fact-checked and edited by staff to ensure accuracy, alignment with our brand voice, and adherence to our institutional values before publication. 

Key questions to consider: 

  • Who is responsible for reviewing AI outputs? 
  • What quality criteria should be applied? 
  • How do we document the review process? 
  • What escalation procedures are needed? 

5. Transparency and accountability

Governance principle: 

We will be transparent about our use of AI systems both internally and externally. When publishing AI-generated content or using AI in visitor-facing applications, we will clearly label it as such. We will maintain audit trails of our AI usage and establish clear lines of accountability, including nominating senior responsible owners for AI projects. 

Key questions to consider: 

  • What can we reasonably audit? 
  • How far back do we go in the audit trail? 
  • Where is the line on what we label as AI-generated? 
  • If AI was used to generate ideas as one part of the process, do we label that?
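
One practical answer to the audit questions above is a simple AI usage register. The Python sketch below shows a minimal, hypothetical CSV-based version; the field names are assumptions to adapt to what your organisation can reasonably audit, not a prescribed format.

    import csv
    from datetime import date

    # Hypothetical register fields: adjust to what you can reasonably audit.
    FIELDS = ["date", "tool", "purpose", "reviewed_by",
              "published", "labelled_as_ai"]

    def log_ai_use(path, entry):
        """Append one row to a CSV-based AI usage register."""
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # empty file: write the header row first
                writer.writeheader()
            writer.writerow(entry)

    log_ai_use("ai_usage_register.csv", {
        "date": date.today().isoformat(),
        "tool": "generative text tool",
        "purpose": "first draft of exhibition blurb",
        "reviewed_by": "marketing officer",
        "published": "yes",
        "labelled_as_ai": "yes",
    })

Even a lightweight register like this gives you an audit trail to point to, and makes the labelling question answerable case by case.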

6. Resource management

Governance principle: 

AI deployment must be resourced appropriately and sustainably. 

Key questions to consider: 

  • What budget is required? 
  • What staff training is needed? 
  • How do we measure ROI? 

7. Monitoring bias and fairness

Governance principle: 

We recognise that AI systems can perpetuate or amplify biases present in training data and design. We will take proactive steps to identify and mitigate biases using human oversight. Where we develop our own AI systems we will endeavour to use training data that reduces the risk of bias. We will be alert to the risk of AI-generated content creating a misleading or unbalanced interpretation of art, history, or culture. 

Key questions to consider: 

  • What human checks can we put in place to identify bias? 
  • How far can we reduce bias through the tools and platforms we choose? 
  • Do we need to consider this in content coming in from external sources?

8. Social impact and job displacement

Governance principle: 

We will strive to use AI in ways that promote cultural understanding, inclusion, and accessibility. We will be mindful of the potential impact of automation on our workforce and commit to supporting staff in developing the skills needed to work effectively with AI. 

Key questions to consider: 

  • Do we need an AI working group? 
  • How do we plan the development of skills across the team? 

9. Environmental sustainability

Governance principle: 

Recognising the potentially significant environmental footprint of AI, we will aim to use AI efficiently and avoid unnecessary computational waste. We will give preference to AI providers with strong environmental credentials and sustainable practices. 

Key questions to consider: 

  • How do we assess which AI providers to work with? 
  • How can we measure any additional environmental footprint? 
  • How do we balance taking up the opportunities of AI with our environmental impact policy? 

10. Stakeholder engagement and ethical review

Governance principle: 

We will proactively engage with our audiences, local communities, cultural stakeholders, academic experts, and policymakers to inform our approach to AI. Where appropriate, we will establish an ethical review process to assess AI projects and ensure they align with our values and legal obligations. 

Key questions to consider: 

  • Which parts of our AI use should be informed by stakeholders? 
  • Who needs to input into designing the ethical review process to reduce bias? 
  • Who needs to be involved in implementing the ethical review process? 

11. Artistic freedom and human creativity

Governance principle: 

While recognising the creative potential of AI, this policy affirms the enduring importance of human creativity and artistic expression. We commit to using AI to complement and enhance human creativity, not replace it. 

Key questions to consider: 

  • How will we use AI in our creative/artistic work? 
  • How will we do this with our partners/artists? 
  • Where do we draw the line on what enhances human creativity and what replaces it? 

12. Collaboration and knowledge sharing

Governance principle: 

Given the complex challenges posed by AI, we encourage collaboration and knowledge sharing with other arts and cultural organisations, academic institutions, tech providers, policymakers, and civil society. This includes participating in sector-wide initiatives to develop AI ethics guidelines, share best practices, and advocate for responsible AI policies. 

Key questions to consider: 

  • Are there existing stakeholders/partners/communities of practice that we should be engaging with? 
  • Who will lead on this work and how will they manage the additional workload? 

13. Quality assurance of AI-generated content

Governance principle: 

We are committed to ensuring that any content created with generative AI is of the highest quality. This includes: 

  • Rigorous Review Process: All AI-generated content will undergo a stringent review and quality assurance process to ensure it meets our standards for accuracy, relevance, and artistic integrity. 
  • Alignment with Organisational Values: Content produced using AI must align with our organisation's values, mission, and strategic goals. 
  • Continuous Improvement: We will regularly assess and refine our AI systems and processes to maintain and enhance the quality of AI-generated content. 
  • Training and Guidelines: Staff will be trained on best practices for using AI tools to create high-quality content, and clear guidelines will be established to support this goal. 

Key questions to consider: 

  • Who is responsible for developing and implementing, for example, the quality assurance process? 
  • How do we check our AI work aligns with our strategic priorities? 

14. Use of AI in recruitment

Governance principle: 

We are committed to using AI in recruitment processes responsibly and fairly: 

  • Bias Mitigation: We will implement measures to identify and mitigate biases in AI-driven recruitment tools to ensure fair and equitable hiring practices. 
  • Human Oversight: AI tools will assist in the recruitment process, but final hiring decisions will be made by human recruiters to ensure a holistic evaluation of candidates. 
  • Transparency: Candidates will be informed if AI tools are being used in the recruitment process, and we will provide explanations of how these tools influence decision-making. 

Key questions to consider: 

  • Are there other organisations already doing this? 
  • Do we need to consider our stance on applicants using AI to write their applications? 

Additional things to consider

Roles and responsibilities

You may want to outline these. For example:

  • Board: Overall governance oversight 
  • Senior Management: Strategic implementation 
  • Department Heads: Operational implementation 
  • AI Champions: Day-to-day support 
  • All Staff: Responsible use 

Additional documents

Depending on your choices in developing the policy, you may need to develop:

  • AI usage register 
  • AI Project Assessment Template 
  • Training plan 
  • Quality Assurance Checklist 
  • Data Protection Checklist 
  • Success Metrics Framework 

 

 
