Artificial Intelligence (AI) Example AI Policy

By Arts Marketing Association (AMA)

SUMMARY

An example AI policy created to support cultural organisations across the UK. It can be used as a starting point for your organisation and then amended to fit your specific context. Produced by the Arts Marketing Association and Target Internet in collaboration with the AI Sector Support Group.

Introduction

This policy has been created to support cultural organisations across the UK. As AI use increases, the sector workforce needs support at a governance level to ensure that AI is employed effectively and within considered guidelines. Please use this policy as a starting point for your organisation and amend it to fit your specific context. We would like to gather final AI policies from across the cultural sector to see how they are working in practice. Please share your final policy with cath@a-m-a.co.uk.

This policy has been produced by the Arts Marketing Association and Target Internet in collaboration with the AI Sector Support Group. The organisations involved are: Arts Marketing Association; AIM — Association of Independent Museums; Black Lives in Music; Clore Leadership; Family Arts Campaign; Future Arts Centres; Independent Theatre Council; Jocelyn Burnham (aiforculture); Kids in Museums; Museums Association; Music Mark; One Dance UK; OutdoorArtsUK; The Audience Agency; The Space; UK Theatre.

Please note that this policy was informed by the Codelabs Sample AI Integration and Experimentation Policy, the Cambridge University Generative AI Guidelines, the BBC Generative AI Guidelines, the Civil Service Guidance on Generative AI, the National Lottery Heritage Fund's Digital Heritage Leadership Briefing on AI, and Claude AI.


Policy

1. Responsible Experimentation

We encourage staff to experiment with AI tools to discover innovative ways to advance our mission. Experiments should be conducted responsibly, with appropriate safeguards and oversight. All AI-related projects must be reported to management and aligned with our organisation’s values and strategic goals.

2. Data Protection and Privacy

When using AI systems that process personal data, we will comply with all relevant data protection legislation, including the UK GDPR. We will be transparent about our use of personal data in AI and obtain explicit consent where required. Sensitive, confidential, or personal information should not be input into third-party AI systems without appropriate data protection safeguards.

3. Intellectual Property and Cultural Ownership

We respect the intellectual property rights of artists, creators, and communities whose works or cultural heritage may be used in AI training data or outputs. We will obtain necessary permissions and give appropriate attribution. We will carefully consider the cultural and ethical implications of using AI in relation to objects or knowledge of cultural significance.

4. Human Oversight and Editorial Control

While we may use AI tools to generate ideas, content, or analysis, we will not publish or act on AI outputs without human review and editorial control. All AI-generated content will be fact-checked and edited by staff to ensure accuracy, alignment with our brand voice, and adherence to our institutional values before publication.

5. Transparency and Accountability

We will be transparent about our use of AI systems both internally and externally. When publishing AI-generated content or using AI in visitor-facing applications, we will clearly label it as such. We will maintain audit trails of our AI usage and establish clear lines of accountability, including nominating senior responsible owners for AI projects.

6. Monitoring Bias and Fairness

We recognise that AI systems can perpetuate or amplify biases present in training data and design. We will take proactive steps to identify and mitigate biases, including using diverse and representative training data, testing for fairness, and auditing our AI systems regularly. We will also be alert to the risk of AI-generated content creating a misleading or unbalanced interpretation of art, history, or culture.

7. Social Impact and Job Displacement

We will strive to use AI in ways that promote cultural understanding, inclusion, and accessibility. We will be mindful of the potential impact of automation on our workforce and commit to supporting staff in developing the skills needed to work effectively with AI.

8. Environmental Sustainability

Recognising the potentially significant environmental footprint of AI, we will aim to use AI efficiently and avoid unnecessary computational waste. We will give preference to AI providers with strong environmental credentials and sustainable practices.

9. Stakeholder Engagement and Ethical Review

We will proactively engage with our audiences, local communities, cultural stakeholders, academic experts, and policymakers to inform our approach to AI. Where appropriate, we will establish an ethical review process to assess AI projects and ensure they align with our values and legal obligations.

10. Training and Awareness

We will invest in training our staff, volunteers, and partners to understand the capabilities and limitations of AI, as well as the ethical and legal considerations around its use. We will foster a culture of openness, where people feel empowered to ask questions, raise concerns, and share learnings around the use of AI.

11. Artistic Freedom and Human Creativity

While recognising the creative potential of AI, we affirm the enduring importance of human creativity and artistic expression. We commit to using AI to complement and enhance human creativity, not replace it.

12. Collaboration and Knowledge Sharing

Given the complex challenges posed by AI, we encourage collaboration and knowledge sharing with other arts and cultural organisations, academic institutions, tech providers, policymakers, and civil society. This includes participating in sector-wide initiatives to develop AI ethics guidelines, share best practices, and advocate for responsible AI policies.

13. Quality Assurance of AI-Generated Content

We are committed to ensuring that any content created with generative AI is of the highest quality. This includes:

  • Rigorous Review Process: All AI-generated content will undergo a stringent review and quality assurance process to ensure it meets our standards for accuracy, relevance, and artistic integrity.
  • Alignment with Organisational Values: Content produced using AI must align with our organisation’s values, mission, and strategic goals.
  • Continuous Improvement: We will regularly assess and refine our AI systems and processes to maintain and enhance the quality of AI-generated content.
  • Training and Guidelines: Staff will be trained on best practices for using AI tools to create high-quality content, and clear guidelines will be established to support this goal.

14. Use of AI in Recruitment

We are committed to using AI in recruitment processes responsibly and fairly:

  • Bias Mitigation: We will implement measures to identify and mitigate biases in AI-driven recruitment tools to ensure fair and equitable hiring practices.
  • Human Oversight: AI tools will assist in the recruitment process, but final hiring decisions will be made by human recruiters to ensure a holistic evaluation of candidates.
  • Transparency: Candidates will be informed if AI tools are being used in the recruitment process, and we will provide explanations of how these tools influence decision-making.

15. Transparency in Decision-Making

We will ensure transparency in decision-making processes where AI has been used:

  • Clear Communication: We will clearly communicate when and how AI systems are involved in decision-making processes, both internally and externally.
  • Documenting Decisions: We will maintain detailed records of decisions influenced or made by AI systems, including the rationale and data inputs used by the AI.
  • Accountability: We will establish clear lines of accountability for decisions made with the assistance of AI, ensuring responsible human oversight and intervention when necessary.

Resource type: Guide/tools | Published: 2024