Art Fund’s AI policy: why, how and what

By Art Fund

SUMMARY

Mike Keating, Art Fund's Associate Director - Digital Experience, spoke at AMA's Digital Marketing Day 2024 about how he and his team developed an AI Policy for their organisation. Mike takes us through his session slides about why they created the AI policy, how they did it, what they included and what they learned along the way.

Overview

  • Why we created an AI policy
  • How we created an AI policy
  • What the policy says (and doesn't say)
  • AI governance & management
  • What we learned and how to start yours

Why?

  • Included an AI question in a digital skills project
  • Learned that a third of staff were already using it
  • Provide a framework for safe usage of AI at Art Fund
  • Protect the organisation from risk
  • Help decide "what to AI"
  • Provide skills for people who aren't engaged

How?

  • The senior management team (SMT) instructed a small, three-person team to create a policy
  • Team agreed on a principles-based approach
  • Ran a facilitated SMT session to establish red lines and areas of focus
  • Principles were accepted by SMT, but the principles-as-policy approach was not
  • Smaller team worked on a rules-based approach
  • Looked at digital skills survey response data relating to AI
  • Drafted rules based on the principles and staff use cases
  • Evaluated tools that could support the use cases

Rules

You can:

  • ✓ Use AI to create draft copy, images and audio
  • ✓ Use AI to generate ideas
  • ✓ Use AI-generated outputs, provided you fact-check and sense-check them first

You cannot:

  • ✗ Publish AI-only content directly
  • ✗ Publish AI-generated images or video
  • ✗ Delegate a business decision to AI
  • ✗ Upload sensitive data to a trainable AI tool
  • ✗ Integrate your Art Fund account with an unapproved AI tool


Examples

You can use AI to:

  • Help you with background research
  • Summarise long pieces of content or reports you're reading
  • Generate ideas, drafts or structures for written work
  • Get feedback on something you've written
  • Analyse data (with permission)

You can't:

  • Analyse any sensitive data in a free AI service
  • Send an AI to a meeting in your place
  • Edit images of artworks
  • Pretend work generated solely by AI is yours
  • Use AI in any content without sense-checking and fact-checking it
  • Use AI to write grant applications

Examples of more detailed use cases:

I'd like to use AI to summarise funding applications

AI can be used to summarise large chunks of text, so you could use it to summarise text from funding applications. However, an application may include personal data or very confidential information. If it does, you must remove the personal data before you input the text into any AI system. Applications may also include business-sensitive information; provided there is no personal data, this can go into Art Fund's paid ChatGPT account, because ChatGPT doesn't train on that data. Contact Mike for more information. Bear in mind that AI isn't good at summarising financial information and can't reliably count, so don't use it for that part of the task.
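
As an illustration only (this sketch is not part of Art Fund's policy), here is roughly what that redaction step could look like in Python. The patterns and placeholder labels are hypothetical, and pattern-matching alone won't catch names or addresses, so a human check is still essential:

    import re

    # Illustrative patterns only: these catch emails and phone numbers,
    # but not names or addresses, so a human review is still required.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    }

    def redact(text):
        # Replace personal identifiers with placeholders before the text
        # goes anywhere near an AI tool.
        for label, pattern in PATTERNS.items():
            text = pattern.sub("[" + label + " REDACTED]", text)
        return text

    application = "Contact the applicant on jane@example.org or +44 20 7946 0958."
    print(redact(application))
    # Contact the applicant on [EMAIL REDACTED] or [PHONE REDACTED].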

I'd like to use AI to identify trends in our membership

You can't upload personal data to any AI tool: doing so would breach GDPR without the individual's consent, and we don't have it. However, you can analyse some information (with the personal data removed) in Art Fund's paid ChatGPT account, because it isn't training on that data. Contact Mike for more information.
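
Again as an illustration only (not part of the policy), stripping the obviously personal columns from a membership export before any analysis might look like the sketch below; the file name and column names are hypothetical:

    import pandas as pd

    # Hypothetical column names: adjust these to match your actual export.
    PERSONAL_COLUMNS = ["name", "email", "postcode", "date_of_birth"]

    members = pd.read_csv("membership_export.csv")

    # Keep only non-identifying fields (e.g. join year, membership tier)
    # before any of the data goes near an AI tool.
    safe = members.drop(columns=PERSONAL_COLUMNS, errors="ignore")
    safe.to_csv("membership_for_analysis.csv", index=False)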


Tools

Here is a list of tools that have been approved for use at Art Fund. The AI working group meets every three months to review these tools and consider new use cases.

  • ChatGPT, Gemini or Claude can all be used for a range of learning, research, and draft copy or image generation purposes
  • Microsoft Copilot can be used for a variety of tasks related to the tools it's integrated with, when it's activated
  • Other tools including Bibli (Bynder), Canva and Adobe come with AI features built-in. These are fine to use as long as they're in line with the rest of our rules
  • Art Fund's paid ChatGPT account is the only tool to be used for data analysis. Please contact Mike for more information

Learning

  • The community of practice is intended for people to share useful and not-so-useful ways they've used AI in their roles, so we can all learn from one another
  • You can join the AI Teams channel by following this link
  • You can rewatch our recent training sessions using the links below:
    • Intro to AI
    • AI for creative work
    • AI for data & research

Governance

  • The AI working group will meet every 3 months to review how staff are using AI at Art Fund, what new tools are available, and whether the rules are fit for purpose
  • The members of the AI working group are: Sarah, Vaish, Alex, Louise, Yvonne, Peter, George and Mike

Suppliers

Many of the companies we work with will also be using AI. We should understand if and how they use it, so we can protect ourselves from risk.

  • You should ask any new suppliers you're engaging if they use AI, and how
  • You should also ask any existing suppliers when you're engaging them on new projects
  • This is so we can understand if they're using AI in a way that doesn't align with our values or exposes us to new risks
  • This is especially important for projects using customer data and creative projects

AI principles

Principle 1

Understand what generative AI is and what its limitations are before you use it

Generative AI is a form of AI that takes what you've given it and uses it to create something new based on what you've asked for. This usually involves you writing a prompt and the tool generating copy, images, audio or video. This could save you time, especially for tasks like first drafts of presentations or copy.

However, AI tools have limitations, and their output is only as good as the prompt you give them.

AI tools use the data they've been trained on to guess what the best answer to your prompt is. This can mean responses are factually incorrect, or sometimes completely made up (called hallucinations). They can also be affected by knowledge cut-offs and biases. LLMs also lack personal experience, real-world context and emotions.

For these reasons, AI outputs should be sense-checked and fact-checked before they're used.

Read more

  1. A generative AI intro explainer (5-10 minute read)
  2. A video explainer that covers what a Large Language Model is and how it works (22 mins)

Principles

  • Understand what gen AI is and its limitations
  • Use the right tools for the job
  • Fact-check and sense-check your AI outputs
  • Use generative AI lawfully, ethically and responsibly
  • Use generative AI tools safely and securely
  • Share what you've learned
  • Know how your suppliers use AI

Governance

  • A cross-org AI working group reviews the policy every 3 months
  • The group involves people at different levels, in different teams, with different opinions about AI
  • The group proposes changes to recommended tools and use cases, based on staff feedback
  • The group also discusses ethical and moral challenges
  • The updated policy is then reviewed by Legal and our SMT
  • We plan to run training with each new version of the policy
  • Training is a mix of intro sessions and ones aligned with practical use cases based on staff behaviour
  • It also needs to align with your other policies, like sustainability, EDI and procurement

What we learned

  • Provide clear direction to staff
  • Cater for people who aren't using it
  • Use easy-to-understand examples
  • Get IT on board
  • Tie it in with your existing workplace policies
  • Demonstrate its value
  • Make it fun

Writing your own policy

  • Ask an AI
  • Start simple
  • Accept your fate
  • Decide what you're using AI for
  • Agree your red lines
  • Find out who's using it, and why, and include them

Art Fund's AI Policy presentation slides (PDF)


If you would like to see a copy of Art Fund's full AI Policy, please contact Mike directly: mkeating@artfund.org


Resource type: Guide/tools | Published: 2024