This guide is for researchers, arts marketers, and audience development and community engagement staff who want to evaluate public engagement activities aimed at general audiences. After explaining what monitoring and evaluation are, it takes you step by step through the decisions and actions needed to build a robust evaluation strategy, provides tools and question templates, and offers hints on how to present the evaluation report.
The first thing to do with your questionnaires is to code any questions where respondents have entered their own answers rather than ticking a box. Work through one question at a time: look at all the responses to Q1 together, then all those to Q2, and so on. You are looking for similar responses so that you can draw up a 'code frame' for the question; this allows you to add together similar responses from different people. Once you have your code frame, give each code a number. Then read each questionnaire and write the appropriate code or codes (people may have said more than one thing) beside the question. It is this number that you will enter into your dataset, not the verbatim comments, which should be kept separately.
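The coding process above can be sketched in a few lines of Python. The themes, code numbers, and respondent answers below are entirely hypothetical, invented for illustration; in practice the code frame emerges from reading the actual responses.

```python
# Hypothetical code frame drawn up after reading all responses to one
# open-ended question. Each theme gets a number.
code_frame = {
    1: "enjoyed the speaker",
    2: "venue was hard to find",
    3: "wanted more discussion time",
}

# Each respondent's verbatim answer has been read and assigned one or
# more codes; only the numeric codes go into the dataset, while the
# verbatim comments are stored separately.
coded_responses = {
    "respondent_01": [1],
    "respondent_02": [1, 3],  # this respondent mentioned two themes
    "respondent_03": [2],
}

# Add together similar responses from different people.
counts = {code: 0 for code in code_frame}
for codes in coded_responses.values():
    for code in codes:
        counts[code] += 1

for code, label in code_frame.items():
    print(f"Code {code} ({label}): {counts[code]} mention(s)")
```

Tallying the numeric codes like this is what makes it possible to report, for example, how many respondents raised the same theme.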
The most important things in question design are:
• avoid leading questions.
• avoid biased scales.
• never ask two questions in one, you don’t know which part people have answered; for example, ‘would you say that you understood the speaker and the discussant?’
• never ask hypothetical questions, for example, ‘will you go back to work after your baby is born?’ You can ask: ‘do you plan to go back to work after your baby is born?’
• make sure measurement bands don’t overlap. So don’t ask ‘how old are you?’ with a set of tick box answers that run 15-40, 40-60, 60 and over. The answers must run: 15-39, 40-59, 60 and over, otherwise those aged 40 and 60 don’t know which box to tick.
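The non-overlapping age bands in the last point can be checked with a short sketch. This is a hypothetical illustration using the corrected bands from the text (15-39, 40-59, 60 and over); the function name is invented.

```python
def age_band(age):
    """Return the tick-box band for a given age, or None if out of range."""
    if 15 <= age <= 39:
        return "15-39"
    if 40 <= age <= 59:
        return "40-59"
    if age >= 60:
        return "60 and over"
    return None  # under 15: not covered by these bands

# With non-overlapping bands, the boundary ages map to exactly one box.
print(age_band(40))  # falls only in "40-59"
print(age_band(60))  # falls only in "60 and over"
```

With the original overlapping bands (15-40, 40-60, 60 and over), someone aged 40 or 60 would match two boxes at once, which is exactly the ambiguity the guideline warns against.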