Surveys 101

Surveys are a blunt tool, but they are good for getting an idea of what a large number of people think about your project, and the impact your project has had on them.

Survey Basics

Surveys101.png

The following flow chart sets out the things to think about and do when designing and analysing a survey.

Step 1: Assess the feasibility of a survey

Before you even decide to run a survey, ask yourself - do I have the time, money and access to the respondent population to run a survey?

Is the population big enough to warrant a survey, or am I better off with a handful of interviews?

You can calculate the size of the sample you will need to obtain a margin of error of, say, 10% or less. This means your results can be read within a margin of plus or minus 10%. I use a free sample size calculator to check whether I have a big enough sample, or to work out the minimum number of responses I need for a margin of error of no more than 10%.
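If you are curious what those free calculators are doing under the hood, the standard formula is short enough to sketch yourself. This sketch assumes a 95% confidence level and maximum variability (p = 0.5); the function name is mine, not from any particular calculator:

```python
import math

def required_sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Minimum responses needed so results are representative within the
    given margin of error, at 95% confidence and worst-case variability,
    with a finite population correction for small populations."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                # finite population correction
    return math.ceil(n)

print(required_sample_size(500, 0.10))    # 81 responses needed from 500 participants
print(required_sample_size(10000, 0.10))  # 96 - the required sample barely grows
```

Notice how little the required sample grows with the population: that is why small populations need something close to a census, while large ones don't.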

If your population is less than 100 - e.g. a small group of 20 participants in an arts workshop - then you might want to consider doing a series of interviews, or a focus group conversation. You can still run a survey, but unless you get almost 100% response rate, you will only be using the results as descriptive or indicative - you will not be able to say the results are representative of the whole participant population. And that’s fine - as long as you qualify upfront what your results do, and do not, tell you.

Consider how you will reach your respondent population at this stage. Do you have their email addresses? Will you need to have people on the ground with iPads or clipboards, collecting email addresses or asking the survey questions on the spot?

Step 2: Conduct qualitative research to inform the survey

If you have the time and resources, we always advise conducting some qualitative research to inform the survey design. That way you can use the language of your respondent population in the survey.

For example, once I wrote a survey question for a group of year six students. I used the term “It pushed me out of my comfort zone.”

I was lucky to have the chance to talk to some students before I sent out the survey, and when I tried this sentence out on them, I was met with blank stares.

But what the students said which meant something to them was, “It was hard but worth it.” I would not normally use a phrase like that in a survey because it has two concepts in it at once - but for the students, “hard but worth it” was a single concept which worked.

Step 3: Design the sample

Some questions to ask when you design the sample (which just means, when you work out who you are going to send your survey to):

  • Am I using a sample, or do I really need a census? (You’ll want to collect close to a census if the group is small, e.g. fewer than 20.)

  • Sample structure (e.g. random or constructed) - are you going to target specific participant types based on the priorities of the project, or draw your sample at random? Typically our sample self-selects based on who answers the survey, but if you can, try to target groups which are typically under-represented in samples, e.g. people from culturally and linguistically diverse backgrounds, or people with disabilities. For these groups, set aside some resources to do interviews or to survey people directly.

  • Consider the estimated demographics and size of population, which will affect how you recruit so as to get a representative sample.

  • Think about the burden already on these participants when it comes to being over-researched. Many target groups have research or survey fatigue, so think about whether you really need to survey them, and if you do, make it as easy as possible for them to share their feedback. Take as much of the burden on yourself as you can.

  • Recruitment method – how will you get respondents? Printed surveys handed out at an event? Collecting email addresses? Social media promotion?

  • Confidence interval – use a sample size calculator to work out your margin of error, or the “confidence” with which you can rely on your results to be representative.

  • Focus on people from the target populations you hoped your project would change (but still sample across the spectrum).

Step 4: Design the survey

When you design the survey, you have to think about how to avoid bias as much as you can. One way I like to think about it is, what questions would I ask if I was trying to prove that my project did not work?

Click here to see an example participant survey we have prepared.

Avoiding bias

Example of reframing a statement in a survey.

Some types of bias include:

  • Social desirability bias – tendency to answer in a way the respondent deems more socially acceptable than their ‘true’ answer. Can include taking unspoken social cues from the researcher

  • Cultural bias – desirability of certain responses as seen through a particular cultural lens; cultural understanding of certain terms

  • Acquiescence bias – tendency to answer yes

  • Demand characteristics – tendency to adapt behaviour out of a desire to be a ‘good’ experiment subject - the respondent tries to work out the hypothesis, and alters their behaviour or responses in order to support it

  • Question order bias – the way you order questions can affect responses e.g. if an earlier question creates an unintended context for later questions / answers to later questions are affected by what the respondent thinks would be fair given their response to the earlier question.

Question design


Things to keep in mind when designing your questions:

  • Use language that makes sense to your target sample

  • Ask for firsthand experiences

  • Ask one question at a time (e.g. don’t ask “Was it fun and enjoyable?”)

  • Ask the question last in the sentence.

  • Provide memory aids.

  • Ask demographics towards the end of a survey (because this is the most personal information you ask for, so you should build up trust before you do).


Response types

Often you will see surveys use the 5- or 7-point Likert scale. You know the scale - Strongly agree. Agree. Neutral. Disagree. Strongly disagree.

This works well for samples of more than 100, but if you have a small sample, we recommend sticking with a Yes/No/N/A response scale. Otherwise you will end up in the analysis stage simply adding together your Strongly Agree and Agree responses (and so on), because you didn’t get enough responses to make meaningful comments at the five-point level.
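That collapsing step is mechanical, which is exactly why it is a sign the five-point scale wasn't earning its keep. A sketch with made-up responses:

```python
from collections import Counter

# Made-up responses to one five-point Likert question
responses = ["Strongly agree", "Agree", "Neutral", "Disagree", "Agree",
             "Strongly disagree", "Agree", "Neutral"]

# Collapse the five-point scale into three broader buckets
collapse = {"Strongly agree": "Agree", "Agree": "Agree",
            "Neutral": "Neutral",
            "Disagree": "Disagree", "Strongly disagree": "Disagree"}

counts = Counter(collapse[r] for r in responses)
print(dict(counts))  # Agree: 4, Neutral: 2, Disagree: 2
```

If this is the only level at which you can report, you could have asked a Yes/No/N/A question in the first place and spared respondents the finer-grained decision.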

Try to use the same scale throughout the survey if you can. We tend to use the Yes/No/N/A, and then at the end of the survey we might add in a “score out of 10” question for an overall evaluation.

Also try to limit the use of open text fields. We tend to include one, sometimes two, open text questions in a survey. Analysis becomes more complex the more open text fields you have, and they can slow down the flow of a survey. We also find that people tend to have one main thing they want to say: they will say it, and then feel the pressure to re-word and repeat it in all the other open text fields.

Step 5: Test the survey

Test the survey with a small group of respondents before you roll it out. That way you can check your language and make sure the survey software works.


Step 6: Implement the survey

Implementing the survey does not end with sending it out to your email database. You need to check your response rate while the survey is in the field, and make sure you are getting enough responses from across the demographic spectrum of participants. Chase and encourage people, offer incentives if you can, and if you are not getting a representative sample, spend some time on the phone asking the survey questions directly of people from the under-represented groups.
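A mid-fieldwork representation check can be as simple as comparing the demographic mix of the responses so far against the known mix of the participant population. All the figures here are made up, and the "half the expected share" threshold is just an illustrative rule of thumb:

```python
# Assumed demographic shares of the participant population
population_share = {"18-34": 0.30, "35-54": 0.45, "55+": 0.25}
# Responses received so far, by demographic group
responses_so_far = {"18-34": 40, "35-54": 70, "55+": 10}

total = sum(responses_so_far.values())
under_represented = {
    # Crude flag: a group's share of responses is below half its population share
    group: responses_so_far[group] / total < 0.5 * share
    for group, share in population_share.items()
}
for group, flagged in under_represented.items():
    print(group, "needs chasing" if flagged else "ok")
```

With these numbers, the 55+ group supplies only about 8% of responses against 25% of the population, so that is where the phone calls should go.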

Step 7: Analyse the results

At this stage you will be glad you included demographics which directly relate to the types of groups you might expect to have different experiences. This might be men/women/non-binary, or age range, but it might also be something like level of arts participation, e.g. people who don’t participate in the arts regularly might have different results from people who do.
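In practice this means a simple cross-tab of answers against each demographic question. The question wording, group labels, and figures below are invented for illustration:

```python
from collections import Counter, defaultdict

# Made-up (group, answer) pairs for one Yes/No question,
# grouped by a demographic question on arts participation
answers = [
    ("regular participant", "Yes"), ("regular participant", "Yes"),
    ("regular participant", "No"),
    ("non-participant", "Yes"), ("non-participant", "No"),
    ("non-participant", "No"),
]

by_group = defaultdict(Counter)
for group, answer in answers:
    by_group[group][answer] += 1

# Share of Yes answers within each demographic group
yes_rates = {g: c["Yes"] / sum(c.values()) for g, c in by_group.items()}
for group, rate in yes_rates.items():
    print(f"{group}: {rate:.0%} answered Yes")
```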

Check validity

You should also check the validity of your survey at this stage.

  • Compare the results against qualitative findings, or findings from other sources e.g. anecdotal feedback, participant journals.

  • Check if the same respondent has doubled up on positive and negative responses (e.g. agreed with the statement that the project was fun and then with a statement that the project was not fun).

  • Check that there is distribution across the response scale, otherwise you might have designed a survey which pushes people to consistently answer yes or no.
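The doubled-up-response check is easy to automate if you included a reversed statement in the survey. The column names here are hypothetical, not from any particular survey tool:

```python
# Flag respondents who agreed with both a statement and its reversal
# (hypothetical column names)
survey_rows = [
    {"id": 1, "project_was_fun": "Yes", "project_was_not_fun": "Yes"},
    {"id": 2, "project_was_fun": "Yes", "project_was_not_fun": "No"},
    {"id": 3, "project_was_fun": "No",  "project_was_not_fun": "Yes"},
]

contradictory_ids = [
    r["id"] for r in survey_rows
    if r["project_was_fun"] == "Yes" and r["project_was_not_fun"] == "Yes"
]
print(contradictory_ids)  # only respondent 1 contradicts themselves
```

Respondents flagged this way may have been acquiescing (answering yes to everything), so consider excluding or at least footnoting them.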


Step 8: Report on results

Share your results with the respondents if you can. It is part of the relationship of mutual give and take you have with your participants, and the data really belongs to them as much as to you.

Step 9: Further qualitative research

The survey will hopefully throw up some unexpected outcomes. Make sure you have set aside a bit of time and money to conduct some interviews or conversations digging deeper into the unexpected outcomes, the key outcomes, or the experiences of people under-represented in the survey responses.