10 Essential Survey & Experimental Design Rules
Research is 90% planning and 10% execution.
The old saying “measure twice, cut once” applies to research as well – you’ll want to make sure your survey is designed as carefully as possible so you get the most out of your data. Here are 10 rules to help you design a survey like the pros.
1. Figure out exactly what you want to know
Many people go into a survey with only a general idea of what they want to study. But in order to make sure your data actually answer the question you want to investigate, it’s important to clearly define what you’re looking for.
Consider these research questions:
Incorrect:
How first-generation college students feel at university
Correct:
First-generation student perceptions of belonging with their peers
First-generation student perceived ability to do well in college
First-generation student cultural adjustment to college in freshman year
The point is to define your variables as clearly as possible. To create a survey or experiment without clearly establishing exactly what is being studied is like building a house knowing only that you want three bedrooms and two bathrooms. Before you build, you need to know exactly where the bathrooms will go, what their dimensions are, how they fit with the layout, etc.
2. Beware of framing effects – avoid loaded, leading, and double-barreled questions
A loaded question is one that contains, or is “loaded” with, a built-in presumption, so that any direct answer makes the respondent look guilty or commits them to a belief they may not hold. Consider a question about gun violence in America:
Incorrect:
Q) How many more mass shootings can we tolerate before changing gun laws?
This question is loaded because someone who advocates personal gun use has no way to answer it without implying they are guilty of “tolerating” mass shootings. Loaded questions show up most often around controversial issues, where the writer’s moral stance is embedded in the question. Similarly, a leading question “leads” the respondent toward a particular response. Try something more neutral instead:
Incorrect:
Q) Don’t you think legislation to curb gun violence in America is overdue?
Correct:
Q) Should America implement new firearm legislation?
Finally, a double-barreled question asks about two things at once, so a single answer is ambiguous. Split it into separate questions:
Incorrect:
Q) Do you think Congress and the Senate are doing a good job?
Correct:
Q) Do you think Congress is doing a good job?
Q) Do you think the Senate is doing a good job?
3. Use established scales that are valid, reliable, and encompassing
No matter what you’re trying to study, there’s probably a validated, reliable scale to measure it. Often an existing scale can be used directly; in other cases one can be adapted. This matters because published scales tend to be valid (meaning they capture what they claim to capture), reliable (meaning they yield responses that are stable across time and situations), and encompassing (meaning they ask enough questions to explore the different facets of each topic).
Here’s a shortlist of just a few scales that tap belonging, self-esteem, and brand perceptions in different domains.
| Belonging | Self-Esteem | Brand Perceptions |
|---|---|---|
| Academic Fit (2012) | Collective Esteem (2015) | Brand Identification (2010) |
| Social Group Belonging (2015) | Single-Item Esteem (1990) | Brand Loyalty (2011) |
| Workplace Belonging (2014) | Race-Esteem (1995) | Brand Satisfaction (2007) |
| Community Integration (2005) | Relational Self-Esteem (2012) | Brand Awareness (2016) |
Don’t try to re-invent the wheel. It’s already been invented and it comes in survey form.
4. Use continuous response options whenever possible
Continuous data is always more powerful than categorical data. If you’re measuring something that is inherently continuous, collect it at the most granular level possible. This is one of the easiest things you can do to get more powerful data, yet many survey design companies don’t emphasize it. Consider, for example, an incorrect and a correct way to ask participants their age:
Incorrect:
Q) How old are you (years)?
a) Under 18
b) 18 – 25
c) 26 – 42
d) 43 – 55
e) 56+
Correct:
Q) How old are you (years)? ______
You can always transform the continuous age variable into a categorical one after the fact, so why limit yourself by asking it as a categorical variable in the first place? The bracketed question doesn’t let you differentiate between important ages at all: suddenly 19 is the same as 25, and 27 is the same as 41. This makes the data crude and much less powerful. In fact, research on survey methods suggests that collapsing a continuous variable into categories is roughly equivalent to throwing away 50% of your data.
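For instance, here is a minimal sketch (using pandas; the column name and data are hypothetical) of collapsing ages collected as a free response back into brackets after the fact:

```python
import pandas as pd

# Ages collected with the free-response (continuous) version of the question
df = pd.DataFrame({"age": [19, 25, 27, 41, 44, 58]})

# Re-create the categorical brackets later, only if they are ever needed
df["age_bracket"] = pd.cut(
    df["age"],
    bins=[0, 17, 25, 42, 55, 120],
    labels=["Under 18", "18-25", "26-42", "43-55", "56+"],
)
print(df)
```

The reverse transformation (recovering exact ages from brackets) is impossible, which is the whole point.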
Continuous response options also eliminate anchoring. Anchoring happens when participants latch onto a number or value given to them because they lack the context to know whether that number is relatively high or low. Consider how the response scale “anchors” participants to certain values and might change their perceptions of what is normal for sexual activity (and how embarrassed they might then feel).
Incorrect 1:
Q) How many sexual partners have you had in your life?
a) 1-2
b) 3-4
c) 5-6
d) 7+
Incorrect 2:
Q) How many sexual partners have you had in your life?
a) 1-10
b) 11-20
c) 21-30
d) 31+
Correct:
Q) How many sexual partners have you had in your life? ___________
5. Allow for a dignified “I don’t know”
Roughly 40% of people will make up an answer rather than admit they don’t know what you’re talking about. It’s not that respondents lie intentionally; many of us will casually nod along rather than stop the conversation and admit we have no idea about the issue being discussed. Admitting ignorance can be embarrassing.
The remedy? Design your response scales to explicitly include a “don’t know” or “not sure” option. You get cleaner data, and participants get a more positive experience. Win-win.
Incorrect:
Q) What’s your opinion of California Proposition 8?
a) Strongly in favor
b) In favor
c) Against
d) Strongly against
Correct:
Q) What’s your opinion of California Proposition 8?
a) Strongly in favor
b) In favor
c) Against
d) Strongly against
z) I don’t have an opinion/I’m not familiar with Proposition 8
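On the “cleaner data” point: at analysis time, treat the “don’t know” option as missing so it doesn’t get averaged in with genuine opinions. A minimal sketch, assuming the responses were stored as numeric codes (the codes and variable names here are hypothetical):

```python
import pandas as pd

# Hypothetical coding: 1 = strongly in favor ... 4 = strongly against, 99 = don't know
responses = pd.Series([1, 2, 99, 4, 3, 99, 2])

# Treat "don't know" as missing rather than as an opinion
opinions = responses.where(responses != 99)  # becomes NaN wherever the code is 99

print("Mean opinion (don't-knows excluded):", opinions.mean())
print("Share unfamiliar with the issue:", (responses == 99).mean())
```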
6. Eliminate order effects
An order effect occurs when the question order changes how participants interpret and respond to items. Order effects are a part of almost every survey, though many people aren’t even aware of them. Consider the following questions from a survey on relationships and well-being, and their correlations:
Pair A:
(1) How many dates have you been on in the last 6 months?
(2) How satisfied are you with your life?
Correlation: .64
Pair B:
(1) How satisfied are you with your life?
(2) How many dates have you been on in the last 6 months?
Correlation: .32
Simply switching the order of these two questions cuts the correlation between them in half, which will have an enormous impact on the results. So what’s happening?
Pair A: participants are primed to think about something relatively specific – their past relationships – which changes their interpretation of the broad, subjective question that follows: “I haven’t been on many dates in the last 6 months… I guess my life isn’t that great.”
Pair B: participants are thinking globally about their lives, which does not affect their interpretation of the relatively specific, more objective question: “I have a good family, friends who are close to me, and I like my job, so I guess my life is pretty good… but I haven’t been on many dates in the last 6 months.”
The general take-away is that a survey should flow 1) from general to specific, and 2) from subjective to objective. This rule of thumb keeps earlier questions from biasing how respondents interpret later ones.
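If you do field both orderings (for instance, in a pilot), a quick check along these lines will show whether question order is driving a correlation. This is only a sketch; the file name and column names are hypothetical:

```python
import pandas as pd

# Assumed columns: "order" (which question came first), "n_dates", "life_satisfaction"
df = pd.read_csv("survey_responses.csv")

# Correlation between the two items, computed separately for each ordering;
# a large gap between the values signals an order effect
for order, group in df.groupby("order"):
    print(order, group["n_dates"].corr(group["life_satisfaction"]))
```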
7. Avoid “priming” people with important social identities
Reminding people of important social identities (race/ethnicity, religion, gender, etc.) changes their mindset as they answer questions and can dramatically affect your results. Thousands of psychological studies show that priming can change how people think, feel, and behave in research studies. For example:
- Asking respondents to report their gender in advance can make women, but not men, under-perform on standardized math and science tests (Spencer et al., 1999).
- Showing men pictures of attractive women makes them more interested in purchasing status-conveying products (such as expensive cars and watches; Dubois et al., 1993).
- Religious participants thinking about God answer questions more self-consciously (Shariff et al., 2007).
- After thinking critically about their own past “choices,” Americans – but not Asians – are more likely to victim-blame others (Savani et al., 2011).
Make sure your survey doesn’t provoke a particular social-group mindset. This is done by removing references to those groups and asking demographic information at the end of the survey. Never ask demographic information at the beginning.
8. Define exclusion criteria, stopping points, and which variables matter most before data collection
By deciding before the study which variables really matter, how you will include or exclude participants, and when you’ll stop collecting data, you’ll minimize the “wiggle room” that makes chance findings look meaningful. In particular, flesh out the following before you collect a single response (a small sketch of what this can look like follows the list):
- How many respondents you’ll be recruiting
- Which variables in particular will be the “go-to” answer for your research question
- What the “rule” will be for excluding participants
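One lightweight way to hold yourself to these decisions is to write them down as a frozen analysis plan before launch. A minimal sketch in Python (the numbers and rules below are placeholders, not recommendations):

```python
# Pre-specified decisions, written (and ideally time-stamped) before data collection begins
ANALYSIS_PLAN = {
    "target_n": 300,  # stop recruiting once this many complete responses arrive
    "primary_outcome": "belonging_scale_mean",  # the "go-to" variable for the research question
    "exclusion_rules": [
        "failed the attention-check item",
        "completed the survey in under 120 seconds",
        "duplicate IP address / repeat participant",
    ],
}

def should_stop(n_complete_responses: int) -> bool:
    """Stop recruiting as soon as the pre-specified sample size is reached."""
    return n_complete_responses >= ANALYSIS_PLAN["target_n"]
```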
9. Use a quality control question
People are motivated to take surveys as quickly as possible to get their payout and get on with their day. A good survey includes a quality control question that lets you identify respondents who breezed through without paying attention – respondents whose answers could undermine the validity of your survey (a sketch for screening them out follows the example). Here’s the one I use for my own surveys:
People vary in the amount they pay attention to these kinds of surveys. Some take them seriously and read each question, whereas others go very quickly and barely read the questions at all. If you have read this question carefully, please do not respond to the question below.
A) Watching television
B) Playing sports
C) Reading
D) Listening to Music
E) Exercising
F) Other ________________
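Screening on this item at analysis time is then trivial. A minimal sketch, assuming you stored whether each respondent selected any option on the check item (the file and column names are hypothetical):

```python
import pandas as pd

# Assumed column: "answered_attention_check" (True if the respondent selected any option)
df = pd.read_csv("survey_responses.csv")

# Anyone who answered the "do not respond" item wasn't reading carefully
attentive = df[~df["answered_attention_check"]]
print(f"Kept {len(attentive)} of {len(df)} respondents")
```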
10. Pilot test and get feedback!
Collecting data without pilot testing and getting feedback is like submitting a rough draft as the final term paper. Having a fresh set of eyes look things over, and making sure responses are being recorded correctly, is hugely important. Why?
- What seems obvious to you might be confusing or misleading to others.
- The survey should be tested to make sure it displays and flows properly.
- You can get an accurate assessment of how long the survey takes – and pay respondents accordingly.