SAES-422 Multistate Research Activity Accomplishments Report
Status: Approved
Basic Information
- Project No. and Title: WERA1010 : Improving Data Quality from Sample Surveys to foster Agricultural, Community and Development in Rural America
- Period Covered: 10/01/2017 to 09/30/2018
- Date of Report: 04/13/2018
- Annual Meeting Dates: 02/22/2018 to 02/23/2018
Participants
1. Ginny Lesser – Oregon State University
2. Shannon Norris – Texas A&M University
3. Lacy Roberts – Texas A&M University
4. Gladys Walter – Texas A&M University
5. Stacy DeWalt – Texas A&M University
6. Katie Dentzman – Washington State University
7. Kenny Wallen – University of Arkansas - Monticello
8. Ashley Yopp – University of Georgia
9. Lou Swanson (Administrative Advisor) – Colorado State University
10. Jan Larson – Iowa State University
11. Natalie Davis – Texas A&M University
12. Billy McKim – Texas A&M University
13. Todd Rockwood – University of Minnesota
14. Fern (Bunny) Willits – Penn State University
15. Yano Prasetyo – University of Missouri
16. Glenn Israel (Chair) – University of Florida
17. Don Dillman – Washington State University
Committee business
- Katie Dentzman volunteered to serve as the meeting secretary.
- A brief history of the group was provided for new attendees by Chair Glenn Israel. He described:
- Formally organized as regional research project 183, the group transitioned to WERA 1001 (a coordinating committee) for 5 years and has been renewed twice as WERA 1010.
- Several publications have come out of coordinated research projects
- An entire issue of Rural Sociology dedicated to measurement
- Testing of incentives in the late 1990s and early 2000s
- A number of articles on survey item nonresponse in Survey Practice
- 2016 special issue of the Journal of Rural Social Sciences
- One goal for the next two days - look for opportunities to collaborate and coordinate efforts on data collection and publication
- Registration fee for meeting room was addressed.
- Plans for dinner at the Skyline Country Club on Thursday evening were discussed.
- The chair requested that attendees provide materials for the annual report, including a Word document of the state report and any presentations given at the meeting.
Comments from Administrative Advisor
Lou Swanson reviewed the renewal process, and Glenn Israel reported that the renewal proposal would be completed and submitted by March 1, 2018. Swanson also emphasized the importance of this group demonstrating its impact:
- How do we impact Extension, for example, in the Western Region?
- Demonstrate our accomplishments clearly (e.g., for NIFA)
Thematic Issues for discussion
Effective survey communications (updated from 2017) – Don Dillman (WSU)
Dillman commented that the Census Bureau (and all survey organizations) are having issues with communication and trust. He noted that address-based samples have the best household coverage and that a mail contact can be used to push respondents to the web. Evidence suggests mail-only strategies have the highest response rates. Dillman argued for a comprehensive design strategy based on either social exchange or another theory; social exchange has not been tested against another comprehensive design strategy. One candidate is the 'Presuasion' theory of Robert Cialdini, which posits: 1) establish trust, 2) create a privileged moment, 3) transition from attention to response through linkage, and 4) use magnetizers, such as adding mystery to the task of responding.
Dillman said communication occurs via envelopes, letters explaining the request, the questionnaire, and any enclosures. This experiment focused on paper questionnaires and four letter contacts. The paper questionnaires had different cover pages (generic versus detailed with counties and pictures), different first-question layout and wording (a generic first question versus a magnetized first question about how great it is to live in West Virginia), and different callouts (questions without context versus a callout box with context and a signature). Likewise, the contact letters had different content, with the Presuasion version being much more personal: explaining the researcher's experience and connection with the area, referencing specific questions in the questionnaire, and mentioning specific discussions with local people. The main differences were in the first paragraph or two of the letters.
The experiment used a 2x2 design: Presuasion letter and Presuasion questionnaire, Exchange letter and Presuasion questionnaire, Presuasion letter and Exchange questionnaire, and Exchange letter and Exchange questionnaire. There were two versions of the survey instrument, 'What's next for Southern West Virginia?', in two formats – one social exchange and one Presuasion.
Analysis showed that Presuasion letters decreased response rates compared to Exchange letters, while questionnaire format made no difference: Exchange letter and questionnaire, 24%; Exchange letter and Presuasion questionnaire, 23%; Presuasion letter and Exchange questionnaire, 19%; Presuasion letter and questionnaire, 19%. The differences by letter type were statistically significant. In addition, there were no significant differences in item nonresponse or in answers to the 'additional comments' question.
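To make the letter-type comparison concrete, the sketch below shows how a reader might test whether response depends on letter type with a simple contingency-table chi-square. The per-arm mailing sizes are hypothetical (the report does not give them), so this is only an illustration, not the authors' actual analysis.

```python
# Illustrative only: tests whether response rates differ by letter type,
# pooling over questionnaire format. The per-arm sample size is assumed
# (hypothetical), since the report does not state it.
from scipy.stats import chi2_contingency

n_per_arm = 500  # assumed mailing size per experimental arm
rates = {
    ("Exchange letter", "Exchange questionnaire"): 0.24,
    ("Exchange letter", "Presuasion questionnaire"): 0.23,
    ("Presuasion letter", "Exchange questionnaire"): 0.19,
    ("Presuasion letter", "Presuasion questionnaire"): 0.19,
}

# Pool the two questionnaire formats within each letter type.
table = []
for letter in ("Exchange letter", "Presuasion letter"):
    responded = sum(round(rate * n_per_arm)
                    for (ltr, _), rate in rates.items() if ltr == letter)
    total = 2 * n_per_arm
    table.append([responded, total - responded])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```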
In the following discussion, Kenny Wallen said he did something similar with a web-based contact letter for his dissertation and found no statistically significant differences. It was asked, “Is there an issue of perceived authenticity?” Dillman suggested that further investigation on shaping letters is needed and that these ideas should also be tried with web-push methods. Other observations included:
- Color questionnaires do not increase response rate and are also more expensive
- Saying in the cover letter that taking the survey online saves the state money increases response rate
- The personalization of marketing emails and similar messages is making survey research, and Presuasion-style formatting in particular, increasingly difficult.
Online panels for testing items (Todd Rockwood, Minnesota)
A number of rural sociologists are using commercial panel samples. Rockwood described using online panels for question diagnostics – a stage before cognitive interviewing – on topics such as women's bladder health. Many MDs and PhDs were involved in item development and refinement, including many direct care providers but only two survey methodologists.
The study focuses on measurement issues: we can generally rate our overall health, but we are not aware of specific body parts or organs unless there is a health problem with them (the brain and the mouth are exceptions); generally, we have no clue what is going on with our bladders. A solution is to ask the right questions so that bladder health can be inferred. The approach uses the notion of the pathological to inform the understanding of the normal: rather than asking about the health of your bladder, ask about the quality of your bladder compared to others.
The project has to deal with a lot of measurement error – desirability, recall error, and exceptions. The online panel can help identify sources of error for questions and how to solve these issues, which can then be refined in subsequent cognitive interviews. The experiment involved 2,000 women on a panel to assess questions with regard to:
- Clear versus vague
- How much you had to think about the question
- What different parts of a question mean to you (e.g., ‘too frequently’, ‘have you ever’)
- Are respondents adding exceptions into the question? (e.g., pregnancy)
Bunny Willits asked whether we want to know objective or subjective bladder health – what is the point of these survey questions? Rockwood hopes to convince NIH to drop ineffective questions and to interpret the data correctly, given how respondents will perceive the questions.
Framing (message) and Response Rates - Kenny Wallen, Carena van Riper, and Elizabeth Golebie
The study targets anglers for a survey on aquatic invasive species in the Great Lakes. Research questions include: How can cover letters be framed to increase response rates? Does framing work, and if so, which frames? If the cover letter is framed to align with cultural values, will response rates go up? How does this play out across multiple contacts? Wallen proposes to use Cultural Value Theory, which has four main categories – Fatalism, Hierarchy, Individualism, and Egalitarianism – concerning how people should be grouped and how people should interact. The experiment has four versions of the cover letter, randomly assigned, using different language and images:
- Fatalist – individual needs and survival
- Egalitarian – save the ecosystem for group/future generations
- Individualism – make sure you get the most benefit
- Hierarchical – regulation and preservation of tradition
Wallen plans to compare the value framing of the cover letter to respondents' self-reported values. The study will not use incentives or framing within the questionnaire itself.
During discussion, the group raised the question of whether the cover letter might influence how respondents self-report their values; that is, the cover letter may have already framed – primed – their responses. On the other hand, it might be a more tailored, authentic version of ‘Presuasion’. In addition, social cognitive theory might prove a better fit than social exchange theory for the study.
Don Dillman said the letters won’t make much difference, and even less so if the message is buried halfway into the cover letter; put it in the first paragraph. He also suggested using more direct, less academic language and changing the messages in the second and third mailings. Dillman also noted that offering the option to go online OR complete a paper survey will reduce the response rate. Ginny Lesser recommended using paper only for this population. Glenn Israel suggested the non-manipulated parts of the cover letter are already value-laden, so it is important to make sure the message is not confounded. Todd Rockwood suggested putting the framing message on the envelope so it has a higher chance of being read; he opined that people don’t tend to actually read cover letters, but there was disagreement on this point. Finally, Lou Swanson said ‘invasive species’ is not a good term; he noted that brown trout and zebra mussels are both invasive but have completely different connotations, so it might be a good idea to add ‘such as zebra mussels’.
Ginny Lesser (Oregon)
Lesser reported on a multi-mode study comparing general and specific ordering of questions, using a survey on highway conditions. The experiment had two groups: 1) the general question first and then the specific questions, and 2) the specific questions and then the general question. The items asked about the condition of highways, with one on the general condition of highways and others on specific issues related to highway conditions (litter, potholes, etc.). The sample size was 4,000; half got all mail, and half got a letter directing them to the web, followed by mail for non-respondents. She found a significant impact of question order (general last -> higher satisfaction rating) but no impact of age, gender, or delivery mode, except for an interaction with gender: women reported being more satisfied when the general question was last.
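For readers unfamiliar with how such an order effect and order-by-gender interaction are typically tested, a minimal sketch follows. It assumes a respondent-level data file with hypothetical column names (satisfaction, general_last, gender, age_group, mode) and uses a simple OLS model for illustration; it is not Lesser's actual analysis.

```python
# Minimal sketch (not Lesser's code): test the question-order effect and an
# order x gender interaction with an OLS model of the general satisfaction
# rating. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("highway_survey.csv")  # assumed respondent-level data

# general_last = 1 if the general question appeared after the specific items.
model = smf.ols(
    "satisfaction ~ general_last * gender + age_group + mode",
    data=df,
).fit()
print(model.summary())  # inspect the general_last:gender interaction term
```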
In the ensuing discussion, Don Dillman commented that asking the generic question last makes sense because the specific questions establish recall first, but Ginny Lesser said that sometimes you want that first impression. Todd Rockwood also disagreed with Don, saying he thinks the general question should be first. Likewise, Bunny Willits said she prefers the general question first because otherwise you are priming respondents too much.
Lesser also reported that she has been comparing response rates on this survey over time since 1999. The response rate over 16 years has steadily gone down, and the decline is consistent across gender and age.
For the next year, Lesser has several studies planned, including a DMV survey comparing mixed versus single mode that will use a double-sided Spanish/English instrument. She plans to assess order effects, as well as repeat the probability vs. non-probability panel comparison. In the initial study, there was evidence that web first, then mail is the most cost-efficient approach for probability samples, while the non-probability panel was by far the cheapest.
Applying an herbicide resistance survey in the Pacific Northwest – Katie Dentzman (Washington)
- Weed management survey
- Revision of previous survey questions
- Trying to figure out where to get an effective sampling frame and maybe it doesn’t need to be probabilistic
CSSM Report - Jan Larson (Iowa State)
- Data Science Services is the biggest part
- Survey Research Center is a smaller part doing human subjects research
- Currently doing a LOT of web surveys with mixed success
- Successfully transferred the faculty ‘hours worked’ study to an online format
- Renamed from ‘survey’ to ‘report’ online
- Provide a paper ‘worksheet’
- Has definitely increased response rate
- Issues with NASS mean CSSM is getting more business and is swamped with mail surveys
- Do some telephone surveys, but that work has fallen off significantly
- Ex.: Iowa Farmland Ownership and Tenure Study
- A lot of observational and field data collection
- Ex.: Surveys in driver’s license offices
- People tend to be cooperative; quite a few responses are obtained in a relatively short time at relatively low cost
- Mixed field, phone, mail, and device data collection
- Looking at what people over- or under-report regarding physical activity
- Testing 24-hour recall
FEBRUARY 23rd
Comparing multi-mode and single-mode probability-based surveys with a non-probability survey - Ginny Lesser (Oregon)
Lesser suggested that adding a non-probability sample (such as panel data) to a probability sample might be a way of dealing with increasing nonresponse. Her research included a probability sample with half in a mail-only group and half in a web+mail mixed-mode group. The population was adults in Oregon, surveyed about satisfaction with highway maintenance, and 7,400 addresses were purchased from MSG. The non-probability sample was purchased from Qualtrics, whose panel members are recruited from business partners; Lesser requested Oregon residents with specific demographics. The probability sample obtained a 27% response rate for the mail-only group and 21% for the mail+web group. An incentive, an “I love Oregon” sticker, did nothing to change the response rate. For the non-probability sample, 7,250 invitations were sent to obtain the 500 paid responses. With regard to the results, 66% of panel estimates fell outside the confidence intervals of the probability sample. Overall, panel respondents were more positive (i.e., they tended to say they were more satisfied than the non-panel respondents). Panel respondents also were more likely to support ideas to generate new revenue for road maintenance, while the probability sample was more likely to give no answer.
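One way to quantify the "66% of panel estimates outside the confidence interval" comparison is to compute, for each survey item, a 95% confidence interval around the probability-sample proportion and check whether the panel's point estimate falls inside it. The sketch below is a generic illustration with made-up input values and an assumed data layout, not Lesser's analysis code.

```python
# Illustration: share of panel point estimates falling outside the
# probability sample's 95% confidence intervals. The item values below are
# hypothetical placeholders.
import math

def ci_95(p: float, n: int) -> tuple:
    """Normal-approximation 95% CI for a proportion."""
    half = 1.96 * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# (probability-sample proportion, probability-sample n, panel proportion)
items = [
    (0.42, 1800, 0.55),
    (0.31, 1800, 0.29),
    (0.67, 1800, 0.78),
]

outside = sum(
    not (ci_95(p_prob, n)[0] <= p_panel <= ci_95(p_prob, n)[1])
    for p_prob, n, p_panel in items
)
print(f"{outside / len(items):.0%} of panel estimates outside the 95% CI")
```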
Lessons learned while planning and conducting a survey of Florida residents about climate change options - Glenn Israel (Florida)
Glenn Israel noted that the UF/IFAS Center for Public Issues Education uses Qualtrics opt-in samples. Qualtrics quota opt-in samples are extremely cheap ($5 per response) and very fast (500 completes in a week to 10 days). He is comparing these types of samples to a probability address-based sample on the topic of climate change. The Qualtrics non-probability sample requested a Florida population with specific demographics and excluded ‘speeders’ and those who failed an attention-check question. Of the more than 8,000 people contacted, 856 accessed the survey and 514 useable responses were obtained. The probability address-based sample was purchased from SSI for the mail (n=1,500) and mixed-mode (n=500) survey. From this sample, 317 useable responses were collected (a 17% response rate). With regard to the results, the overall item response rate was similar, but the quota sample was more complete; one reason might be a norm among opt-in panelists that you complete the entire survey. On the other hand, address-based survey respondents had a lower response rate to the open-ended comment item at the end of the survey but a much higher median number of words than the opt-in quota sample, and their comments were more substantive. Address-based sample respondents also were more likely to have a higher score on a true-false climate change knowledge index than the opt-in panel. Finally, the two samples generated different distributions of people along the Six Americas Scale. Overall, the results were similar to what Ginny Lesser found: panels are cheap and fast, with low item nonresponse, but there are serious concerns about data quality. One might speculate that panelists are less interested in the subject and more interested in getting their monetary reward for being in the panel. Consequently, there is a need to learn about the incentive structure and how panels are recruited.
Yano Prasetyo – What do rural Missouri residents view as the assets and challenges of their communities?
Yano Prasetyo is working with Hua Qin on comparing perceptions with objective measures of community. They used Qualtrics to survey community leaders via an email sample and used factor analysis to identify important issues for Missouri residents. Respondents generally think their community doesn’t need to change, but could if necessary. They love their community, but are concerned about the future. Community issues include jobs, housing, lack of choice, poverty, and drugs.
Effects of follow-up contacts on sample characteristics and substantive research findings in mail surveys: An exploratory analysis – Bunny Willits and Gene Theodori
They focused on mail surveys, which remain the dominant mode of data collection. Willits noted that multiple contacts are one of the most effective ways to increase response rates. Their research questions asked, “Do demographic characteristics predict who responds to the 1st, 2nd, and 3rd mail requests to complete a survey?” and “Is the relationship between demographics and attitudes predicted by which mailing respondents reply to?” Willits noted that a number of fields (e.g., psychology and medicine) conduct research without representative samples. She raised the question, “Does representativeness really matter?” and noted that we are often interested in relationships between variables for testing social theories.
Willits reported on data from two surveys. The Pennsylvania study examined knowledge of gas drilling and found the most important predictors in wave 1 were gender, education, and income. In waves 2 and 3, the most important predictors were gender, income, and education, suggesting that the same predictors are important across waves. The Texas study showed no significant demographic differences by wave except for education, with the second wave getting more responses from low-education respondents. Willits concluded that there is consistency across waves in terms of relationships. She argued that if we are increasing the number of responses but NOT the diversity of responses, what is the point? We have to know whether we are or aren’t increasing the diversity of responses, and we need better data to determine whether different waves respond to non-demographic questions differently. Finally, Willits solicited committee members for data sets with response dates so this study can be replicated.
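A common way to examine Willits's question of whether the same demographic-attitude relationships hold across mailing waves is to interact the demographics with a wave indicator and test the interaction terms jointly. The sketch below is a generic illustration with hypothetical column names, not the authors' analysis.

```python
# Generic sketch (not the authors' analysis): check whether the demographic
# predictors of an attitude differ by response wave by interacting each
# demographic with the wave indicator. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mail_survey_waves.csv")  # assumed respondent-level data

restricted = smf.ols("attitude ~ gender + education + income + C(wave)", data=df).fit()
full = smf.ols("attitude ~ (gender + education + income) * C(wave)", data=df).fit()

# F-test on the interaction terms: a small p-value would indicate the
# demographic-attitude relationships shift across waves.
f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, df diff = {df_diff}")
```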
A picture is worth 150 responses - Stacy DeWalt and Gladys Walter (Texas)
The research explored differences in social media use in personal and professional contexts, with data collection conducted at the Country Radio Seminar. The researchers used Qualtrics to disseminate the survey at the seminar and offered a professional social media headshot as an incentive to participate. A convenience sample of 150 people at the conference took the survey. The researchers used a poster to prominently advertise the incentive, a professional-quality headshot, and reported that most participants came to them rather than being directly solicited. Advantages of the incentive include that it is relatively cheap, that students gained photography experience, and that it provided a networking opportunity. Disadvantages included the large amount of time spent editing and emailing photos, the equipment needed, and constraints on survey length (it can’t be too long). The findings suggest several implications: the method can be useful for collecting data at conferences and industry events; it is important to connect the theme of the survey to the incentive being offered and to ensure the incentive is useable and practical; and the survey should connect to an interest of the conference organizers.
How do millennials check their mail? - Stacy DeWalt (Texas)
This study used iPads and Qualtrics for an intercept survey of students on campus and obtained 1,700 completes. The researchers found that women tend to check their mail more often than men, that those in the 20-23 age range tend to check mail weekly or daily, and that millennials overwhelmingly check their email on their smartphones.
Sexual orientation and gender questions in rural areas - Billy McKim and Ashley Yopp (Texas)
They noted that the current administration removed sexual orientation and gender identity questions from a national aging survey. They led a discussion of how the inclusion or exclusion of these kinds of items might impact response rates in rural communities. Specifically, the question was asked, “How does a binary gender item perform in comparison to the item suggested by the Human Rights Campaign?”
Eye tracking: Do order and configuration matter? - Billy McKim (Texas)
McKim reported on an eye-tracking study of question format, examining items that ask about importance then ability versus ability then importance. The two formats for the double (side-by-side) matrices were importance on the left and ability on the right versus ability on the left and importance on the right. The research subjects were college students, and recruitment is tough when your study is called ‘eye-tracking’. The data collection involves recording a lot of information, which results in massive amounts of data; this creates problems with storage and analysis, even with 30 respondents. The analyses examined 1) total focus duration, 2) number of focal points, 3) duration of focus per focal point, and 4) pattern of focal points. The results suggest that a double matrix isn’t necessarily less work – respondents go back and check a lot, so it isn’t saving them any time. Discussion included what happens with matrices on page 1 and page 2 versus side by side, whether a different visual design might change where people’s eyes go and how much work they do on various matrices, and whether there is a cultural component, such as social media, that is changing how people consume information on a survey.
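The four metrics McKim listed can be computed from a table of fixation records (one row per fixation, with a duration and an area-of-interest label). A minimal sketch under that assumed data layout is shown below; the file name and column names are hypothetical, and this is not the authors' analysis code.

```python
# Sketch under an assumed data layout (not the authors' code): compute the
# four eye-tracking summaries per respondent and area of interest (AOI):
# total fixation duration, number of fixations, mean duration per fixation,
# and the ordered pattern of AOIs visited.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # assumed columns: respondent, aoi, duration_ms, order

summary = (
    fix.groupby(["respondent", "aoi"])["duration_ms"]
    .agg(total_duration="sum", n_fixations="count", mean_duration="mean")
    .reset_index()
)

# Pattern of focal points: the sequence of AOIs each respondent visited.
patterns = fix.sort_values("order").groupby("respondent")["aoi"].apply(list)

print(summary.head())
print(patterns.head())
```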
Plans for next year’s meeting
- February 21 and 22 – Thursday and Friday at the Best Western
- The meeting is conditional on renewal of WERA 1010
Accomplishments
An agenda was developed and the coordinating committee held its annual meeting in February 2018. Participating members discussed several important topics affecting error in agricultural and rural surveys, including effective survey communication and the comparison of online opt-in panels with address-based sample surveys. In addition, members reported on survey research studies being conducted or planned in their states and provided feedback to others. Members from several states discussed plans for studies, including additional comparisons of nonprobability samples in online surveys with address-based probability samples using mail and/or mixed-mode surveys in order to assess the strengths and weaknesses of these technologies; assessing the utility of different theories for inviting people to respond to a survey; assessing the order of concepts in question stems and responses for satisfaction items; and assessing the effects of follow-up contacts on sample characteristics and substantive research findings, as well as other topics.
During the year, committee members were active in publishing research in journal articles, presenting papers and posters at relevant conferences, and developing educational materials available to Extension professionals and the public. This includes publishing 3 survey methods-related articles, updating 9 publications for Extension and outreach audiences, and giving 2 presentations at professional conferences where attendees are members of the target audience for this project. The Extension publications are part of the web-based Savvy Survey Series, in which the 20 papers (incorporating WERA research) have generated over 28,000 visits. Colleagues report using these publications for workshops or professional development in Arkansas and Kentucky. In addition, the member from Florida conducted a 1-day workshop for new Extension professionals on developing quality questionnaires, implementing surveys, and analyzing survey data, reaching approximately 40 persons each year and incorporating WERA 1010 research. Finally, a member from Washington taught a survey methods class for 35 students that makes extensive use of WERA 1010 research.
Impacts
- Recipients of the research findings and outreach activities of coordinating committee members have more accurate information for making decisions about conducting surveys and/or assessing the strengths and weaknesses of survey data. This, in turn, can contribute to appropriate project- and policy-level decisions.
Publications
- Dillman, D.A. 2017. The promise and challenge of pushing respondents to the Web in mixed-mode surveys. Survey Methodology, Statistics Canada, Catalogue 12‑001‑X, Vol. 43, No. 1. Paper available as PDF (English) at: http://www.statcan.gc.ca/pub/12-001-x/2017001/article/14836-eng.pdf.
- McMaster, H., LeardMann, C. A., Speigle, S., & Dillman, D. A. 2017. An Experimental Comparison of Web-Push vs. Paper-Only Survey Procedures for Conducting an In-Depth Health Survey of Military Spouses. BMC Medical Research Methodology. Available at: https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-017-0337-1.
- Newberry, III, M. G., & Israel, G. D. 2017. Comparing Two Web/Mail Mixed-Mode Contact Protocols to a Unimode Mail Survey. Field Methods, 29(3), 281-298. Prepublished June 5, 2017. doi: 10.1177/1525822X17693804.