SAES-422 Multistate Research Activity Accomplishments Report

Status: Approved

Basic Information

Participants

Virginia Lesser (Chair: Oregon) lesser@stat.orst.edu; Jim Christenson (AES Advisor: Arizona) jimc@ag.arizona.edu; John Allen (Utah State) johna@ext.usu.edu; Shorna Broussard Allred (New York (Cornell)) srb237@cornell.edu; Don Dillman (Washington) dillman@wsu.edu; Courtney Flint (University of Illinois (Urbana-Champaign)); Glenn Israel (Florida) gdi@ifas.ufl.edu; Fred Lorenz (Secretary: Iowa) folorenz@iastate.edu; Bob Mason (Oregon State) masonr@stat.orst.edu; Dave Peters (Iowa) dpeters@iastate.edu; Rob Robertson (New Hampshire) robertr@cisuix.unh.edu; Steve Swinford (Montana) swinford@montana.edu; Marilyn Smith (Nevada) smithm@unce.unr.edu

Jim Christenson: opening comments. Jim reported that the new proposal was officially accepted as WERA1010, but that there had been criticisms of the outreach components, and he underscored the need to document outreach activities. The committee discussed ways to ensure the usefulness of the surveys; the WERA1010 committee members agreed to address this issue and made it an agenda item. The project is approved for the next five years.

The meeting ended by discussing topics for next year. IRB: Courtney noted that members are picking up on variation in IRB requirements across institutions and asked what the committee should do. Steve recalled that the job of an IRB is to protect human subjects, not to rewrite research proposals; the question is what is acceptable at different institutions. John asked whether the committee could take an inventory and document the differences; Don suggested that one way to move ahead is to make it an agenda item. Don raised another agenda item: how should experiments on open-ended questions get done? The committee considered how such a study should be conceptualized and what the cells in the study would be, thinking about the variables and how they should link together. Candidate response variables include words, themes, and elaborations (on counting words, the working rule was the number of spaces plus one, or a direct count). Candidate predictors include answer-space size, box versus no box, lines versus no lines, and motivated versus unmotivated respondents; Steve noted that early respondents tend to write more words, and gender and handwriting may also be variables. The experiment would be mail only. There is a big set of issues here. Delivery sequence files (DSF): Ginny asked whether any further studies are planned; none are certain at this time. The committee has tentatively agreed to meet again February 25-26, 2010, at the same place, the Best Western on Oracle in Tucson.
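For reference, the sketch below illustrates the simple response measures discussed above (character count, word count, and the "number of spaces plus one" approximation). It is only an illustration; the function name and example text are hypothetical and are not part of the committee's materials.

```python
def open_ended_measures(response: str) -> dict:
    """Illustrative measures for one open-ended answer: characters, words,
    and the 'number of spaces + 1' approximation mentioned in discussion."""
    text = response.strip()
    return {
        "n_characters": len(text),
        "n_words": len(text.split()),                      # direct word count
        "n_words_spaces_plus_1": (text.count(" ") + 1) if text else 0,
    }

# Example (hypothetical response text)
print(open_ended_measures("Extension staff helped us identify the pest quickly."))
```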

Accomplishments

Postal delivery sequence sampling compared with RDD. The committee began its discussions by addressing the decline of the telephone as a means of collecting survey data. In 2008, about 18% of households nationally had no landline, and the proportion with landlines continues to fall. One alternative is to use the USPS delivery sequence file (DSF) as a sampling frame. The DSF covers about 97% of households and can be used to deliver mail questionnaires, but it does not contain names. One theme in the discussion that followed was using the DSF to address internet coverage: Don Dillman and Ginny Lesser noted that coverage in internet surveys is a serious problem, since only about 62% of households have internet access at home.

Mixed-mode web surveys. Don reported on joint research with Benjamin Messer on the effectiveness of mail contact procedures in obtaining survey responses by web. Don circulated his PowerPoint presentation and a copy of the Washington Community Survey (WCS) questionnaire on which his presentation was based. A major theme of his report, and of the meetings in general, concerned comparisons of telephone, mail, and web or web/mail surveys. In an earlier survey reported last year, Don and his colleagues used the DSF to obtain a statewide sample of households in Washington and obtained a 70% response rate with a mail questionnaire. In the present study they looked at mail and web combinations and also used incentives in a factorial design. They personalized questionnaires for the four regions of the state by naming the region on the cover, and the web and paper questionnaires looked the same. When contacted, respondents had a choice of mail or internet, and detailed instructions for responding by internet were provided. Don included a handout with response rates by mode (mail vs. web) with and without incentives. The essential findings were that mail questionnaires achieved a higher response rate than web, and that incentives increased all response rates, especially internet responses. One conclusion is that we cannot simply go to the web, because the demographics of web respondents are very different. Don summarized his major findings: (1) a $5 incentive improves web response rates by about 7%; (2) a URL works about as well as a web card; (3) there is no difference by time of year; (4) withholding the mail questionnaire drives significant numbers of respondents to the web; (5) mail alone has the highest response rate; (6) non-response error is substantial along demographic lines, especially among internet respondents; and (7) item non-response was about the same across modes. Don concluded with some thoughts on what to do next: additional research could (1) try different topics, (2) implement more intensive follow-up procedures, and (3) try longer and more complicated questionnaires.

Glenn Israel continued this theme by discussing his work on extension customer satisfaction over the past six years as it relates to differences in mode. His extension service is interested in web surveys and enthusiastic about Survey Monkey. In the past, his surveys have manipulated the spacing (box size) for open-ended questions; this year his research asked questions about reducing data collection costs, quality differences by mode, and demographic effects by mode. Glenn explained his research design. The options for the first contact were (1) mail only, (2) a choice of mail or web, and (3) web preference, with a URL and PIN number only. In the second contact, the third group was offered a paper questionnaire as well.
There were differences by mode: the mail-only group's response was over 60%, and 45% of the web-preference group responded by mail at the second contact. All groups ended up with nearly the same response rate, but the web groups had higher early returns. Overall, Glenn realized a small net savings by including web surveys, because 130 respondents quickly completed web surveys and required no costly follow-up. Item non-response was 3-4% in all modes, with no significant differences. Demographic variables (age, sex, and ethnicity) were examined, and the only differences were by age and sex: younger respondents and women were more likely to complete the web surveys. In addition, more highly educated and urban respondents were more likely to use the web. Web respondents were more likely to use extension and more likely to have visited Extension's Solutions for Your Life website. On satisfaction, web clients were slightly more likely to say "very satisfied" rather than "satisfied" or less. Glenn's questionnaires also contained two open-ended questions. Over the series of questions, web respondents gave more words and more elaborations in response to a question about problem solutions, and more words and themes in response to a question about suggestions for improvement. Glenn connected this to leverage-salience theory, which argues that the invitation to give feedback may carry more importance for some clients than for others; web respondents may have given more extensive responses because they were more likely to be users of extension services. Glenn also concluded that web surveys cannot be done alone (without mail follow-up); however, data quality was the same and some errors were reduced. One additional thought: the web cover letter is more complex and more negative, and that may affect responses. Discussion followed regarding the best way to design mixed-mode studies.

Ginny next reported on three studies of Oregon populations sampled using the USPS DSF. The 2008 Department of Transportation study sent out 1,000 questionnaires in each of four treatments: (1) mail, (2) web/mail with letter instructions, (3) web/mail with special 5-step instructions, and (4) telephone. The four response rates were 35.5%, 28.5%, 22.8%, and 31.4%, respectively. Ginny passed around the special, detailed instructions, which were compared with standard instructions; the special instructions did worst. Missing data for mail and web/mail were trivial, with no real difference. Which mode works best in reflecting the state's demographic distribution? For the 2008 survey, and a similar 2006 survey, the sex distribution of mail and web/mail respondents was much closer to that of the population than the telephone respondents'; for employment, telephone was more in line with the population than mail or web. Ginny also found differences between modes in responses to questions, the differences being with the telephone, where the first response category is selected relatively more often (a primacy effect). What about cost? The cost of a phone survey is known, and Ginny knows what it takes to get the same number of mail and web/mail responses. Her costs per response were $45.44 by phone, $23.96 by mail, and $20.89 by web/mail, so the cost ratio of mail to phone is 52.7% and of web/mail to phone is 46%: mail is cheaper than phone, and web/mail is cheaper still. Web/mail is cheaper because those who first answer on the web do not get a mail follow-up, and the web/mail advantage is greater for large-sample surveys. The bottom line on why to use the web is cost: postage and printing are expensive, and if people can be moved effectively to the web, the costs of surveys decline. In a follow-up discussion, Ginny pointed out that post-stratification weighting remains important: in all analyses, and for all modes, it is important to account for demographic differences between the sample and the population by post-stratifying the sample.
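To make the post-stratification step concrete, here is a minimal sketch of the usual calculation: each respondent in demographic group g is weighted by the population share of g divided by the sample share of g. The groups, shares, and scores below are invented for illustration and are not taken from any of the surveys discussed.

```python
# Minimal post-stratification sketch (illustrative numbers only).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical respondents: (age group, satisfaction score 1-5)
respondents = [("18-34", 4), ("35-54", 5), ("35-54", 3), ("55+", 4),
               ("55+", 5), ("55+", 2), ("55+", 4), ("35-54", 5)]

n = len(respondents)
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / n
                for g in population_share}

# Weight = population share / sample share, so the weighted sample
# matches the population's demographic distribution.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted_mean = sum(score for _, score in respondents) / n
weighted_mean = (sum(weights[g] * score for g, score in respondents)
                 / sum(weights[g] for g, _ in respondents))
print(f"unweighted mean = {unweighted_mean:.2f}, "
      f"post-stratified mean = {weighted_mean:.2f}")
```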
In Ginny's second study, she reduced the 5-step web instructions to a 2-step instruction; the results were mail, 31.1%; web/mail without insert, 26.0%; and web/mail with insert, 21.1%. Among the youngest age group (18-25), which constitutes 13.5% of the population, she recorded response rates of 1% by mail, 3.1% in the web/mail group without the insert, and 0.08% in the web/mail group with the insert. Ginny concluded with a study of the travel experiences of the disabled. She obtained 1,043 responses from MS patients who have flown on a plane. They had been randomized into mail-only, mail/web, and web/mail groups; of these, 754 had both a mailing address and an e-mail address. The number of completes after the first mailing was highest for mail (162/251 = 65%), next for mail/web (127/251 = 55%), and lowest for web/mail (40/252 = 16%). The final response rates were about the same (73-77%) for the three groups. A discussion followed about the ideal circumstances for comparing modes. One place where a good comparison between web and mail surveys could be made is with distance-education participants, for whom the web is part of daily life: compare mail only with web only, and possibly add mail/web and web/mail groups. In further discussion, Don reported asking focus group participants why they prefer mail questionnaires; the responses were accessibility, tactile handling, and that mail is easier and more comfortable. He also asked why people who choose the web do so; the responses were the same, plus the ability to multitask.

Open-ended survey questions. Courtney Flint prepared a study comparing open-ended questions with lines and with boxes. Her study was done on two different forestry surveys in Illinois, distributed to some who had participated in 2004 and to some first-time respondents, but respondents were not randomized into line and box groups. With respect to the open-ended questions near the end of the questionnaire, respondents were much more likely to offer responses if they had lines (21.9% vs. 9.7%), but there were no differences in the number of characters written. John Allen reported having addressed the same issue with data collected in east-central Utah. He and his colleagues compared box versus no box and lines versus box. About 63% of the respondents were male, which is not what most surveys report. The open-ended questions had to do with positive and negative themes, which were coded from simple to complex. So, does box versus line matter? They did not find significant differences in the number of words written between box and no box, or between box and lines. Women were more likely to provide more responses. Discussion followed about the conditions under which box size makes a difference and under which conditions lines encourage more responses and more complex responses.
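As a point of reference for these format comparisons, the sketch below shows one way word counts from two answer-space formats might be compared. The counts are invented and the Welch t-test is simply one reasonable choice; this does not reproduce the actual analyses reported by Courtney, John, or Glenn.

```python
# Hypothetical word counts for an open-ended item under two formats
# (illustrative numbers only, not data from the studies discussed).
from scipy import stats

words_lines = [12, 0, 25, 8, 0, 31, 14, 9, 22, 0, 17, 11]
words_boxes = [5, 0, 18, 0, 7, 26, 0, 9, 3, 15, 0, 6]

mean_lines = sum(words_lines) / len(words_lines)
mean_boxes = sum(words_boxes) / len(words_boxes)

# Welch's t-test does not assume equal variances across the two formats.
t_stat, p_value = stats.ttest_ind(words_lines, words_boxes, equal_var=False)
print(f"lines mean = {mean_lines:.1f}, boxes mean = {mean_boxes:.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```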
In response to this discussion, Glenn Israel recalled using a 2x2 design to examine box versus no box and extra versus no extra verbal instructions. There were two open-ended questions, and the number of words, themes, and elaborations was measured. The box generated more words, themes, and elaborations for one question (how the problem was solved). When the extra verbal instructions were added, there were significant effects favoring the boxes: the number of words and the level of elaboration showed significant effects, but themes did not. Glenn has not sorted all of this through yet, but the boxes seem to generate more responses. Ginny did two open-ended experiments, each with three questions, using 2x2 factorial designs: box versus lines, and lines versus nothing. For average number of characters, only one of the three questions showed an effect, and the two no-box treatments had more characters. In the second experiment, she compared boxes and no boxes, and nothing was significant. Overall, more words and commentary seem to follow from the no-box condition.

State reports. Steve Swinford (Montana) discussed web-based surveys, including an experiment on the middle category. Two panel-design studies are coming up: one on gender roles, sexual attitudes, and alcohol behaviors leading to a planned intervention, with room for several general/specific items; the second is a first-year-initiative study in which some scales are being tested, with a likely sample size in the 300-400 range. His class is currently doing an evaluation of a textbook; some of the questionnaires have lines (with 88 points) and some have 5-point SD-SA scales, so the two can be compared.

Don (Washington) circulated his state report outlining his current studies and his plans for the coming year, a 10-point summary of work in progress and planned work. To highlight, Don pointed out that he will be moving away from issues relating to visual design and will instead focus on ways of using address-based sampling to improve survey response rates and keep non-response low. He is planning research on whether offering respondents a choice of modes (mail and web) lowers or increases response rates.

Shorna Broussard Allred (New York) reported on several themes. Three projects for next year are (1) a survey of officials in New York, half of whom have e-mail addresses; (2) a survey of forest owners; and (3) human dimensions research dealing with recall bias. Shorna handed out two tables. One showed response rates at different times of year, ranging from 37.9% to 48.8%. Surveys conducted over portions of the year (seasonal phase surveys) focused on attitude questions and some descriptive data, whereas the annual survey covering the whole year focused on descriptive aspects of fishing (where, when, how much, etc.). Each phase sample and the annual sample are independent and drawn from license purchasers. Questions and answers followed. Can a correction factor be created to get at overestimation and underestimation? Ginny asked whether any cognitive interviewing on fishing recall had been done; Shorna noted that the advice was provided by the agency staff rather than the fishermen, and they suggested three recall periods per year. Ginny observed that annual estimates are lower than the sum of the shorter-term recalls and asked why, and whether diaries had been tried; Shorna said they had not. Don noted that the problem with diaries is recovery; they are often lost. Concluding advice: the changes obtained from the seasonal reports may be valuable even if the absolute values are not accurate.

Marilyn Smith (Nevada) is an extension specialist and has been successful in outreach. Her examples of surveys include providing assistance to other field staff. She offered an example of how surveys are sometimes implemented in rural areas: in one small rural place, the postmaster put a survey in each box in the post office at no cost, and the result was a good response rate.
Marilyn reported that she gets calls from others asking for help with impact assessment. She has a program through the BLM to provide programming that develops work skills among young people. These programs offer special challenges for researchers: for example, one funding agency requires Marilyn and her staff to use the agency's survey instrument, which includes poorly worded questions such as "how wrong is it to use drugs?", and on some occasions questions about the "neighborhood" are interpreted by adolescents as referring to their gang.

Courtney Flint (Illinois) reported on some of her recent surveys. They included the Kenai bark beetle re-study in six communities, with a 42% response rate and some panel data; the panel respondents changed less than the difference between new (wave 2) and original (wave 1) respondents. Another study, the Southern Illinois Private Forest Landowner survey, was conducted on forest parcels and had a 48% response rate. She used Survey Monkey to conduct a national community emergency response team survey, with a 28% response rate. A rural China village household survey was done by a Chinese graduate student; it had recall issues dealing with sustaining one's household and achieved a 95% response rate. Courtney also taught a field methods class in which students did a face-to-face survey at the Deer Festival in Golconda; it was a great lesson in sources of error. In an Alaska tribal survey of three communities on berries and climate change (funded by the EPA), they are doing a drop-off/pick-up survey as well as a mail component, which raises the question of whether the methods are compatible; in some communities they want to use both methods in order to involve the children of the community. She outlined several surveys planned for next year, including a Colorado mountain pine beetle re-study in nine Colorado counties.

Glenn Israel (Florida) elaborated on the analysis from the previous day. He found some evidence of an interaction effect on the propensity to respond to an open-ended question: whether a box was used when provided depended on whether the respondent was male or female, with females using the boxes more often. He also looked at some of the distributions and found outliers: three respondents wrote enormous numbers of words and skewed the means. The number of words on the web was higher than on the mail questionnaire.

Rob Robertson (New Hampshire) reported on the burden of federal regulatory requirements for information from fishermen. He asked about their preferences for ways of providing information: 57% of the 65% who responded preferred a mail survey. Those who preferred internet surveys completed more internet surveys (4.1 per year, compared with 1 per year), and those who preferred mail surveys likewise completed more mail surveys. Rob continues to work with others on virtual rulemaking through a wiki site that solicits public comments. He also did a large watershed survey (3,000) with one mailing.

Bob Mason (Oregon) is looking at marketing studies, especially at Oregon State, which is promoting its expertise and doing seemingly unethical studies, especially in dealing with smoking.

Dave Peters (Iowa/Nebraska) reported on a survey in which he oversamples communities with large fractions of underrepresented populations. Dave is also on an NRI grant on recruitment and retention of employers. The surveys will oversample smaller communities and administer a questionnaire covering 10 occupation categories, the number of openings in each, and job requirements and skills.
Dave showed the committee the survey instrument they are developing. They will likely develop two forms of the survey, one shorter than the other; the survey is conducted by the University of Nebraska.

Fred Lorenz (Iowa) reported on Ginny's Oregon DOT surveys, which include a general question that precedes and follows six specific items, with four replications. Consistent with previous studies, means on the general question were higher in the specific/general (S/G) condition than in the general/specific (G/S) condition. When the general question was regressed on the specific items, the R-squared was higher in the S/G condition than in the G/S condition. There was no convincing evidence of spillover from the general question in the G/S condition and no evidence of recency effects in the S/G condition.

Ginny Lesser (Oregon) reported on changing response rates over the past decade. The adjusted response rates have continued to decline (about 1.8% over the decade since June 2001), but the rate of decline appears to be leveling off. The decline occurs in all age groups, with the lowest response rates in the youngest group. The fitted equation is Y = 0.51 - 0.0015T (R-squared = 0.539). Ginny also reported on her work evaluating the effectiveness of probability sampling, published in Environmental Monitoring and Assessment (2009), and on applying multiple imputation to account for item non-response.

John Allen (Utah) has seven surveys in process, reflecting work on emerging entrepreneurs with Korsching (Iowa). Other surveys may examine web/mail differences using snowball sampling with a combination of mail and web, on formal and informal leadership structure. Another project involves the adoption and diffusion of new oil-drilling technology. Studies are also being done in China and the US, and are extending to Russia and other places.

Impacts

Publications
