WERA_OLD1010: Reduction of Error in Rural and Agricultural Surveys

(Multistate Research Coordinating Committee and Information Exchange Group)

Status: Inactive/Terminating

SAES-422 Reports

Annual/Termination Reports:

[04/24/2009] [04/12/2010] [04/05/2011] [03/10/2012]

Date of Annual Report: 04/24/2009

Report Information

Annual Meeting Dates: 02/26/2009 - 02/27/2009
Period the Report Covers: 10/01/2007 - 09/01/2008

Participants

Virginia Lesser (Chair: Oregon) lesser@stat.orst.edu;
Jim Christenson (AES Advisor: Arizona) jimc@ag.arizona.edu;

John Allen (Utah State) johna@ext.usu.edu;
Shorna Broussard Allred (New York (Cornell)) srb237@cornell.edu;
Don Dillman (Washington) dillman@wsu.edu;
Courtney Flint (University of Illinois (Champaign-Urbana));
Glenn Israel (Florida) gdi@ifas.ufl.edu;
Fred Lorenz (Secretary: Iowa) folorenz@iastate.edu;
Bob Mason (Oregon State) masonr@stat.orst.edu;
Dave Peters (Iowa) dpeters@iastate.edu;
Rob Robertson (New Hampshire) robertr@cisuix.unh.edu;
Steve Swinford (Montana) swinford@montana.edu;
Marilyn Smith (Nevada) smithm@unce.unr.edu

Brief Summary of Minutes

Jim Christenson: opening comments. Jim reported that the new proposal was officially accepted as WERA 1010, but there had been criticisms of the outreach components. He underscored the need to document outreach activities. The committee discussed ways to ensure the usefulness of the surveys. The WERA 1010 committee members agreed to address this issue and made it an agenda item. We are approved for the next five years.


The meeting ended by discussing topics for next year.

IRB. Courtney: we are picking up on variation in IRB requirements. What should we do? Steve recalled that the job of IRB is not to re-write research proposals but to protect human subjects. What is acceptable at different institutions? John: Can we take inventory and see the differences? Don: one way to move ahead is to make it an agenda item.

Don: another agenda item: how do experiments get done?

Open ended questions. How should the study be conceptualized, and what are the cells in the design? Think about the variables and how they should link together. What are the response variables: words, themes, elaborations, etc.? [Courtney: how do you count words? Steve: n spaces + 1; Ginny: count. John: count.]
What are the predictors: size, box vs. ~box, lines vs. ~lines, motivated vs. ~motivated? [Steve: early respondents have more words]. Gender and handwriting may be variables. Just mail. There is a big set of issues.

Delivery sequence files (DSF). Ginny: Any further studies? Not certain at this time.

The Committee has tentatively agreed to meet again February 25-26, 2010, at the same place, the Best Western on Oracle, Tucson.

Accomplishments

Postal delivery sequence, compared with RDD. The committee began discussions by addressing issues related to the decline in telephone as a means of collecting survey data. In 2008 about 18% of households nationally had no landline, and that proportion continues to grow. One alternative is to add the postal delivery sequence file (USPS DSF) to the sampling frame. USPS delivery sequence files (DSF) have 97% coverage and can be used to deliver mail questionnaires, but they do not include names. One theme in the discussion that followed was to use the DSF to improve internet coverage. Don Dillman and Ginny Lesser noted that coverage using the internet is a serious problem; only about 62% of households have internet access in the home.

Mixed mode web surveys. Don reported on joint research with Benjamin Messer on the effectiveness of mail contact procedures in obtaining survey response by web. Don circulated his PowerPoint presentation and a copy of the Washington Community Survey (WCS) questionnaire on which his presentation was based. A major theme of his report, and of the meetings in general, concerned comparisons of telephone, mail, and web or web/mail surveys. In an earlier survey reported last year, Don and his colleagues used the DSF to obtain a statewide sample of households in Washington, and they obtained a 70% response rate on a mail questionnaire. In this study, they looked at mail and web combinations, and they also used incentives in a factorial design. They personalized questionnaires for the four regions of the state by naming the region on the cover. The web and paper questionnaires looked the same. When contacted, respondents had a choice of mail or internet, and detailed internet contact instructions were provided. Don included a handout with the results on response rates by mode (mail vs. web) with and without incentives. The essential findings were that mail questionnaires got a higher response rate than web, and that incentives worked to increase all response rates, especially internet responses. One conclusion is that we can't just go to the web, because the demographics are very different. Don summarized his major findings: (1) incentives ($5) improve web response rates by about 7%; (2) a URL works about as well as a web card; (3) no difference by time of year; (4) withholding mail questionnaires drives significant numbers to the web; (5) mail alone has the highest response rate; (6) non-response error is substantial along demographic lines, especially among internet respondents; and (7) item non-response was about the same. Don concluded with some thoughts on what to do next. Additional research could address the following design variations: (1) try different topics; (2) implement more intensive follow-up procedures; (3) try longer and more complicated questionnaires.

Glenn Israel continued this theme by discussing his work on extension customer satisfaction over the past 6 years as it relates to differences in mode. His extension service is interested in web surveys and enthused about Survey Monkey. In the past, his surveys have manipulated spacing (box size) for open-ended questions. This year his research asked questions relating to reducing data collection costs, quality differences by mode, and demographic effects by mode. Glenn explained his research design. The options for the 1st contact were: (1) mail only; (2) mail & web choice; and (3) web preference with URL and PIN number only. In the 2nd contact, the 3rd group had a questionnaire option.
There were differences by mode: mail was over 60%, and the web preference group had 45% responding via mail on the second contact. All groups ended up nearly the same, but the web had higher early rates. Overall, Glenn realized a small net savings by including web surveys because 130 respondents quickly completed web surveys and required no costly follow-up. Item non-response was 3-4% in all modes, and there were no significant differences. Demographic variables (age, sex and ethnicity) were examined, and the only differences were due to age and sex. Younger respondents and women were more likely to complete the web surveys. In addition, higher-educated and urban respondents were more likely to use the web. Web respondents were more likely to use extension and more likely to have gone to extension's Solutions for Your Life website. On satisfaction: web clients are more likely to say very satisfied, compared to satisfied or less, but not by much.

Glenn's questionnaires also contained 2 open-ended questions. Overall, over a series of questions, web respondents gave more words and more elaborations to a question about problem solutions, and more words and themes in response to a question about suggestions for improvement. Glenn connected this to leverage-salience theory, which argues that the invitation to give feedback might have more importance to some clients than others. Thus, the reason web respondents gave more extensive responses may have been because they were more likely to have been users of extension services. Glenn also concluded that web surveys cannot be done alone (without mail follow-up); however, data quality was the same and some errors were reduced. One additional thought: the web cover letter is more complex and more negative, and that may affect responses. Discussion followed regarding the best way to design mixed-mode studies.

Ginny next reported on three studies using the USPS DSF. They were all studies of Oregon populations and sampling. The 2008 Department of Transportation study had the following numbers and design: 1,000 each sent out by (1) mail, (2) web/mail with letter instructions, (3) web/mail with special 5-step instructions, and (4) telephone. The four response rates were 35.5%, 28.5%, 22.8% and 31.4%, respectively. Ginny passed around the special, detailed instructions, which were compared to standard instructions. The special instructions did worst! When it comes to missing data, the mail and web/mail rates were trivial; no real difference. Which mode works best in reflecting the state's demographic distribution? For the 2008 survey, and the 2006 survey that was similar to it, mail and web/mail response rates are much closer to the sex distribution of the population than telephone. For employment, telephone was more in line with the population than mail or web. Ginny also discovered that there were differences between modes in responses to questions, and the differences were with the telephone, where the 1st response category is selected relatively more often (primacy effects).

What about cost? We know what it costs to do a phone survey, and Ginny knows what it takes to get the same number of mail and mail/web responses. The cost ratio of mail to phone is 52.7% and of web-mail to phone is 46%, so it is cheaper to do mail and even cheaper to do web-mail. Ginny's costs: phone is $45.44; mail is $23.96; and web-mail is $20.89. The web-mail is cheaper because those who first answer by web do not get a mail follow-up.
The web-mail advantage is greater for large-sample surveys. Bottom line: why web? Costs! Postage and printing are expensive. If you can effectively get people to use the web, the costs of surveys decline.

In a follow-up discussion, Ginny pointed out that post-stratification weighting remains important: in all analyses, it is important to account for demographic differences between sample and population for all modes by post-stratifying the sample (a minimal weighting sketch appears at the end of this section).

In Ginny's 2nd study, she reduced the 5-step web instructions to a 2-point instruction, and the results were mail (31.1%), web/mail without insert (26.0%) and web/mail with insert (21.1%). When examining response rates among the youngest age group (18-25), which constitutes 13.5% of the population, she recorded response rates of 1% by mail, 3.1% among the web/mail without insert group, and 0.08% among the web/mail with insert group.

Ginny concluded with a study of travel experiences of the disabled. She obtained 1,043 responses from MS patients who have flown on a plane. They had been randomized into mail only, mail/web and web/mail. Of these, 754 had both mail and email addresses. The number of completes after the 1st mailing was highest by mail (162/251 = 65%), next was mail/web (127/251 = 55%) and then web/mail (40/252 = 16%). The final response rates were about the same (73-77%) for the three groups.

A discussion followed about ideal circumstances to compare modes. One place where you could have a good comparison between web and mail surveys is with distance education participants, for whom the web is part of daily life. Compare mail only vs. web only; you could add mail/web and web/mail. In further discussion, Don reported asking participants in focus groups why they prefer mail questionnaires. Responses: accessibility; tactile handling; easier and more comfortable. He also asked why people who choose the web do so. Response: the same, plus the ability to multitask.

Open-ended survey questions: Courtney Flint prepared a study comparing open-ended questions with lines and with boxes. Her study was done on two different forestry surveys in Illinois. The surveys were distributed to some who participated in 2004 and some first-time respondents, but they did not randomize into line and space groups. The survey instruments had boxes vs. lines. With respect to the open-ended questions near the end of the questionnaire, respondents were much more likely to offer responses if they had lines (21.9% vs. 9.7%). There were no differences in number of characters.

John Allen reported having addressed the same issue. John collected data in east central Utah. He and his colleagues compared box vs. no box and lines vs. box. About 63% of the respondents were male, which is not the same as most surveys report. The open-ended questions had to do with positive and negative themes, which were elaborated from simple to complex themes. So, does box vs. line matter? They did not find significant differences in the number of words written between box and no box or between box and lines. Women were more likely to provide more responses. Discussion followed that related to the conditions under which box size makes a difference and under which conditions lines encourage more responses and more complex responses.

In response to this discussion, Glenn Israel recalled using a 2x2 design to examine box vs. no box and extra vs. no extra verbal instructions.
There were 2 open-ended questions, and they measured the number of words, themes, and elaborations. The box generated more words, themes and elaborations for one question (how the problem was solved). When the extra verbal instructions were added, there were significant effects favoring boxes. The number of words and level of elaboration showed significant effects; themes did not. Glenn hasn't sorted all of this through yet, but the boxes seem to generate more responses.

Ginny did two open-ended experiments, each with 3 questions. Her studies had 2x2 factorial designs: box vs. lines and lines vs. nothing. For average number of characters, only one question of the 3 had an effect, and the two no-box treatments had more characters. For the second experiment, she compared boxes and no boxes, and nothing was significant. Overall, more words and commentary seem to follow from the no-box condition.

State reports

Steve Swinford (Montana) discussed web-based surveys and included an experiment on the middle category. Two panel-designed studies are coming up, one on gender roles, sexual attitudes and alcohol behaviors leading to a planned intervention. There is room for several general/specific items. The 2nd is a first-year initiative study in which they are testing some scales. The sample size is likely to be in the 300-400 range. His class is currently doing an evaluation of a textbook. Some of the questionnaires have lines (with 88 points) and some have 5-point SD-SA scales; they can be compared.

Don (Washington) circulated his state report outlining his current studies and his plans for the coming year. Overall, he documented a 10-point summary of work in progress and planned work. To highlight, Don points out that he will be moving away from issues relating to visual design and will instead focus on ways of using address-based sampling to improve survey response rates and keep non-response low. He is planning research on whether offering respondents a choice of modes (mail and web) lowers or increases response rates.

Shorna Broussard Allred (New York) reported on several themes. Three projects next year are (1) a survey of initial officials in NY, half of whom have e-mail addresses; (2) a survey of forest owners; and (3) human dimensions research dealing with recall bias. Shorna handed out two tables. One table showed response rates at different times of year, ranging from 37.9% to 48.8%. Surveys conducted over portions of the year (seasonal phase surveys) focused on attitude questions and some descriptive data, whereas the annual survey covering the whole year focused on descriptive aspects of fishing (where; when; how much; etc.). Each phase sample and the annual sample are independent and drawn from license purchases. Questions and answers followed. Question: can we create a correction factor to get at overestimation and underestimation? Ginny: any cognitive interviewing of fishing recall? Shorna: the advice was provided by the bureaucrats rather than the fishermen, and they suggested 3 recall periods per year. Ginny: annual estimates are lower than the sum of the shorter-term recalls. Why? Ginny: Have diaries been tried? Shorna: No. Don: the problem with diaries is recovery; they are often lost, etc. Concluding advice: the changes obtained from the seasonal reports may be valuable even if the absolute values are not accurate.

Marilyn Smith (Nevada) is an extension specialist and has been successful in outreach.
Some of the examples of surveys include providing assistance to other field staff. She offered an example of how surveys sometimes are implemented in rural areas: in one small rural place, the postmaster put a survey in each box in the post office at no cost. Result: a good response rate. Marilyn reported that she gets calls from others to help with impact assessment. She has a program through the BLM to provide programming to develop work skills among young people. These programs offer special challenges for researchers. For example, one funding agency requires Marilyn and her staff to use the agency's survey instrument, and it has bad questions such as "how wrong is it to use drugs?" On some occasions, questions about neighborhood are interpreted by adolescents as referring to their gang.

Courtney Flint (Illinois) reported on some of her recent surveys. They included the Kenai Bark Beetle re-study in 6 communities, with a 42% response rate and some panel data. The panel respondents changed less than the difference between new (wave 2) and old respondents (wave 1). Another study, the Southern Illinois Private Forest Landowner survey, was conducted in parcels of forests and had a 48% response rate. She used Survey Monkey to conduct a community emergency response team survey, with a 28% response rate; it was a national survey. A rural China village household survey was done by a Chinese graduate student, and it had recall issues dealing with sustaining one's household. They had a 95% response rate. Courtney taught a field methods class in which students did a face-to-face survey at the Deer Festival in Golconda. It was a great lesson in sources of error. In an Alaska tribal survey of three communities on berries and climate change (funded by the EPA), they are doing a drop-off and pick-up survey as well as a mail component. Are the methods compatible? In some communities, they want to use both methods in order to involve children of the community. She outlined several surveys that are planned for next year. Examples included a Colorado Mountain Pine Beetle re-study in nine Colorado counties.

Glenn Israel (Florida) elaborated on the analysis from yesterday. He found some evidence of an interaction between use of a box and respondent sex in the propensity to respond to an open-ended question, with females using the boxes more often. He also looked at some of the distributions and found outliers: three respondents wrote enormous numbers of words and skewed the means. The number of words on the web was higher than on the mail questionnaire.

Rob Robertson (New Hampshire) reported on the burden of federal regulation requirements for information from fishermen. He asked about their preferences for ways of giving information: 57% preferred a mail survey from among the 65% who responded. Those who preferred internet surveys completed more internet surveys (4.1/year) compared with 1 per year. Those who preferred mail surveys also completed more mail surveys, etc. Rob continues to work with others on virtual rule making through a wiki-style site which solicits public comments. He did a large watershed survey (3,000) with one mailing.

Bob Mason (Oregon) is looking at marketing studies, especially at Oregon State.
It is promoting its nil expertise and doing seemingly unethical studies, especially in dealing with smoking.

Dave Peters (Iowa/Nebraska) reported on a survey in which he oversamples some communities with large fractions of underrepresented populations. Dave is also on an NRI grant on recruitment and retention of employers. The surveys will oversample smaller communities and administer a survey including 10 occupation categories and the number of openings in each. It also asks about job requirements and skills. Dave showed us the survey instrument they are developing. They will likely develop two forms of the survey, one shorter than the other. The survey is conducted by the University of Nebraska.

Fred Lorenz (Iowa) reported on Ginny's Oregon DOT surveys that have a general question that precedes and follows six specific items. We had four replications. Consistent with previous studies, means on the general question were higher in the S/G condition than in the G/S condition. When regressing the general question against the specific items, the R-square was higher in the S/G condition than in the G/S condition. There was no convincing evidence of a spillover from the general question in the G/S condition and no evidence of recency effects in the S/G condition.

Ginny Lesser (Oregon) reported on changing response rates over the past decade. The adjusted response rates continue to decline, at a 1.8% decrease over the decade (since June 2001), but the rate of decline seems to be coming to an end. The decline holds for all age groups, but the lowest responses are found in the youngest age group. The fitted equation reads Y = 0.51 - 0.0015T (R-squared = 0.539). Ginny also reported on her work evaluating the effectiveness of probability sampling, published in Environmental Monitoring and Assessment (2009), and on applying multiple imputation to account for item non-response.

John Allen (Utah) has seven surveys in progress, reflecting work on emerging entrepreneurs with Korsching (Iowa). Other surveys may involve web/mail differences and a snowball sampling design that combines mail and web; this work is on formal and informal leadership structure. Another project involves the adoption and diffusion of new oil drilling technology. Studies are also being done in China and the US, and are extending to Russia and other places.
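As a concrete illustration of the post-stratification weighting Ginny recommended above, the sketch below shows one common way to form the weights: the ratio of a stratum's population share to its share of the completed sample. It is a minimal, hypothetical example (the age groups, population shares, and variable names are made up for illustration), not code from any of the studies discussed.

    import pandas as pd

    # Hypothetical respondent file: one row per completed survey (assumed columns).
    resp = pd.DataFrame({
        "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "55+"],
        "q_satisfied": [1, 0, 1, 1, 0, 1],
    })

    # Known population shares for the same age groups (hypothetical census figures).
    pop_share = pd.Series({"18-34": 0.30, "35-54": 0.35, "55+": 0.35})

    # Post-stratification weight = population share / sample share, per stratum.
    sample_share = resp["age_group"].value_counts(normalize=True)
    weights = resp["age_group"].map(pop_share / sample_share)

    # Weighted estimate of the proportion satisfied, correcting for the
    # over-representation of older respondents in this toy sample.
    weighted_p = (weights * resp["q_satisfied"]).sum() / weights.sum()
    print(round(weighted_p, 3))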

Publications

Impact Statements

Back to top

Date of Annual Report: 04/12/2010

Report Information

Annual Meeting Dates: 02/25/2010 - 02/26/2010
Period the Report Covers: 10/01/2009 - 09/01/2010

Participants

Minutes
WERA 1010: Error Reduction in Rural and Agricultural Experiments
February 25-26, 2010
The 2010 annual meeting of WERA 1010 was convened by Chair Ginny Lesser at 8:15am on Thursday, February 25 at Tucson InnSuites. Present were:

Virginia Lesser (Chair: Oregon) lesser@stat.orst.edu
Jim Christenson (AES Advisor: Arizona) jimc@ag.arizona.edu

Patricia Hipple (USDA) PHIPPLE@nifa.usda.gov
John Allen (Utah State) johna@ext.usu.edu
Shorna Broussard Allred (New York (Cornell)) srb237@cornell.edu
Don Dillman (Washington) dillman@wsu.edu
Glenn Israel (Florida) disrael@ufl.edu
Fred Lorenz (Secretary: Iowa) folorenz@iastate.edu
Dave Peters (Iowa) dpeters@iastate.edu
Marilyn Smith (Nevada) smithm@unce.unr.edu
Steve Swinford (Montana) swinford@montana.edu
Fern Willits (Penn State) fkw@psu.edu

Still interested but couldn't make it are:
Angela Mertig (Tennessee: Middle Tennessee State) amertig@mtsu.edu
Nick Place (University of Maryland)
Wuyang Hu (University of Kentucky)
Rob Robertson (New Hampshire) robertr@cisuix.unh.edu
Todd Rockwood (Minnesota) trockwood@mn.rr.com
John Saltiel (Montana) jsaltiel@.gmail.com
Courtney Flint (University of Illinois (Champaign-Urbana))
Bob Mason (Oregon State) masonr@stat.orst.edu

Deceased (2009):
Robie Sangster (BLS)

Brief Summary of Minutes

Agenda:
Welcome
Studies on mixed-mode surveys
Studies on open-ended questions
General/specific questions.
Report on: National Institute of Food and Agriculture (NIFA)
Mail response rate trends
Remaining state reports
Discussion of outreach activities
Other activities

Introductions: Committee members introduced themselves and their affiliations. A special welcome back was extended to Fern (Bunny) Willits, who failed in her attempt to retire.

Mixed mode surveys: Don Dillman reported on response rates by mode of contact, referring to his first handout, where the 1st slide is labeled "Using address-based sampling to conduct mail and web surveys: Results from three studies." First, Don noted that address-based sampling (ABS) likely provides better coverage than random digit dialing (RDD). Don introduced his studies comparing mail and web surveys. He focused on the most recent of three studies, the Washington Economic Survey, conducted in the Fall of 2009, which had a 12-page questionnaire with 46 questions and up to 95 responses. It had 6 treatment groups and tested the effect of two $5 incentives, one with the initial request and a second at the time the replacement questionnaire or web access information was circulated. It also tested the effect of priority mail. The treatments were: (1) web preference (n = 700); (2) web preference & priority mail; (3) web preference, priority mail & second $5 incentive; (4) mail only; (5) priority mail only; (6) priority mail only & second $5 incentive. The sample sizes were 600 for the 1st three groups and 700 for the last three groups. The response rates were:

" The mail only group using priority mail plus a second $5 incentive had a 68% response rate whereas the web preference with priority mail plus the second $5 incentive had 34% by web and an additional 18% by mail follow up (52% total).
" The initial $5 incentive sent with the survey request had been shown to be important in a 2008 survey he summarized. In that study a mail preference treatment was increased13.6 percentage points by the use of a $5 incentive and the web preference group was increased by 20.6 percentage points.
" When priority mail was sent with the additional incentive in the current experiment, the response rate increased another 4.4% in the web preference group and 9.6% in the mail only groups suggesting this treatment could be used in an effort to maximize response rates.

Don pointed out that web and mail together bring in a larger array of respondents. However, both web and mail surveys under-represent lower-education and lower-income respondents, although mail appears to be closest to the general public. Ginny advocated for sample weighting to correct for biases associated with education and income; this will be done by Don as the analysis continues, to the extent sample sizes allow.

A general discussion followed. One comment was that incentives save money and make studies more valid by improving representativeness. Incentives have to be used ahead of time; they don't work if they are promised once the questionnaire is returned. Ginny says she can't use incentives in state agency surveys in Oregon for political reasons, at least at this time. OMB allows incentives, but regulates how much as well as the way they can be used. We also noted that you can approach people at random by telephone (a public utility) and by mail (a government agency) but not by email, which is private. This is the reason that address-based sampling using the U.S. Postal Service Delivery Sequence File is so important. Ginny noted from her experience that incentives increase response rates by 20%, but Don added that they may not reduce non-response error.

Don went on to discuss choice between mail and web, using a 2nd handout, "Improving response for web and mail mixed-mode survey: the effects of mode choice, offering modes in sequence, and adding email contacts." What happens when you give a choice? Summary: if you give a choice, respondents go with mail. If you give people too many choices, then it gets more complicated and participation goes down. In Don's study, one concern was that he needed a sample that had email, so he used random samples of university students. Don outlined the experiment. See the 4 treatments (each of 700 students) on page 2 of the 2nd handout: (1) postal mail request to respond by mode of choice, web or mail; (2) postal mail request to respond by mail; (3) postal mail request to respond by web; and (4) postal request to respond by web, with a link to the website sent by email 3 days later. Then, when the response rate flattened, so that few new responses were coming in, Don added a mode switch in which the choice group (group 1) received another request to participate, the mail group (group 2) received a request to participate via mail, and groups 3 and 4 received a request to participate by mail. Over the whole experiment, primary response rates were 50.5%, and the mode switch added another 4.7%, to 55.2%. The largest increase came in the 3rd group, when the web group was asked to participate by mail (7.5%).

The next experiment followed up with more treatment groups. This "Fall 2009 Treatment Groups, Contact Methods and Incentives" handout included a wider range of treatment groups, from email contact with no incentive to the use of intermingled postal and email contacts. The response rate by email only (without an incentive) was 20%. Use of an incentive delivered by an initial postal contact brought response up to 38%. The intermingling of contacts by mail and web brought response up to 46%. The best response came from offering choice with e-mail augmentation, i.e., a sequence of postal pre-notice, postal request, email follow-up, postal replacement questionnaire, and email follow-up. However, this opportunity to respond by either mail or web combined with email augmentation was not significantly higher than conducting the survey by mail questionnaire alone. Discussion followed.

Ginny Lesser followed, continuing the discussion of response rates by different modes in her Department of Transportation studies. Ginny reported on 7 mixed-mode surveys, none of which offered incentives because all were conducted for state agencies, which did not allow the use of incentives. She has phone, mail, and web/mail approaches in the 2006 and 2008 ODOT studies. The phone mode was dropped after 2008 because the response rates were low and the telephone method was expensive. Each survey used sample sizes of 1,000 in each group. Among her results, response rates in 2006 and 2008 were around 30%. ODOT pre-letters are worth about 4-5%. Telephone is too costly and was dropped. For web and mail, letter instructions improve responses (28.5%) over special inserts, like the cards used by Don in one of his studies (22.8%).

Two studies were of licensed boat owners, where boats are either longer or shorter than 27 feet (Ns are over 3,300 in each group). The objective: determine the annual amount of gasoline consumed by boats. Ginny outlined alternative modes, including mail, web/mail and web/mail-option (paper or web). One objective was to keep the number of contacts in each group the same. Thus, if the respondent had no internet access, the respondent also received 2 contacts with a paper questionnaire. Ginny showed complex results. Response rates by web alone are very low, but mail with a web option brings in slightly less than just mail and web. When the number of contacts to the groups was kept the same, the mail method provided the highest response rates. When the number of contacts was increased by one mailing for the web/mail group, that mode provided the highest response rates. From this, there are no significant differences between mailings when all the combinations are included. The bottom line: to get people to the web, a sequential strategy is needed. One point is that if you have small sample sizes in surveys, using the web may not be the most efficient approach, given the cost of putting the questionnaire on the web.

What is the difference between Don and Ginny? Don: the numbers seem to be in the same ballpark. The patterns are the same in several studies. John asked: how long is the questionnaire? Ginny: two pages. Discussion followed. Shorna asked about backlash. They have been monitoring, and multiple contacts draw more negative comments. Shorna noted that a clause that says, if you respond, then no more follow-ups, worked to increase response.

Glenn Israel followed with mixed-mode experiments. Glenn reminded us about his 2008 experiment (n = 1318): mail only, mail/web choice and web preference. The studies are of cooperative extension service (CES) clients, and many of them are regular email users. Responses are similar to other studies: mail only was highest (65%), followed by mail with web choice (59.2%) and web preference (52.6%). The 2009 survey was mail only, e-mail preference (letter to alert; e-mail invitation with link; email reminder; reminder letter + paper questionnaire) and web preference (letter; invitation letter with URL and PIN; standard reminder; reminder letter with URL and replacement questionnaire). Total sample size was over 1,400, but only 430 provided an e-mail address. The e-mail preference group had the highest response rate (63.5%); mail only was 56.3% and the web preference group was 48.2%. There are some modest differences in who responds, with different profiles by mode of survey; for example, sex and residence predict early response. People who respond early reported having visited CES websites. People who do not respond early, or who do not visit the CES website, tend to be older and female. Implication: for people who provide an e-mail address, go with an e-mail invitation and follow with paper; these respondents tend to be younger.

Glenn moved next to open-ended questions. Glenn had a handout with questionnaires in Spanish and English. There were 4 forms of a two-page questionnaire, with experiments built into questions 5 (1st page) and 11 (2nd page). He noted that more Hispanics responded in English than in Spanish! Glenn framed his discussion in terms of Grice's (1975) maxims: relation, quality, quantity, and manner. He is thinking about verbal and visual elements in open-ended questions as devices to create conversation. There was a paper by Sudman, Bradburn & Schwarz (1996) in which maxims refer to quantity, completeness and manner. Idea: ask questions that elicit responses. Form A was a standard box and basic question (Q5b: "Please explain why it did or did not solve the problem or answer your question."); Form B elaborated the question to get at norms of relevance and quantity (Q5b: "Please explain what your information need or problem was, what you did with the information, and what the results were."); Form C addressed visual design by breaking Q5b down into 3 distinct question and answer spaces; and Form D repeated the general question in Form A but added a verbal prompt ("It is very important for us to understand as much as we can about the use of information provided by the Extension office."). Question 11 was designed as a parallel test, where Form A asked the question "What can we do to improve our services?" Data were collected this past summer. To gauge how good the answers are, Glenn created an optimality index (Q = quantity, M = manner, S = structure; R = relation):

Index = R*(Q + M + S).
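A minimal sketch of how such an index behaves: the relation term R acts as a gate, so an off-topic answer scores zero no matter how long or well structured it is. The component ratings below are hypothetical and are not Glenn's actual coding scheme.

    def optimality_index(relation, quantity, manner, structure):
        """Index = R * (Q + M + S); inputs assumed to be small integer coder ratings."""
        return relation * (quantity + manner + structure)

    # Hypothetical coded answers: R in {0, 1}; Q, M, S rated 0-3 by a coder.
    on_topic = optimality_index(relation=1, quantity=3, manner=2, structure=2)   # 7
    off_topic = optimality_index(relation=0, quantity=3, manner=3, structure=3)  # 0
    print(on_topic, off_topic)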

The analysis is not yet done. About 60% responded to the open-ended questions. Discussion followed about where to go from here. Glenn is working on a paper for the Rural Sociological Society meetings in August 2010.

Shorna Broussard Allred discussed two experiments from her research on distance learning in forestry education. One is a watershed study in New York comparing those who live close to Wappinger Creek with those further away. Shorna showed examples of the questionnaires and provided a summary table of response rates. There was a random assignment to questionnaire treatments. One concern: better response rates from upstate than downstate NY. The results are in the handout. Using AAPOR Response Rate #6 (RR6), the overall response rate was low (26%); the rates were higher for riparian owners (28.9% vs. 23.4%), but color vs. black & white questionnaire design made no significant difference. Overall, color had about a 1% effect (26.5% vs. 25.5%). Black & white booklets were less expensive and the smaller envelopes cost less. A discussion of color followed: Ginny recalled some surveys where there is an effect due to color (up to 8 percent), but most differences were pretty small.
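For reference, AAPOR Response Rate #6 counts partial interviews as respondents and treats cases of unknown eligibility as ineligible. The sketch below shows the calculation with made-up disposition counts; it is not Shorna's actual data.

    def aapor_rr6(complete, partial, refusal, noncontact, other):
        """AAPOR Response Rate 6: (I + P) / ((I + P) + (R + NC + O)).
        Cases of unknown eligibility are excluded, i.e. treated as ineligible."""
        responded = complete + partial
        eligible = responded + refusal + noncontact + other
        return responded / eligible

    # Hypothetical disposition counts for a mail survey.
    print(round(aapor_rr6(complete=240, partial=20, refusal=60, noncontact=650, other=30), 3))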

Shorna's 2nd study is on distance learning and forestry education. The purpose was to evaluate distance learning, and the database included 1,099 names. The survey was done by web only, and she got 522 responses (46%). Shorna provided a handout, where Table 1 offered a summary of the methodological design. Of the 1,099 names, some had e-mail addresses (see Table 1). The results are in Table 2. Notice the differences between groups 1 and 2: email only resulted in a 44% response rate; email + advance postcard + reminder postcard yielded a 54% response rate. Conclusion: you can boost response rates by using e-mail and a postal advance. Discussion followed. Don noted that if there are data in the register, then we can do some demographics to see if respondents differ from non-respondents.

John Allen did not have new experiments, but he discussed his studies on barriers to adopting energy-efficient methods. One study is about knowledge of available technologies. One study is with industry, and they can't get hold of industry executives through mail. John has one contract with the Department of Transportation, going with indigenous plants on the median due to water concerns; no incentive is allowed. Another contract is in Peru. Biggest challenges: response rates in the face of budget cuts.

Fred Lorenz discussed general-specific experiments, using data from Iowa communities and four replications of Iowa Department of Transportation data. He presented material from last year and then added an SEM model in which the specific items are treated as manifestations of a latent variable. He then compared models that looked at the effects of the specific latent variable on the general question, depending on whether the general question preceded or followed the specific items. The results were not significant, indicating that the extent to which the general question was explained by specific items was not sensitive to question order. One extension of the model is to look at the 1st-order correlations between specific items. He found that, for the Oregon DOT data, there were strong 1st-order correlations, suggesting that the 2nd item on the specific list is influenced by the response to the 1st item, the 3rd item is influenced by the 2nd item, etc. This work is continuing and will be presented at the Rural Sociological Society meetings in August 2010.
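The 1st-order (adjacent-item) correlation check Fred described can be sketched as below. The data frame, item names (s1-s6), and simulated responses are hypothetical; with real questionnaire data, strong correlations between adjacent items would suggest the carryover Fred found in the Oregon DOT data.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Hypothetical responses to six specific items (1-5 scales), one row per respondent.
    items = [f"s{i}" for i in range(1, 7)]
    df = pd.DataFrame(rng.integers(1, 6, size=(200, 6)), columns=items)

    # Correlate each specific item with the one immediately before it on the questionnaire.
    for a, b in zip(items, items[1:]):
        print(f"{a}->{b}", round(df[a].corr(df[b]), 2))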

Steve Swinford presented his state report. No new data, but he has outreach. First, Steve is working with Most of Us, a social norm program relating to alcohol and drug use (i.e., "Most of Us don't do drugs"). This study is run through the schools, and Steve is involved. General-specific questions are a possibility, particularly on the web site. Second, they are worried about honesty; they ask about a fictitious drug. Third, there are social and epidemiological measures of drug use, and they are interested in correlating the two.

Steve is also involved in outreach. One is a survey of pay levels of city officials. They are also interested in surveying a wide range of government officials, especially through the associations of county governments and municipalities. Steve will continue to work on the design aspects of the studies. He also had students who presented papers in Chicago (Midwest Soc), and one student did an experiment on male student masculinity and reports of sexual activity.

Dave Peters talked about labor vacancy studies (NRI) in cooperation with the University of Nebraska, NDSU and SDSU. The surveys address labor market shortages in rural communities (e.g., western Nebraska). They looked at selected communities where the demand for jobs exceeds supply. Idea: develop recruiting strategies. The survey instrument was developed from BLS items and identified 4 occupational groups: professional & management; production; administrative, sales and services; and healthcare support. Questions were on job openings, benefits, recruitment efforts and retention, etc.; many were open-ended. The discussion turned to the design of the questionnaire. One problem with the survey was that the 2-column layout of the questionnaire was too complicated. Don suggested some spacing techniques for making the questions flow better. Data were collected by mail to employers, using 5 contacts, including telephone calls to the 25 largest employers. The study also had community advisory committee (CAC) lists to identify missing employers and key people within organizations. Response rates varied by community, from 100% of 4 respondents in one community on down to single digits, often for mid-size firms, which is where there are often a lot of jobs. There were high levels of non-response from city and county government; restaurants and banks were examples of groups that were slow to cooperate and did not respond. Small employers without vacancies often did not respond. Measurement error resulted from ambiguity about how to define occupation (e.g., are nurses professional or healthcare support?). There was also an effect of the recession on responses, which may have made the survey atypical.

Dave will work with an extension survey this summer. John: what about ownership, a local owner vs. large corporations? Don: can we go back, and can we improve response rates? Approach it differently. The BLS surveys start by making calls (telephone contacts) to employers. They are careful from the beginning, and then you can get good rates. To handle companies without vacancies, tell them up front that it is still important. Ginny reiterated by underscoring initial phone calls to find out whom to send the questionnaires to.

Marilyn Smith reported on her cooperative extension work in Nevada, especially the local application of surveys. She provided several examples of reports, including one award-winning report, that show how survey research results were being disseminated. Researchers are interested in needs assessment in communities that are largely mining rather than traditional agriculture. Marilyn assists younger professors. Marilyn also talked about impact evaluations (see "Involving youth in community emergency preparedness: Impact of a multistate initiative"). In this study, she looked at immediate, 6-month and year-long impacts; see especially Tables 2 and 3. One implication: survey research has an important role in cooperative extension evaluation. One handout (on 4-H Bootstraps) concerns youths ages 18-25 who are put to work on public lands. Many have dropped out of school, and the program encourages re-entry into school; notice the knowledge-gained schedules (Table 2), and then what happens (subsequent tables). Discussion followed. Ginny's comment: Marilyn provides a much needed outlet. Marilyn answered questions on the context and goals of the extension-based programs to involve more and more non-profits.

Patricia Hipple reported on changes in the USDA, as summarized by the National Institute of Food and Agriculture (NIFA) factsheet that she provided. By way of background, the USDA is undergoing major changes as a result of the 2008 farm bill. It did away with the National Research Initiative (NRI), which was languishing, at least in comparison with NSF and NIH. The Danforth study suggested that the NRI be drawn out of the USDA and become a separate institute, the place to fund outstanding research. But concern about formula funds and Hatch dollars led to create 2100. Overall, NIFA is replacing CSREES, and the proposal is to build NIFA into a competitive research institute with exponential growth.

The Agriculture & Food Research Initiative (AFRI) will be different from the NRI. The NRI had 31 competitive programs that were roughly discipline-specific. Over the years, they tried to include extension and education, and that is the model for AFRI: a set of integrated competitive opportunities. There will be 7 RFAs, released in mid-March. The handout identifies 5 of the 7: global food security and hunger; climate change; sustainable energy; childhood obesity; and food safety. The last two are not yet known, but one is likely on fundamental plant/animal research. The 7 areas are broad and society-based. As an important departure from the past, individual scholars will not submit proposals. The awards will be huge ($2-10 million) and they will go to teams. There will be hundreds, rather than thousands, of applications. Each will be expected to have a significant social science component. Social scientists are well positioned to lead projects, compared to bench scientists. The AFRI is addressing societal problems. On the bottom of the handout, note "form does not follow function." All staff members of AFRI are being re-organized into 4 institutes and one center, not directly aligned with the 5 RFAs. The institutes and center are the Institute of Food Production and Sustainability; Institute of Bioenergy, Climate, and Environment; Institute for Food Safety and Nutrition; Institute of Youth, Family and Community; and Center for International Programs.

There are still questions about coordinating committees, Hatch dollars, etc. Patricia will be assigned to one of the 4 institutes. Shorna: What is happening to Hatch dollars, extension allocations, etc.? Patricia: they are being negotiated. The Center for International Programs was in its ascendancy because one of the undersecretaries was pushing to feed the world, but that undersecretary has since become head of USAID. Best advice: work through people you know. Discussion followed. Patricia noted that our research group (WERA 1010) is precariously positioned to either take leadership or be servants for biophysical scientists. Thus, this group could guide NIFA and others in the land grant system so that surveys are done with high quality. John: Hatch funds? The lobbies have been effective and the Hatch funds are secure for a while, but all land grants received a letter directing them to align with the priorities. Time was spent discussing the implications and strategies for getting ahead as social and survey scientists. One theme: survey research is important; we need to provide guidelines to ensure that it is done well. Ginny suggested that we go to the OMB website on standardized practices on surveys. It's done. We agreed that as a group we should distill the 29-page OMB document. Jim Christenson and Patricia will focus it. The way that it is expressed is in terms of the human and social implications of the research.

The last topic returned to the reduction in response rates. Ginny showed response rates for the Oregon Department of Motor Vehicles (DMV) survey for each month since April 1994. The data show declines, with response rates starting at about 70%. She used a time series analysis, fitting a piecewise linear trend with 3 pieces. The 1st segment (prior to February 2001) used a one-page questionnaire with 4 mailing contacts; the second segment (March 2001 - July 2003) moved to a two-page questionnaire, used a 3-contact approach by dropping the postcard, and changed the first contact to a pre-letter; the third segment returned to 4 contacts (bringing the postcards back in). The model incorporates a 2nd-order autoregressive error term. Other factors were not significant, including minor consent, number of questions, and a question about identification of the respondent. So what is the decline? Prior to 2001, a 1.42% decline in response rates per year; between 2001 and 2003, a 7% decline per year; and now a 0.6% decline per year. The trends apply to all groups: men have lower response rates than women, the young have lower response rates than older individuals, etc. There are minor variations if you add seasonal components to the time series.
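A rough sketch of this kind of model, a piecewise linear trend with breaks at the design changes and second-order autoregressive errors, is shown below. The monthly series is simulated and the break dates are approximate; this is an illustration of the modeling approach (here via statsmodels' ARIMA with exogenous trend terms), not Ginny's actual analysis.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Monthly index from April 1994 onward (hypothetical span, simulated rates).
    dates = pd.date_range("1994-04-01", periods=200, freq="MS")
    t = np.arange(len(dates))
    k1 = int((pd.Timestamp("2001-03-01") - dates[0]).days / 30)  # approx. break 1
    k2 = int((pd.Timestamp("2003-08-01") - dates[0]).days / 30)  # approx. break 2

    # Piecewise-linear basis: overall slope plus slope changes after each break point.
    X = np.column_stack([t, np.maximum(t - k1, 0), np.maximum(t - k2, 0)])

    rng = np.random.default_rng(1)
    y = (0.70 - 0.0012 * t - 0.004 * np.maximum(t - k1, 0)
         + 0.0045 * np.maximum(t - k2, 0) + rng.normal(scale=0.02, size=len(t)))

    # Regression with AR(2) errors: ARIMA(2, 0, 0) with exogenous trend terms.
    res = ARIMA(y, exog=X, order=(2, 0, 0), trend="c").fit()
    print(res.params)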

Don argues that response rates may not be declining. His data are from surveys of visitors to National Parks using in-person delivery and mail-only response. Don described the survey, for which procedures remained the same over twenty years, but sponsors increased the number of items, the number of pages (12 to 16), and the number of items per page. They also increased the number of replacement questionnaires to two. Response rates correlated negatively with the number of pages and items. The average response rates have declined from 80% in the late 1980s to about 70% in recent years. The overall mean response across 20 years has been 76%. When they bring in measures of salience, about 46% of the variance is explained, and year adds 4%. Because of differences in pages, number of items and the use of replacements, it's difficult to write up a definitive analysis. Work on this paper will continue.
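The incremental variance-explained comparison Don described (fit salience and questionnaire-length variables first, then ask how much adding year improves R-squared) can be sketched with ordinary least squares as below. All variable values are simulated and hypothetical; the point is only the hierarchical comparison of R-squared.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 120  # hypothetical number of park surveys

    pages = rng.integers(12, 17, n)
    items = rng.integers(40, 120, n)
    salience = rng.uniform(0, 1, n)
    year = rng.integers(1988, 2009, n)
    rate = (0.9 - 0.01 * (pages - 12) - 0.0005 * items + 0.1 * salience
            - 0.002 * (year - 1988) + rng.normal(0, 0.03, n))

    # Base model: design/salience predictors only; full model adds survey year.
    base = sm.OLS(rate, sm.add_constant(np.column_stack([pages, items, salience]))).fit()
    full = sm.OLS(rate, sm.add_constant(np.column_stack([pages, items, salience, year]))).fit()
    print(round(base.rsquared, 3), round(full.rsquared - base.rsquared, 3))  # added share for year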

The meeting ended with a discussion of topics for next year.

A list of publications printed in 2009 is given in Appendix A.

Meeting adjourned. Next meeting will be February 24-25, 2011.

Minutes are submitted by Fred Lorenz

Accomplishments

Publications listed below show the accomplishments of the group. The meeting ended with a discussion of future work, publications, and impacts.

Publications

2009 Publication List

1. Martin, Elizabeth Ann and Don A. Dillman. 2008 (published in 2009). Does a Final Coverage Check Identify and Reduce Census Coverage Errors? Journal of Official Statistics 24(4): 571-589.

2. Rookey, Bryan D., Steve Hanway, and Don A. Dillman. 2008 (published in 2009). Does a Probability-Based Household Panel Benefit from Assignment to Postal Response as an Alternative to Internet-Only? Public Opinion Quarterly 72(5): 962-984.

3. Dillman, Don A., Jolene D. Smyth and Leah Melani Christian. 2009. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. John Wiley: Hoboken, NJ. 499 pp.

4. Dillman, D.A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., & Messer, B.L. 2009. Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response, and the Internet. Social Science Research 38(1): 1-18.

5. Christian, Leah Melani, Nicholas L. Parsons and Don A. Dillman. 2009. Measurement in Web Surveys: The Importance of Visual Layout and Design. Sociological Methods and Research 37(3): 393-425.

6. Dillman, Don A. 2009. Some Consequences of Survey Mode Changes in Longitudinal Surveys. Chapter 8 in Lynn, Peter et al. (eds.), Methodology of Longitudinal Surveys. John Wiley: London. Pp. 127-137.

7. Millar, Morgan M., Allison C. O'Neill and Don A. Dillman. 2009. Are Mode Preferences Real? Technical Report 09-003. Washington State University Social and Economic Sciences Research Center: Pullman. 52 pp.

8. Smyth, Jolene, Don A. Dillman, Leah Melani Christian and Mallory McBride. 2009. "Open-Ended Questions in Web Surveys: Can Increasing the Size of Answer Boxes and Providing Extra Verbal Instructions Improve Response Quality?" Public Opinion Quarterly 73 (Summer): 325-337.

9. Munoz-Hernandez, B., V.M. Lesser, J. Dorney, and R. Savage. 2009. Survey methodology for assessing the map accuracy of geographically isolated wetlands. Environmental Monitoring and Assessment 150: 53-64.

Impact Statements

  1. Higher quality survey research
  2. Setting standards for survey response rate expectations
Back to top

Date of Annual Report: 04/05/2011

Report Information

Annual Meeting Dates: 02/24/2011 - 02/25/2011
Period the Report Covers: 10/01/2009 - 10/01/2010

Participants

Virginia Lesser (Chair: Oregon) lesser@stat.orst.edu
Jim Christenson (AES Advisor: Arizona) jimc@ag.arizona.edu
Don Dillman (Washington) dillman@wsu.edu
Glenn Israel (Florida) gisrael@ufl.edu
Fred Lorenz (Secretary: Iowa) folorenz@iastate.edu
Todd Rockwood (Minnesota) rockw001@mn.edu
Steve Swinford (Montana) swinford@montana.edu

Brief Summary of Minutes

Still interested but could not make it are:

Patricia Hipple (USDA) PHIPPLE@nifa.usda.gov
John Allen (Utah State) johna@ext.usu.edu
Shorna Broussard Allred (New York (Cornell)) srb237@cornell.edu
Dave Peters (Iowa) dpeters@iastate.edu
Marilyn Smith (Nevada) smithm@unce.unr.edu
Fern Willits (Penn State) fkw@psu.edu

Angela Mertig (Tennessee: Middle Tennessee State) amertig@mtsu.edu
Nick Place (University of Maryland) nplace@umd.edu
Wuyang Hu (University of Kentucky) wuyang.hu@uky.edu
Rob Robertson (New Hampshire) robertr@cisuix.unh.edu (email does not go through)
John Saltiel (Montana) jsaltiel@.gmail.com
Courtney Flint (University of Illinois (Champaign-Urbana)) cflint@uiuc.edu

Deceased:
Robie Sangster (BLS), 2009
Bob Mason (Oregon State), 2011

Agenda:
Welcome - Ginny Lesser
Administrative advisor - Jim Christenson
Studies on mixed-mode surveys - Dillman, Lesser, Israel
General/specific questions - Fred Lorenz
Studies on open-ended questions - Glenn Israel
Non-response paper for AAPOR - Dillman, Lesser, Israel
Mail response rate trends - Ginny Lesser
Remaining state reports -
Discussion of outreach activities
Todd Rockwood on respondent driven sampling
Other activities


Ginny Lesser opened the meeting; we remembered Bob Mason, a founding member of this committee and an active participant through our February 2009 meeting. Our administrative advisor, Jim Christenson, announced that he is planning to retire in December, so we need to find another administrative advisor. Several names were suggested. The committee expressed its appreciation to Jim for his interest in our work and his hospitality.

The time and place of next year's meeting is being negotiated.

The notes that follow summarize important points in each power-point talk.

The discussion began with Don Dillman's studies of mixed-mode surveys, especially his efforts to meld web and mail surveys. Don discussed several results drawn from three papers, two in Public Opinion Quarterly and one in the American Behavioral Scientist. These papers are concerned with strategies for combining mail and web, often using email augmentation as a conduit for pushing respondents in the direction of web responses. The basic idea is to first send the questionnaire by mail and then follow up with an email and an electronic link. He also included incentives for one group. In one experiment, Don reported that when college student respondents received only postal contacts, the response rate to a web survey was 41%; when the postal contacts were augmented with an email, the response rate increased to 54%. The essential strategy that came out of this is to send incentives via mail and then augment with emails.

A second study compared web and mail surveys using variations on mail and web contacts. One result was that in three separate surveys - one in the Lewiston/Clarkston region, plus the Washington Community Survey (2008) and the Washington Economic Survey (2009) - the mail-only mode had the highest response rates (57% - 71%). When the first contact was by web, the response rates were lower (31% - 41%) but increased when a later, 4th contact was made by mail (46% - 55%). The $5 incentive increased web + mail responses by 20.6% and mail only by 13.5%. In looking more closely at who responds, Don notes that compared with mail follow-up, web respondents tend to be younger and better educated with higher incomes, and are more likely to be married, have children and have internet access in their homes. However, when combined, the web + mail respondents are very similar to mail-only respondents. In summary, Don noted that there are good coverage, response rate and non-response error reasons for using mixed-mode designs. Further, measurement differences may result owing to visual and question format differences. A subsequent discussion raised questions about going further, to smart phones, iPods, etc. These are new areas to explore.

Don reported on one more study, currently being conducted in Pennsylvania, Alabama and Washington, on how people want their electricity (a tailored design study). The methodological twist was to use questionnaire covers that are tailored to each state, and also to push harder to get people to go to the web. There were four treatments, each with 4 contacts: mail only, a small web push, and a strong web push. The routine: (1) mail inviting participants to go to the web, with $5; (2) a reminder; (3) introduce paper with $5. The aim is to get a higher proportion to go to the web. If we can get the web demographics to better represent the population, then we can drop mail. Discussion followed.

Following up on Don's experiments, Glenn Israel reported on obtaining responses by mail and web. The first slides reviewed the sequence of studies on extension customer satisfaction. Glenn's treatments were: mail only (standard practice since 2003); mail with web choice; and web preference (the initial request included web only, but follow-ups provided a choice of web or mail). The mail-only treatment had a 64.5% response rate, compared to mail with web choice (51.4% by mail and 7.8% by web) and web preference (23.4% by mail and 29.2% by web). Glenn elaborated on this in a second survey in which he either had, or did not have, an email address. Among those for whom he had email addresses, some received only a mail questionnaire, and the response rate was 52.3%. Those selected for web preference responded by either mail (12.4%) or web (35.8%), and those selected for email preference responded by mail (7.7%) and web (55.8%). If an email address was not provided, 56.3% responded when they received only mail contacts, compared with 50% total for those given the web preference (28.6% mail and 21.4% web). Glenn's slide presentation also discussed related issues dealing with response rates by combinations of contacts. His results indicated small differences by mode in respondent reports of satisfaction with extension services. The mail-only group seemed to have the lowest proportion who were very satisfied, compared with satisfied or less. Time was spent discussing Glenn's slides.

Ginny Lesser continued with her state report, which included two mixed-mode studies: one testing cover letter wording and one testing the usefulness of a fifth contact made with either an additional mailing or a postcard. Her state report also covered a summary of mixed-mode surveys done at Oregon State and a trace of response rates over time. For Study 1, Ginny discussed the effectiveness of cover letter statements pointing out the efficiency of completing the questionnaire on the Web. One version of the cover letter included an additional sentence encouraging respondents to use the internet to save money. The mail-only treatment resulted in a 41% return. She then compared response rates by treatment (standard cover letter vs. a cover letter emphasizing the savings of completing the questionnaire by Web). For the Web-followed-by-mail contact, 8.5% responded by Web with the standard letter and 10.9% responded by Web with the savings-emphasis letter. When the sample group was offered the option of completing the survey by Web or mail, 3.9% responded by Web with the standard letter and 7.4% with the savings-emphasis letter.

Ginny's second study, of motorized boat owners, again compared combinations of mailings. The sample of motorized boat owners was first asked to complete the questionnaire by Web; nonrespondents were then sent mail follow-ups. The fifth contact differed between the two treatment groups: in one group the last mailing was a postcard, and in the other it included another copy of the questionnaire along with another cover letter. The final response rates showed that the postcard was less effective than a final mailing that included a questionnaire with a cover letter: 56% for the questionnaire-and-cover-letter mailing vs. 49% for the postcard.

Overall, Ginny summarized all seven of her mixed-mode surveys over the past years and reviewed her summary points: if you want to drive people to complete a questionnaire by the web, the first contact should provide only a Web link and not offer a choice; when using a Web-followed-by-mail approach, use 5 contacts, and make the 5th contact a letter rather than a postcard; do not add additional instructions about accessing the web; use an additional line in the cover letter to comment on the cost savings of the web; and use a colorful front cover for the mail questionnaire.

Ginny's discussion was followed by a telephone conference call with Don's students, Benjamin Messer and Michelle Edwards, on issues relating to item non-response in web and mail responses to general public surveys. Don gave a handout summarizing non-response. The concern: where do we go with mixed mode? Should we mix high- and low-response methods? A series of tables was passed around comparing response rates by mode, including regression estimates showing sources of responses. The tables are to become part of a presentation at the AAPOR meetings later this year, and members around the table offered Don and his students suggestions on how best to present the data. One suggestion was to focus more attention on the descriptive statistics, which tell an interesting story once the results are simplified.

The committee adjourned Thursday evening at 5pm and reconvened Friday morning.

Todd Rockwood talked about the difficulties of sampling special and sensitive populations with a limited number of referrals. He briefly described respondent driven sampling (RDS) in slides co-authored with Melissa Constantine. His research focuses on networks of special groups in the Minneapolis area. The argument of RDS is that respondents provide names of others in their network, and information about the networks can be used to obtain population parameter estimates. This is done by having initial members of a target population act as seeds who nominate others (up to 3), with both seeds and referred persons receiving incentives. In this approach, sampling weights are calculated using information about the respondent's network size. Getting the weights requires referral chains of at least 4 - 5 waves and an estimate of each respondent's network size. Todd described their protocol as they apply it to the Hmong populations of Minneapolis. Todd concluded by comparing some of the strengths and weaknesses of this approach. Discussion followed, with Ginny suggesting several publications by Thompson on closely related sampling procedures.
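
A minimal sketch of the weighting idea, using one common RDS estimator (the Volz-Heckathorn, or RDS-II, estimator) in which each respondent is weighted by the inverse of his or her reported network size; the numbers below are invented for illustration and this is not necessarily the estimator used in the Minneapolis studies:

```python
# Illustrative only: RDS-II (Volz-Heckathorn) style estimate of a population
# proportion. Each respondent is weighted by 1 / reported network size,
# reflecting that people with larger networks are more likely to be recruited.
respondents = [
    # (has_trait, reported_network_size) -- made-up data
    (True, 5), (False, 12), (True, 3), (False, 8), (True, 4), (False, 20),
]

weights = [1.0 / degree for _, degree in respondents]

numerator = sum(w for (trait, _), w in zip(respondents, weights) if trait)
estimate = numerator / sum(weights)

print(f"Estimated proportion with trait: {estimate:.3f}")
```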

Steve Swinford reviewed his projects, including his paper for the Midwest Sociological Society meetings in April 2010. Steve does a census of a school district on a given day and asks about drug and alcohol use (an epidemiology model) over the last 30 days or over the past year. They end up with different estimates, with the epidemiology approach being higher (30 days x months). The survey is implemented annually, and they are planning to drop some of the questions but are asking which ones; they are trying to close the gap between the measures. They do not have a panel, but they collect data every year, and he continues to work on this study. He also reviewed other studies underway. Steve is working on a survey of crime victimization; it is a statewide mail-out of 8 pages. He has money to do a letter and a survey in one wave, but may not have enough money for a 2nd mail-out. Ginny made a concrete suggestion: do a double sampling design, with a subsample of the non-respondents; that will give a better estimate. Don: incentive? Steve: No, can't do. The study is modeled after studies in other states, including Minnesota. Steve will be on sabbatical next year, and he outlined his sabbatical agenda.
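
Ginny's double-sampling suggestion amounts to following up intensively with a random subsample of the first-wave non-respondents and then combining the two groups in proportion to their shares of the original sample. A minimal sketch with invented numbers (not Steve's data):

```python
# Hedged illustration of double sampling for non-response (Hansen-Hurwitz style).
# All figures are hypothetical, chosen only to show the arithmetic.
n_total = 5000          # original mail-out
n_resp = 2750           # first-phase respondents
mean_resp = 0.18        # e.g., victimization rate among first-phase respondents

n_nonresp = n_total - n_resp
n_sub = 300             # intensive follow-up subsample of non-respondents
mean_sub = 0.11         # rate observed in the follow-up subsample

# Combine, weighting each group by its share of the original sample;
# the subsample mean stands in for all non-respondents.
estimate = (n_resp / n_total) * mean_resp + (n_nonresp / n_total) * mean_sub
print(f"Double-sampling estimate: {estimate:.3f}")  # vs. respondents-only 0.180
```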

Fred Lorenz outlined his research on part-whole questions in which a general question either precedes or follows a series of related specific items. He drew from two data sources: four replications of the Oregon DOT surveys, which Ginny referred to earlier, and mail and telephone versions of an Iowa study of satisfaction with local governments. The core of Fred's study was to look for evidence of response sets by conceiving of the specific items as manifestations or symptoms of an underlying latent variable. From this perspective, variance in each specific item is partitioned into common variance and error variance using confirmatory factor methods; patterns in the systematic error variance may provide insight into how respondents answer questionnaires. Lorenz estimated a model in which the error variance in the 2nd specific item was correlated with the 1st, the 3rd with the 2nd, and so on. The result, replicated across samples, was to improve the fit of the model to a greater extent than would be expected by chance, as judged by a specially designed randomization test. The results suggest that respondents work their way through a questionnaire in a systematic fashion such that the response to one specific item shapes their response to the next item. Discussion followed, with Don and Todd providing citations that could link this work back to earlier work on response sets.
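
The carry-over pattern Fred described, in which the error in one specific item is correlated with the error in the next, can be illustrated with a small simulation; this is only a toy illustration of the pattern, not his confirmatory factor model or the randomization test:

```python
# Toy simulation: items share a common latent factor, but their errors carry
# over from one item to the next (first-order serial correlation). Residuals
# of adjacent items then correlate more strongly than residuals of distant items.
import numpy as np

rng = np.random.default_rng(0)
n, k, rho = 2000, 6, 0.4          # respondents, specific items, carry-over strength

factor = rng.normal(size=(n, 1))  # latent attitude shared by all items

errors = np.zeros((n, k))
errors[:, 0] = rng.normal(size=n)
for j in range(1, k):
    errors[:, j] = rho * errors[:, j - 1] + rng.normal(size=n)

items = factor + errors

# Crude residuals: remove each respondent's average item score as a stand-in
# for the common factor, then compare adjacent vs. distant residual correlations.
resid = items - items.mean(axis=1, keepdims=True)
adjacent = np.mean([np.corrcoef(resid[:, j], resid[:, j + 1])[0, 1] for j in range(k - 1)])
distant = np.corrcoef(resid[:, 0], resid[:, k - 1])[0, 1]
print(f"adjacent residual r = {adjacent:.2f}, distant residual r = {distant:.2f}")
```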

Friday afternoon we returned to several themes initiated earlier Friday morning or on Thursday, including a lengthy discussion of how best to frame subsequent research questions about mixed-mode surveys. One theme that recurred repeatedly in discussions of mixed-mode surveys has to do with data quality, and one measure of data quality is item non-response. Glenn provided two slides that demonstrated close parallels between mail and web surveys in counts of item non-response. In one case, about 30% of mail and 45% of web surveys had no missing items. Of the 21 possible items in one questionnaire, Glenn reported that very few returned questionnaires had more than 5 missing items. The largest quantities of missing data were for the open-ended questions.

Glenn also had a handout on getting optimal answers to open-ended questions. Glenn has a co-author who is skilled with qualitative data and thinks of open-ended questions as narrative components that build on Sudman's application of Grice's maxims. They will continue their work on this theme.

Ginny continued with her state report (started the previous day with a discussion of her two experiments). Ginny updated her work on response rates to a survey conducted by the OSU Survey Research Center since 1994. Overall, there has been a decline in response rates over time, from an average of about 70% in April 1994 to about 40% in October 2010. Looking more closely at the data, Ginny fit a spline and recorded slow monthly declines between 2000 and 2005, a precipitous decline between 2005 and 2008, and a less dramatic decline between 2008 and 2010. She concluded that response rates are falling for both males and females; they are lowest for males and for the younger age groups, and declining for all age groups.

Accomplishments

Publications listed in the publications section show the accomplishments of the group. The meeting ended with a discussion of future work, publications, and impacts of the research from the group.

Publications

2010 Publication List

Callegaro, Mario, Yang, Yongwei, Bhola, Dennison S., Dillman, Don A. and Chin, Tzu-Yun. 2009. Response Latency as an Indicator of Optimizing in Online Questionnaires. Survey Methodology Bulletin, No. 103: 5-25.

Constantine, M. L., Todd H. Rockwood, B. A. Schillo, J. W. Castellanos, S. S. Foldes, & J. E. Saul (2009). The relationship between acculturation and knowledge of health harms and benefits associated with smoking in the Latino population of Minnesota. Addictive Behaviors, 34, 980-983.

Constantine, M. L., Todd H. Rockwood, B. A. Schillo, N. Alesci, S. S. Foldes, T. Foldes, Y. Chhith, & J. E. Saul (2010). Exploring the relationship between acculturation and smoking behavior within four Southeast Asian communities of Minnesota. Nicotine & Tobacco Research, 12, 715-723.

Davern, M., D. McAlpine, T. J. Beebe, J. Ziegenfuss, Todd Rockwood & K. T. Call (2010). Are lower response rates hazardous to our health survey? An analysis of three state telephone health surveys. Health Services Research, 45, 1324-1344.

Dillman, Don A., Ulf-Dietrich Reips and Uwe Matzat. 2010. Advice in Surveying the General Public Over the Internet. International Journal of Internet Science 5 (1): 1-4.

Dillman, Don A. and Benjamin L. Messer. 2010. Chapter 17: Mixed-Mode Surveys, in Peter Marsden and James Wright (eds.), Handbook of Survey Methodology. Emerald Publishing Limited: Bingley, United Kingdom. Pp. 551-574.

Cui, Ming, Jared A. Durtschi, M. Brent Donnellan, Frederick O. Lorenz & Rand D. Conger (2010). Intergenerational transmission of relationship aggression: A prospective longitudinal study of observed behavior. Journal of Family Psychology, 24, 688-697.

Israel, Glenn D. (2010). Using web surveys to obtain responses from extension clients: A cautionary tale. Journal of Extension, 48, available at: http://www.joe.org/joe/2010august/a8.php.

Israel, G. D. 2010. Effects of Answer Space Size on Responses to Open-ended Questions in Mail Surveys. Journal of Official Statistics, 26(2), 271-285.

Israel, G. D. 2009. Obtaining Responses by Mail or Web: Response Rates and Data Consequences. JSM Proceedings, Survey Research Methods Section. 5940-5954. Available at: http://www.amstat.org/Sections/Srms/Proceedings/.

Mahon-Haft, Taj and Don A. Dillman. 2010. Does Visual Appeal Matter? Effects of Web Survey Screen Design on Survey Quality. Survey Research Methods 4 (1): 43-59.

Meier, A., Smith, M., and Usinger, J. (2010). Environmental Project Provides Work Experience for Rural Youth. Journal of Extension, 48(3). [Article No. 3IAW3] Article posted on-line June 2010. http://www.joe.org/joe/2010june/iw3.php.

Messer, Benjamin L. and Don A. Dillman. 2010. Using Address-Based Sampling to Survey the General Public by Mail vs. Web plus Mail. Technical Report 10-13. Washington State University Social and Economic Sciences Research Center, Pullman.

Morrison, Rebecca, Don A. Dillman and Leah Melani Christian. 2010. Questionnaire Guidelines for Establishment Surveys. Journal of Official Statistics 26 (1): 43-85.

Rockwood, Todd & M. Constantine (2009). Item and instrument development to assess sexual function and satisfaction in outcome research. International Urogynecology Journal, 20, Supplement 1: S57-64.

Smyth, J. D., Dillman, D. A., Christian, L. M., & O'Neill, A. 2010. Using the Internet to survey small towns and communities: Limitations and possibilities in the early 21st century. American Behavioral Scientist, 53: 1423-1448.

Toepoel, Vera and Don A. Dillman. 2010. Chapter 7: How Visual Design Affects the Interpretability of Survey Questions, in Das, Marcel, Peter Ester and Lars Kaczmirek (eds.), Social Research and the Internet: Advances in Applied Methods and New Research Strategies. Pp. 165-190.

Usinger, J. and Smith, M. (2010). Career Development in the Context of Self-Construction during Adolescence. Journal of Vocational Behavior, 76: 580-591.

Wickrama, K. A. S., R. D. Conger, F. F. Surjadi, & F. O. Lorenz. (2010). Linking early family adversity to young adult mental disorders. In W. Avison, C. S. Aneshensel, S. Schieman & B. Wheaton (Eds.), Recent Advances in Stress Research: Essays in Honor of Leonard I. Pearlin. New York: Springer.

Wilcox, A. S., Giuliano, W. M., & Israel, G. D. 2010. Response Rate, Nonresponse Error, and Item Nonresponse Effects When Using Financial Incentives in Wildlife Questionnaire Surveys. Human Dimensions of Wildlife, 15(4), 288-295.

Impact Statements

  1. Outreach: Most of the experiments reported on in these minutes arise from surveys conducted by WERA 1010 participants for university, state or community agencies. Examples include Glenn's evaluation of extension and his Your Florida Yard and Use Survey, Ginny's surveys for the ODOT, and Don's community surveys and evaluations of university services. Continued collaboration with these agencies motivates new experiments and provides empirically based advice for users of surveys.

Date of Annual Report: 03/10/2012

Report Information

Annual Meeting Dates: 02/23/2012 - 02/24/2012
Period the Report Covers: 10/01/2011 - 09/01/2012

Participants

Brief Summary of Minutes

Minutes
WERA1010: Reduction of Error in Rural and Agricultural Surveys
February 23-24, 2012

The 2012 annual meeting of WERA 1010 was convened by Chair Ginny Lesser at 8:15am on Thursday, February 23 at Tucson InnSuites.

Present were:
Virginia Lesser (Chair: Oregon) lesser@stat.orst.edu
Lou Swanson, Administrative advisor (Colorado State) Louis.Swanson@ColoState.edu

Gerard Kyle (Texas A & M) gkyle@tamu.edu
Billy McKim (Texas A & M) brmckim@tamu.edu
Rob Robertson (New Hampshire) robrobertson@unh.edu
Don Dillman (Washington) dillman@wsu.edu
Glenn Israel (Florida) gdisrael@ufl.edu
Fred Lorenz (Secretary: Iowa) folorenz@iastate.edu
Todd Rockwood (Minnesota) rockw001@umn.edu
Steve Swinford (Montana) swinford@montana.edu

Still interested but couldn't make it:
Patricia Hipple (USDA) PHIPPLE@nifa.usda.gov
John Allen (Utah State) john.allen@usu.edu
Shorna Broussard Allred (New York (Cornell)) srb237@cornell.edu
Dave Peters (Iowa) dpeters@iastate.edu
Marilyn Smith (Nevada) smithm@unce.unr.edu
Fern Willits (Penn State) fkw@psu.edu
Angela Mertig (Tennessee: Middle Tennessee State) amertig@mtsu.edu
Nick Place (University of Maryland) nplace@umd.edu
Wuyang Hu (University of Kentucky) wuyang.hu@uky.edu
Courtney Flint (University of Illinois (Champaign-Urbana)) cflint@uiuc.edu

Retired members:
Jim Christenson (AES Advisor: Arizona) jimc@ag.arizona.edu
John Saltiel (Montana) jsaltiel@gmail.com

Deceased:
Robie Sangster (BLS), 2009
Bob Mason (Oregon State), 2011

Opening details: Ginny opened the meeting. She noted that we are being charged for the conference room this year, which we will divide 10 ways ($27/person). We will pay Ginny, who will pay the bill and sign our registration receipt. Dinner Thursday evening for approximately 14 will be at Jim Christenson's country club. We need to pay Don, who will settle with Jim.

The agenda for the meeting was approved. In brief, we will review state reports and plan for future meetings.

Special request: Lou Swanson, our Administrative Advisor, needs three volunteers to discuss the demographers' WERA proposal. (Three from the committee volunteered: Glenn, Fred & Steve.) Lou is Vice President for Engagement and Director of Extension at Colorado State University. He requested that we meet at another time to avoid a conflict he has in Colorado. We looked into moving the next meeting a week earlier, and in fact have moved it to February 14 - 15, 2013.

State reports.

Don Dillman began by passing around three articles that have been published and appear in the references. Don is now interested in using address-based samples and then convincing people to go to the internet to respond. This is the objective because telephone interviewing is not doing well, with coverage at about 70%. Don discussed different approaches to mixing mail and internet. The strategy: mail is expensive and web is cheap for large samples, so push people to the internet. Also, the demographic groups who are willing to do internet surveys are different (younger, better educated) from those who use mail only, so increasing web responses may reduce non-response bias.

Don also talked about a new methodological study that is underway but not yet published. It involves Pennsylvania, Alabama and Washington and is an attempt to extend the techniques that were effective for pushing respondents to the web at the state level in Washington to other states. Don discussed the design and results so far, noting that Alabama has the lowest response rate, probably due to low aggregate education levels. Treatments are mail only, mail + web, etc. The incentives are $4 plus a 2nd $2 that accompanies the switch to the mail + web treatment after several contacts; sending the 2nd incentive was important. Mail remains highly effective in Alabama and Pennsylvania, but the push towards the web is less effective than Don had hoped. This study also examines a new, stronger push to the web and its influence on the proportion of respondents using the web. Discussion followed, with one important point: incentives lower non-response error; it's not about response rates, it is about non-response error.

Don currently has another study ready to go to the field. It includes Nebraska and examines whether trust in the sponsor, which seemed to be responsible for lowering use of the web in the PA vs. WA comparison, has a significant influence on web response. The design is to send questionnaires from Washington State University to respondents in Washington and Nebraska, and to send questionnaires from Nebraska to respondents in Washington and Nebraska. Jolene Smyth at the University of Nebraska is a collaborator on this study. The hypothesis is that internet responses may suffer from lack of identification with the state university, as evidenced by an expected cross-over difference. Discussion followed about how best to conceive of incentives: are they payments, part of an economic exchange? One consistent finding: the law of diminishing returns is at work; more incentive gets higher returns, but in diminishing amounts.

A question was raised and discussed: What's the effect of the length of the URL? Ginny noted that Oregon has a formula for naming URLs. Don predicted that mail surveys will likely be around for a long time due to computer illiteracy. Todd asked: Why push the web, since mail alone gets a significantly higher response rate? Ginny: in part it's expected; it's about perception. It is only when N is greater than 1000 that it pays to do web-based surveys; web is still much more expensive for smaller surveys. Don noted that he is identified with mail, so he cannot credibly advocate just for mail; if he advocates web, it lends credibility to mail. Discussion also was directed toward item non-response in mail and web surveys. Todd recalled that item non-response is higher on the web; in an American community study, 10% was missing on web and 3% on mail. Ginny observed that mail is better than web, but as more people move to the web, they are getting better at web surveys and expenses are dropping. Glenn: web has the advantage that people volunteer more open-ended comments on the web.

Don noted that in his various experiments the difference in item non-response between web and mail is small, so it is not a major consideration in deciding whether to push more people to the web vs. mail. He has edited a special issue of Survey Practice, which will appear in June. It includes four papers by members of this committee and was developed out of a session he organized for the 2010 American Association for Public Opinion Research (AAPOR) annual conference.

Don turned next to data from the National Park Service in Idaho, covering all park populations. Analysis over 20 years indicates response rates of 75% with little evidence of decline. Tommy Brown showed declines, but with more questions and more questions per page. Don is finishing a paper with Bryan Rookey and others on response rate trends in mail surveys over a 20-year period, retesting some of the ideas developed by Brown with a quite different data set. It has been tentatively accepted, pending some revisions, by Social Science Research.

Glenn Israel reported working on a new mixed-mode survey for the 4th year, and he is still working on open-ended questions. Glenn reported that a new 3x3 experiment is in the field that crosses the amount and timing of incentives. Amounts are $0, $2 and $5; timing is incentive with the pre-letter, with the 1st questionnaire, or with the 2nd questionnaire. No results yet. The survey is about aquatic invasive species, and the respondents are freshwater boaters (under 20 feet, based on vessel registrations) and fishing license holders.
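
For reference, the nine cells of the 3x3 design are simply the crossing of the two factors Glenn listed (incentive amount by timing); a quick enumeration:

```python
# The nine cells of the 3x3 incentive experiment: amount crossed with timing.
from itertools import product

amounts = ["$0", "$2", "$5"]
timings = ["with pre-letter", "with 1st questionnaire", "with 2nd questionnaire"]

for amount, timing in product(amounts, timings):
    print(f"{amount} sent {timing}")
```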

Glenn also reported on a series of mixed-mode surveys in which the motive was to find high response rates with minimal bias using alternative contacts. The 1st experiment compared mail only, mail/web choice, and web preference. Choice included both a mail questionnaire and a URL; web preference had only a URL, but offered a choice on the last contact.

Basic results: mail only 65%; mail/web choice 51% (mail) + 8% (web); and web preference 29% (web) + 23% (mail).

The 2nd study included people who provided both postal and email addresses, with treatments of mail only, web preference, and email. Glenn laid out the alternatives. The essential results: mail only obtained 53%, and email + mail was higher, with email obtaining 54% and mail 8%. Due to small sample sizes, the total response rates were not statistically different.

The 3rd study had postal and email contact information, with mail only, email preference, email complement, and email only treatments. The postal-only and email-only differences were compared. One comparison: the email-only group had high rates of non-contact (14% and 17%) in the two treatments. Glenn displayed differences by contact and response modes. Email/Web only had a lower overall response rate than mail only and the mixtures. Demographic profiles did not differ among treatment groups having both postal and email addresses.

Data from the 4th (2011) study are in, with essentially the same set-up of mixtures. There are problems with email addresses. Glenn showed response rates for groups with both postal and email addresses, where P = postal and E = email. The orders of contact were: all postal contacts (PPPP) = 67%; PEEP = 58%; P..P = 38%; EEEP = 54%; ...P = 36%. Those with only a postal address were at 59% (PPPP), and those with only an email address at 39.8% (EEEE). Response rates were affected slightly by education; otherwise there were few differences. On item non-response, the web gets more complete surveys, especially for open-ended items: demographics were more complete in mail, but open-ended answers were more extensive on the web.

Glenn also reported on 2011 open-ended questions. One item asked for an explanation (what the problem was, what was done with the information, and what the result was) and a second item asked how services could be improved. The experiment tested whether a verbal prompt ("it's very important for us to learn how...") increased the likelihood of a response and the number of words provided when there was a response. The importance cue didn't work for the explanation item but did for the improvement item, and for the improvement item there was an interaction with mode on the average number of words, with the verbal cue having a large impact only on web responses.

Steve Swinford is on sabbatical this year. He reported on his First Year Experience survey, which is linked to other data sets on students at Montana State. This will be presented in Hawaii and Vancouver this coming year. The Montana Crime Victimization Study is a statewide study with 5,000 people randomly selected from the general population. The study had a 55% response rate and 10% dead/returned addresses. The study included a pre-postcard, 2 mailings, an 8-page questionnaire and no incentive. After adjusting, they had close to a 60% effective response rate. Steve works with the Center for Health and Safety Culture, whose work has included the Idaho Transportation Department, plus agencies in Oregon, Minnesota, Alaska and Ontario. He analyzed his data, including pre- and post-tests, in response to evaluations requested by the funding agency. Finally, he is working with 10 communities in Minnesota with interventions surrounding alcohol abuse problems (with non-monetary incentives). One interesting idea: at one school, all students have iPads, so that school is doing a web-based survey, to be conducted on a given day during a given period; the rest of the communities are doing pen & paper. This work does not specifically deal with experiments, but Steve can make gender by grade comparisons.

Todd Rockwood reported that he has been doing variations on respondent driven sampling (a variant of snowball sampling), on which NIH has underwritten research. Respondent driven sampling uses network theory to develop sampling weights. They have tried it with prostitutes in the Twin Cities, and also with Latinos, Native Americans, children with childhood diseases, and samples from HIV-positive populations. It seems to work very well in representing populations. The sampling goal is to develop long referral chains, and they want to extend this to develop social-based and cultural-based sampling. Sometimes it works well; sometimes not. During the past year they have done a lot in the UK, where it is quicker to get address corrections. Todd and his group also do work around translation; using focus groups, they have found that some immigrants try to appear more, and sometimes less, acculturated than they are. Their research is now moving away from focus groups into cognitive and ethnographic work, translating from five different languages. Regarding their research on health and sexual functioning among the elderly, they had a lot of non-response among 80-year-olds regarding sexual functioning, especially the emotional aspects. Further, in responding to long lists of yes/no items, these respondents often mark only the occasional yes that applies and skip the rest (they don't answer no); they seem to interpret lists as check-all-that-apply.

Rob Robertson reported on his active involvement with a variety of stakeholders in applied social science studies, including a study on how to manage bears. In New Hampshire he is engaged with two projects focused on the management of fish and wildlife resources in rural areas. The first is a mail and web-based survey of residents of four NH communities; the survey instruments focus on residents' attitudes towards a variety of bear management programs and policies. The second study is an evaluation of the NHFG Volunteer Turkey Monitoring Program, which is making use of a web-based evaluation tool. A third project is being initiated that will focus on the management and marketing of farmers markets in Strafford and Rockingham counties; it will evaluate the effectiveness of web-based survey instruments and software. A fourth study in the planning stage will collect the data necessary for developing the NH Route 1a/1b Corridor Plan. The principal investigator completed a similar project 10 years ago in cooperation with Rockingham Regional Planning and the NH Department of Transportation. This project will include the establishment and direction of a NH Route 1a/1b Corridor Advisory Committee and the collection of data from key stakeholder groups, including tourists visiting the corridor, residents of the corridor, and the managers and policy makers responsible for developing the Corridor Plan. It will also compare the effectiveness of a pen-and-paper survey versus iPad technology for collecting data from visitors to the corridor.


We adjourned about 4:30 and reconvened at 8:15 on Friday, February 24.

Don passed around the series of articles to be published in Survey Practice, an electronic journal of AAPOR, which will appear in June 2012.

Ginny Lesser continued the state reports by discussing two studies completed and two studies in the field. The 1st deals with a questionnaire program (like Survey Monkey) called LimeSurvey, which is free. It has excellent choices for types of survey questions and provides the ability to assign PIN numbers. Data are also collected on your own server.

Ginny also reported doing a survey on hazardous materials carried on Oregon highways. It followed a restricted stratified cluster sample design; the frame is trucks entering Oregon weigh stations. Hazardous class was treated as the stratum and companies were the clusters. Companies were selected and sent a questionnaire to identify the routes traveled on the selected trip. As a note of information, diamond-shaped logos on trucks indicate hazardous materials, and higher numbers in the diamond indicate greater hazard (this does not include radioactive hazards). They collected data over two 6-month intervals. The data are to be used to identify the types of materials transported on specific highways so that Hazmat teams are prepared for any accidents involving hazardous materials.
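
A minimal sketch of how estimates are typically built up from such a design, with company (cluster) totals expanded within each hazard-class stratum by the inverse sampling fraction; the figures and estimator here are illustrative assumptions, not ODOT's actual procedure:

```python
# Illustrative stratified cluster expansion; all numbers are invented.
# Strata are hazard classes; clusters are companies sampled within each stratum.
strata = {
    # class: (companies_in_stratum, sampled_companies, shipments per sampled company)
    "class_3": (120, 12, [14, 9, 22, 7, 11, 16, 5, 19, 8, 13, 10, 6]),
    "class_8": (45, 9, [3, 7, 2, 5, 4, 6, 1, 8, 2]),
}

total = 0.0
for name, (N_h, n_h, shipments) in strata.items():
    # Expand the sampled-cluster total by the inverse sampling fraction N_h / n_h.
    total += (N_h / n_h) * sum(shipments)

print(f"Estimated shipments across strata: {total:.0f}")
```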

Ginny's second study is in collaboration with the National Oceanic and Atmospheric Administration (NOAA) on estimating (sampling) the fish industry. The tradition is to use intercept studies on the docks, but this does not work well for estimating fish caught at night. A pilot study is currently underway using diaries to collect a panel of data over a year. This was done before in Australia, with over 90% retention over a year. A key to success is coordinating interviewers who develop a relationship with the interviewee to maintain a high response rate. This pilot study is currently in the field.

Ginny has two studies either in the field or going out. One is the ODOT needs study, which will be mail (n = 2738) and web & mail (n = 2738) with up to 5 contacts, and variations on the front cover. The 2nd study (with Steve) is on underage drinking, with mail (n = 900) and web & mail (n = 900) treatments and up to 5 contacts. No incentive is being used because it was too complicated to clear with the IRB.

Ginny updated us on her ongoing study of ODOT response rates, to be presented at AAPOR. She has monthly time series data collected since 1994, and since 2001 data on age and gender have been collected along with the response rate. She reported a piecewise regression that takes into account changes in protocol, including increasing the length of the questionnaire; there was also a change in the visual format of the questionnaire. She reported on the segmented time series. The interpretation is that prior to 2001, response rates declined 1.4% per year; they then declined at a 7% rate between 2001 and 2003, immediately after the questionnaire length was increased; and then declined at 0.55% per year after 2003. Response rates are higher for females and vary by age, with older respondents having higher response rates.
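
The piecewise (segmented) regression Ginny described can be sketched as an ordinary least-squares fit with hinge terms at the protocol-change years; the breakpoints and response-rate series below are placeholders, not her ODOT data:

```python
# Minimal segmented-regression sketch with numpy: the slope is allowed to change
# at two fixed breakpoints (e.g., years when the survey protocol changed).
import numpy as np

t = np.arange(1994, 2012, dtype=float)                          # survey years (placeholder)
rate = 70 - 1.4 * (t - 1994) - 5.0 * np.clip(t - 2001, 0, 2)    # toy response-rate series

b1, b2 = 2001.0, 2003.0
X = np.column_stack([
    np.ones_like(t),            # intercept
    t - t[0],                   # baseline slope
    np.clip(t - b1, 0, None),   # extra slope after the first breakpoint
    np.clip(t - b2, 0, None),   # extra slope after the second breakpoint
])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("intercept, baseline slope, and slope changes:", np.round(coef, 2))
```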

Fred Lorenz reported on general-specific questions using 5 waves of ODOT data and an Iowa community mail questionnaire and telephone interview. He noted that most research on general-specific questions, including research done by our group (Willits and Saltiel, Rural Sociology, 1995; Willits & Ke, Public Opinion Quarterly, 1995), has focused on the relationship between the general question and the specific items. Fred sought to replicate previous results but also to examine the relationship of the specific items to each other. He found that his study has the same patterns of responses as previous studies. In addition, he found evidence of 1st-order serial correlations between adjacent items; that is, answers to the 2nd item on the list of specific items were conditioned by the 1st, the 3rd by the 2nd, and so on. Using structural equations and a permutation test developed specifically for this study, he found evidence that correlating the residuals of adjacent items consistently improved the fit of the model to the data to a greater degree than the reduction you would expect when freeing any random set of residuals. The procedure and a SAS program for doing the permutation test will appear in Structural Equation Modeling.

Gerard Kyle is new to our research group and reported on his activities that relate to our research. National Geographic sponsors trips, and they evaluate the trips using on-site questionnaires, with response rates of about 65% based on a sample of 300. The past trip was in Saguaro National Park near Tucson, and the next will be at Rocky Mountain. A 2nd study is with Texas lakes, where boat ramp users and shoreline residents are interviewed; it is a web/mail combination. A 3rd study is with the Texas Parks & Wildlife Department, using license lists, again with a mail/web combination. About 5% buy licenses by mail, which gives more information and provides better information for conducting web surveys. The last time (2009) they had low and declining response rates. Gerard is also working with a colleague on studies on the Channel Islands off Los Angeles, to replicate an earlier study at Hinchinbrook, Australia. The study has to do with values: respondents are asked to situate values spatially on a map relative to each other and to assign weights to the values. These are very difficult data to assemble, and he is uncomfortable with the design. Question: Is it valid? Todd: there is some literature. Don: suggest cognitive interviews.

Administrative details: Before continuing state reports after lunch on Friday, we discussed editing the draft of the minutes for the annual report, which is due with the list of publications about April 1, 2012. We also agreed to continue with another submission, working with advisor Lou Swanson.

Billy McKim is also new to the group, and he reported on his research at Texas A & M. His work is split between extension and external research, including research for state agencies. First, he did a 2-year evaluation of disaster case management for homeland security in the wake of Hurricane Ike (2008). He did a client survey, which FEMA had not used before. He looked at 34 counties, including Houston, but also very rural areas. He worked with regional councils of governments, plus faith-based organizations. He proposed and implemented a 3-phase approach. Data were collected from multiple sources and linked by case ID number. Privacy Act restrictions required that respondents could not be identified by name or street address; respondents were sent questionnaires by case ID number so that when he assembled the data no one could match ID numbers with names. The questionnaires were designed to be read by people with 3rd-5th grade reading skills. The disaster case management project served 20,308 individuals. They took a stratified random sample of 7,000 invited respondents, which had a 21% frame error, thus reducing the effective sample size to about 5,500. The US Postal Service refused to deliver to several zip codes for unknown reasons. There were 2,139 responses, for a response rate of 39%. Their process was a postcard pre-notice, a packet, a reminder, a replacement packet, and then another replacement packet. He showed the homeland security template cover letter, as approved by the state and released by each NGO. He showed the approved questionnaire. One was the client questionnaire (version 10 as approved), and many questions dealt with satisfaction with FEMA and the state. Billy reported differences in satisfaction by type of organization, and he showed other data.
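
As a quick check, the reported figures are arithmetically consistent (using Billy's own numbers):

```python
# Quick arithmetic check of the reported sample figures.
invited = 7000
frame_error_rate = 0.21            # 21% of sampled addresses were frame errors
effective = invited * (1 - frame_error_rate)
responses = 2139

print(f"effective sample size = {effective:.0f}")        # about 5,530, reported as ~5,500
print(f"response rate = {responses / effective:.1%}")    # about 38.7%, reported as 39%
```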

Billy also studies extension and ag teachers in 37 states, all by web. He did an experiment comparing radio buttons versus sliders. There were comments: Don noted that Randall Thomas compared sliders to other scales, and they may not be worth it. Billy also did another study of teachers in 5 states; only 1 wanted a web survey, and all the others wanted a packet. He also had an H1N1 mixed-mode survey: he sent 10,000 mail questionnaires and used a Qualtrics panel for the web and RDD for the telephone questionnaire. They wanted 10,000 because they expected a 10% response rate; they ended up with a 19.5% response rate. The study was about avoiding H1N1 and how respondents managed it: what messages did they get, and where from?

Miscellaneous discussion followed after Billy's report. We ended on an especially encouraging note: collaboration seems possible, especially among Gerard, Billy and Glenn on experiments dealing with open-ended questions and the common characteristics of experiment station and extension work.

Outreach: Most of our discussion focused on fundamental research issues. However, Glenn Israel has been active in extension and will incorporate WERA 1010 results into his extension reports. Glenn gave a seminar on mixed-mode methods for five participants, including several from the Survey Research Center of the Bureau of Economic and Business Research, University of Florida.

Our next meeting is scheduled for February 14th and 15th, 2013 at the Best Western Tucson Inn Suites on North Oracle Road. Meeting adjourned at 3:30pm.

Minutes respectfully submitted March 6, 2012


Frederick O. Lorenz, Secretary, WERA1010



Publications released during calendar 2011

Conger, R. D., Cui, M., & Lorenz, F. O. (2011). Economic conditions in family of origin and offspring's romantic relationships in emerging adulthood (pp. 101-122). In F. D. Fincham & M. Cui (Eds.), Romantic relationships in emerging adulthood. New York: Cambridge University Press.

Cui, M., Wickrama, K. A. S., Lorenz, F. O., & Conger, R. D. (2011). Linking parental divorce and marital discord to the timing of young adults' marriage and cohabitation (pp. 123-141). In F. D. Fincham & M. Cui (Eds.), Romantic relationships in emerging adulthood. New York: Cambridge University Press.

Durtschi, J. A., Fincham, F. D., Cui, M., Lorenz, F. O., & Conger, R. D. (2011). Dyadic processes in early marriage: Attributions, behavior, and marital quality. Family Relations, 60, 421-434.

Israel, G. D. 2011. Strategies for Obtaining Survey Responses from Extension Clients: Exploring the Role of E-mail Requests. Journal of Extension, 49(3), available at: http://www.joe.org/joe/2011june/a7.php.

Lesser, V.M., D.K. Yang, and L. Newton. 2011. Assessing Opinions Based on a Mail and a Mixed-Mode Survey. Human Dimensions of Wildlife 16(3).

McKim, B. R., & Saucier, P. R. (2011). Agricultural mechanics laboratory management professional development needs of Wyoming secondary agriculture teachers. Journal of Agricultural Education, 52(3), 75-86. doi: 10.5032/jae.2011.03075

McKim, B. R., Rutherford, T. A., Torres, R. M., & Murphy, T. H. (2011). Organizational climate of the American Association for Agricultural Education. Journal of Agricultural Education, 52(3), 87-99. doi: 10.5032/jae.2011.03087

McKim, B. R., & Torres, R. M. (2011). Perceptions of Missouri 4-H youth development personnel regarding interorganizational cooperative behavior. Journal of Extension, 49(4). Available at http://www.joe.org/joe/2011august/a9.php

McKim, B. R., & Saucier, P. R. (2011). An Excel-based mean weighted discrepancy score calculator. Journal of Extension, 49(2). Available at http://www.joe.org/joe/2011april/tt8.php

Messer, Benjamin, L. and Don A. Dillman, 2011. "Chapter 14. Comparing Urban and Rural Quality of Life in Washington," in Marans, Robert M. and Robert Stimson, Urban Quality of Life: Implications for Policy, Planning and Research. Springer Books.

Messer, Benjamin L. and Don A. Dillman. 2011. Surveying the General Public Over the Internet Using Address-Based Sampling and Mail Contact Procedures. Public Opinion Quarterly 75 (3): 429-457.

Millar, Morgan M. and Don A. Dillman. 2011. Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly 75 (2): 249-269.

Munoz-Hernandez, B., V.M. Lesser, and Ruben Smith. 2011. Applying Multiple Imputation with Geostatistic Models to Account for Item Nonresponse in Environmental Data. Journal of Modern Applied Statistical Methods 9(2).

Saucier, P. R., & McKim, B. R. (2011). Assessing the learning needs of student teachers in Texas regarding management of the agricultural mechanics laboratory: Implications for the professional development of early career teachers in agricultural education. Journal of Agricultural Education, 52(4), 24-43. doi: 10.5032/jae.2011.04024

Surjadi, F. F., Lorenz, F. O., Wickrama, K. A. S. & Conger, R. D. (2011). Parental support, partner support, and the trajectories of mastery from adolescence to early adulthood. Journal of Adolescence, 34, 619-628. PMC3043113

Toepoel, Vera and Don A. Dillman. 2011. Words, Numbers and Visual Heuristics in Web Surveys: Is there a Hierarchy of Importance? Social Science Computer Review 29 (2): 193-207.

Yongwei Yang, Mario Callegaro, Dennison S. Bhola, Don A. Dillman. 2011. Comparing IVR and Web administration in structured interviews utilizing rating scales: Exploring the role of motivation as a moderator to mode effects. International Journal of Social Research Methodology 14 (1): 1-15

Accomplishments

Publications

Impact Statements
