There’s already been a ton of controversy around the Don’t Ask Don’t Tell survey. You think you know how bad it is?
You have no idea.
Pending continued interest/a non-dead horse, we’ll be breaking this shit down for you next week using our in-house Statistical Expert (by default, Intern Hot Laura, ’cause uh, she’s a Soc Major and has some textbooks handy) and our in-house Investigative Journalist (by default, recent Journalism Graduate Sarah because she’s not afraid of the telephone) … but we really can’t hold out that long to at least begin digging into the worst Survey we’ve ever seen.
Yes, our personal background in Survey and Research Methodology is relatively shoddy. My U-Mich Statistics 350 course convened at 10 AM and my teacher’s voice grated on my soul, therefore I made the executive decision to never attend class and take it pass/fail (thus securing my stellar GPA). Meanwhile in Missouri, Sarah’s teacher stopped holding her Statistics Class after three weeks because he declared himself unfit to teach it. Everyone got an “A.” Oh, the Midwest.
However. Had we kept up with our coursework and perhaps been asked to submit a survey of our own using the principles learned in Sociology and Statistics, we have a 95% confidence interval that we would’ve come up with something WAY BETTER THAN THIS.
Furthermore, we’ve got a shit-ton of probability density that this DADT survey would’ve earned the U.S. Government an F. Alexander Nicholson, executive director of Servicemembers United, agrees:
“The survey isn’t even just slightly bad. It’s far more skewed than we even expected it to be, given the working group’s commitment to staying neutral.”
The Pentagon has responded to the accusations of bias by saying that those things are redic, because I mean c’mon, they hired an outside firm. And um… far be it from us to ever defend a giant corporation with massive influence over government policy, but the outside firm they hired, Westat, appears more or less impenetrable. Founded in 1963, the Maryland-based, employee-owned company has conducted thousands of surveys for the government and other bodies with relatively stalwart methodology. Its only lobbying dollars have gone towards, predictably, funding for a survey on Children’s Health, which doesn’t scream SKETCHY to anyone.
In fact, Westat has historically been one step ahead of the government — like in 1999, when the White House Office of National Drug Control Policy solicited a $42.7 million study about anti-drug campaigns and education from Westat. The results of the November 1999 – June 2004 study were that anti-drug campaigns weren’t working. As explained in Slate:
Five years and $43 million to show that a billion-dollar ad campaign doesn’t work? That’s bad. But perhaps worse, and as yet unreported, NIDA and the White House drug office sat on the Westat report for a year and a half beginning in early 2005—while spending $220 million on the anti-marijuana ads in fiscal years 2005 and 2006.
If Westat is secretly populated by biased conservatives, they’ve gone remarkably unchecked for quite some time.
Regardless, someone f*cked up big time, and here’s one example of exactly how they did that.
DADT Survey: All About Leading Questions & Response Bias
[click to enlarge]
As this handy online crash-course in AP Stats explains, this is “Response Bias” because it is a “Leading Question.”
Leading questions. The wording of the question may be loaded in some way to unduly favor one response over another. For example, a satisfaction survey may ask the respondent to indicate whether she is satisfied, dissatisfied, or very dissatisfied. By giving the respondent one response option to express satisfaction and two response options to express dissatisfaction, this survey question is biased toward getting a dissatisfied response.
By giving the respondent four options for dissatisfaction/discomfort, one option for neutrality, zero options for comfort, one option for who-knows-what and one for “whatever,” this survey question is biased toward getting a dissatisfied response.
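For the quantitatively inclined, the imbalance we just counted can be sketched in a few lines of Python. Note the valence labels below are our paraphrase of the post’s characterization of the answer choices, not the survey’s actual wording:

```python
# A back-of-the-envelope check of the option imbalance counted above.
# The valence labels are our paraphrase, not the survey's actual answers.
from collections import Counter

# Four flavors of discomfort, one neutral, zero comfort, plus the
# "who-knows-what" and "whatever" options described in the text.
option_valences = [
    "discomfort", "discomfort", "discomfort", "discomfort",
    "neutral",
    "unclear",   # the "who-knows-what" option
    "apathy",    # the "whatever" option
]

counts = Counter(option_valences)
# Discomfort outnumbers comfort 4 to 0 -- a structurally leading question.
assert counts["discomfort"] == 4 and counts.get("comfort", 0) == 0

# For contrast: a balanced five-point Likert scale pairs each negative
# option with a positive one, so neither direction is favored.
balanced = [
    "very uncomfortable", "uncomfortable", "neutral",
    "comfortable", "very comfortable",
]
negatives = sum("uncomfortable" in o for o in balanced)
positives = sum("uncomfortable" not in o and "comfortable" in o for o in balanced)
assert negatives == positives == 2
```

Swap in any question’s answer choices: if one valence outnumbers its opposite, the AP Stats crash-course above would flag it as a leading question.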
Language, people, language! This question tells the survey-taker how to feel about living near/with service-members by labeling it a “situation” (defined as a “condition, case or plight”) which needs to be “handled.” “Living with a homo” is officially qualified before the question is even asked — a question that, perhaps, HAS NOTHING TO DO WITH ANYTHING EVER.
And again, we have three options for dissatisfaction/discomfort, one for neutrality, one option for who-knows-what and one for “whatever.” So this question is SUPER biased toward getting a dissatisfied response.
This one’s especially special for the giant gulf of responses between “varying levels of discomfort” and “soooooooo comfortable that I actually want to be friends.” Let’s not even touch the phrase “like any other neighbors” which is a subjective qualification (e.g., I hate other people and therefore ignore my neighbors, so for ME that answer would be a negative response) and just note that there’s no neutral answer here. Oh also, from that AP Stats crash-course again, about response bias?
Social desirability. Most people like to present themselves in a favorable light, so they will be reluctant to admit to unsavory attitudes or illegal activities in a survey, particularly if survey results are not confidential. Instead, their responses may be biased toward what they believe is socially desirable.
In order to balance out the responses, we have re-written the question. Please note that responses are listed in order from very comfortable to very uncomfortable, ending with the apathy and the mysterious “something else” (which, at least in my case, would be “inquire about possibilities for an erotic third,” but for other soldiers might be “continue sexually harassing my female co-servicemembers, just like always”), just like we would’ve learned to do if we’d gone to class.