I spent a career trying to figure out what was important to customers. I dealt with engineers who thought that their fully digital widget would sell itself. I dealt with sales people who said their customers were clamoring for the hyperbole-infused widget we didn’t make. And I dealt with customers. Invariably, the first thing out of the customers’ mouths was “What’s it cost?”
Of course, not all engineers or sales people fit neatly into those categories. And to my mind, the ones who didn’t fit were the ones who were most successful. Easiest to work with, too. They didn’t let their preconceived notions keep them from getting and using valuable information.
Where am I going with this, you ask?
Well, not long ago I got a Jordan Visioning Survey questionnaire with a city newsletter. It took me all of two sentences to see that the questions were chosen and written to guide responses. If the survey was prepared by a professional market research firm, they clearly were told what City Hall wanted to hear.
If the survey was done in-house, it was clearly done by people who didn’t understand that cognitive assumptions affect research outcomes. (A less charitable thought might be that they knew what result they wanted, and set up the survey to get it.)
Those of us who were awake in Mr. Kraupa’s high school general science class learned that the scientific method involves formulating a theory, and testing it with skepticism. Those of us who read the textbook for Dr. Mueller’s psych class learned that it’s possible — easy, actually — to knowingly or unknowingly influence test results based on a number of factors including audience selection, question selection, question placement, and use of leading words.
From the city survey: “Desired roundabout at the intersection...” is an example of leading wording. The word “desired” nudges respondents toward a favorable response to the item.
Placing the roundabout question first on the survey ensures that fresh eyes see the item. That’s not necessarily a bad thing, but it’s still a means of skewing the results.
The second choice in the first section is “Desired grade separation/interchange at... .”
Again, the use of the word “desired” is leading. And how many respondents know what a grade separation/interchange is?
Perhaps the most telling thing about these first items is what’s not there: no other choices are offered. Respondents who feel other items are more important must either pick from the pre-selected options or take the trouble to write in an alternative.
In one section, the survey asks what types of projects the city should consider for tax increment financing (TIF) or tax abatements. If this survey were not skewed, there would be a “none” option in this section. Instead, respondents who think the city needs taxpayers, not tax takers, are again required to write in their own response.
In another section, no mention is made of how the options will be funded. The implicit assumption seems to be that all options will be tax-funded; funding via user fees is never mentioned.
One last thing. Nowhere in this survey do we see anything about the cost of or funding sources for anything. Surely, where the money will come from is an important part of planning for five or 10 years into the future.
Which box do you suppose people would check if they were given the options: 🗆 Keep taxes the same, 🗆 Increase taxes 5%, 🗆 Lower taxes, 🗆 Other?
What do you think people would choose if they were given the options: 🗆 Reduce per capita debt, 🗆 Increase per capita debt by 10%, 🗆 Keep per capita debt the same?
“The answers you get from literature depend on the questions you pose.”
- Margaret Atwood
P.S. When did “visioning” become a word? In my opinion, it’s just another example of bureaubabble.