By Emre Erdoğan
Istanbul Bilgi University
My career started as an interviewer when I was a senior in the prestigious political science department of Boğaziçi University. Professor Yılmaz Esmer, a prominent scholar in the field of public opinion polling, a member of the Executive Committee of the World Values Survey Association and a longtime friend of Ronald Inglehart, allowed me to attend his seminar course with a small prerequisite: joining a team of interviewers to conduct a public opinion survey on the religious attitudes of citizens in Konya, the most religious city in Turkey. My short adventure in Konya as an interviewer (I conducted 24 interviews in three days) was enough to convince me to spend the rest of my life asking questions of people I would never meet again. Like Newton, I felt myself “playing on the sea shore, and diverting myself now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me”, with one small difference: my shells were the opinions of individuals. Every interview I completed was a voyage of discovery into unknown seas of public opinion, and the commonalities and the exceptions were equally exciting to me. With each interview I was laying another brick, and building a solid palace in Ultima Thule seemed only a matter of time.
It has been about 20 years since my first interview with a very religious lady in Konya, and building my palace no longer seems possible. During my career I have not only worked as a practitioner of public opinion polling; I have also had the opportunity to teach social statistics and research methods courses at prominent Turkish universities, at both the undergraduate and graduate levels. This allowed me to devote a significant portion of my time to tutoring students, conducting academic work and publishing on public opinion and foreign policy, political participation, social capital and similar themes in political science.
Both my practice in the field and what I learned from my rapidly transforming discipline showed me clearly that building a palace out of sand bricks was impossible. I personally observed, and theoretically confirmed, that citizens’ opinions are no more than bubbles floating in the air, and that what we can grasp through field research is only a shadow of what they have in their minds. Moreover, in the majority of cases, we ourselves build opinions simply by asking questions. Public opinion polls and other field surveys are good at collecting the information researchers want to collect, but they cannot shine a light into the darkness of uncertainty.
When I became familiar with the works of Cannell, Tourangeau, Kahneman, Tversky, Lodge, Taber and other scholars, I realized that the answers obtained through survey research are constructed within milliseconds and are highly distorted in every possible way, by innumerable factors ranging from social pressures to question wording. I also observed that a largely meaningless discussion about sampling error (if your sample is not probabilistic, you cannot talk about a margin of error at all) was far more salient for those interested in conducting surveys. Other types of error, such as reporting, framing or nonresponse error, are generally neglected by both academics and practitioners. Almost every academic piece devotes a paragraph to the margin of error of the sampling design, but discussions of how the topology of the questionnaire affected participants’ responses are hard to come by.
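For readers who have not met the calculation, the figure usually quoted in poll reports is the half-width of a 95 percent confidence interval for a proportion under simple random sampling; the specific numbers below (a sample of 1,000 and a proportion of 0.5) are an illustrative assumption of mine, not taken from any particular poll:

$$\text{MOE}_{95\%} \;=\; 1.96\,\sqrt{\frac{p(1-p)}{n}} \;\approx\; 1.96\,\sqrt{\frac{0.5 \times 0.5}{1000}} \;\approx\; 0.031 \quad \text{(about } \pm 3 \text{ points)}$$

The derivation presumes a probability sample with known selection probabilities; quota or convenience samples do not meet that condition, which is why quoting such a figure for them says little about the real quality of the data.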
My personal experience showed that these other errors are far more severe than the oddly calculated margin of error, and that it is almost impossible to prepare an error-free questionnaire. Each questionnaire is only one of innumerable possible alternatives, and every one of them is equally wrong. For a well-trained practitioner, the only solution is to minimize the number of errors and to avoid the obvious ones, as documented by Fowler and other scholars.
The complexity of the response process drew me to read everything I could in the fields of cognitive psychology, cognitive neuroscience and political psychology. This uneasy challenge did not change my pessimism about the possibility of grasping the “real” opinions of citizens, because such opinions do not exist, but I am convinced that collaboration between these fields can help us reduce our error.
The newly emerging field of political psychology, which attracts the brightest minds in the academy and is becoming ever more interesting to the other social sciences, owes its success to enriching theoretical discussions about the political animals called human beings with very well designed empirical work. Its experiments, in the laboratory or in the field, are elegant enough to earn the jealousy of evolutionary biologists; its instruments are so well designed that the most orthodox methodologists cannot find a mistake. Findings published in the most prestigious journals, such as Nature or Science, attract thousands of page views and become the most popular memes in cyberspace. But the discipline has a weak point: its empirical work is generally based on data collected with questionnaires, and these questionnaires are not without error. I know that the political psychology community devotes significant resources to improving measurement quality in empirical work.
However, I believe survey errors remain relatively neglected in the field and require more attention. For young scholars, improving their questionnaires will be just as valuable as running complex statistical models. As W. James said, “A chain is no stronger than its weakest link”, and our major data collection instruments, our questionnaires, are our weakest link.