10 Questions
you should ask about poll results.
Polls are a good source of information about public opinion, and can be valuable tools as a basis for accurate, informative news stories. However, it is important to be satisfied that poll results are accurate before they are published or taken seriously. To help distinguish properly conducted scientific polls, AIMRO have put together 10 questions to ask before reporting any results. This publication is designed to help you judge which poll results you should take seriously.
1. Who did the poll?
Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted poll will follow established procedures designed to minimise error.
Check whether the company is a member of AIMRO or ESOMAR, and is a legitimate market research organisation that conducts research professionally beyond published polls.
2. Why are polls conducted?
Many organisations want to learn what the public thinks. Those organisations include the government itself, the political parties, companies, non-profit and charitable organisations, and the news media. In a democracy, knowing public opinion helps parties to choose an agenda to campaign on, and gives government information on whether their policies are approved by the electorate.
Regardless of who commissions the poll, all polls need to be done well, otherwise they will give a distorted view of opinion (for example, if news organisations only interview people who visit their website, they are missing a large component of the public).
Finally, the public itself should care about national opinion. Since this information is being used by decision-makers, well-done opinion polls that are made public give the people access to the same type of information that governments and politicians use. Polls provide information that may affect the public. Well-done and properly reported-on opinion polls democratize information.
3. How were people chosen for the poll?
One of the key reasons that some polls reflect public opinion accurately and other polls are a poor representation of opinion is how people were chosen to be interviewed.
In scientific polls, the pollster uses a specific statistical method for picking respondents.
In unscientific polls, respondents either pick themselves to participate, by phoning in or opting in, or people are called or stopped on the street without any statistical method or safeguards in place to ensure that the people picked for the poll represent the population in question.
It’s no good just ringing 500 or 1,000 people from your address book for a poll, as this will only be representative of your address book, not everyone in the country.
To make sure a poll reaches an accurate sample of people, you need to ensure that everyone you want to represent has an equal chance of being selected. Even then you also need to make sure the final sample spoken to represents your target.
Reputable polling companies use a combination of random/selective sampling and quotas to obtain accurate polls.
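The difference between random selection and self-selection can be sketched with a short simulation. This is purely illustrative and not from the AIMRO text: the electorate size, the 40% support figure, and the opt-in rates are all invented for the example.

```python
import random

# Illustrative only: a synthetic electorate in which exactly 40%
# support "Party A" (all figures invented for this example).
random.seed(42)
population = ["A"] * 400_000 + ["Other"] * 600_000

# Scientific approach: every member of the population has an equal
# chance of being selected.
sample = random.sample(population, 1000)
estimate = sample.count("A") / len(sample)
print(f"True support 40.0%, random-sample estimate: {estimate:.1%}")

# Unscientific approach: respondents pick themselves, and one group is
# keener to take part (here Party A supporters opt in twice as often).
opted_in = [v for v in population
            if random.random() < (0.02 if v == "A" else 0.01)]
biased = random.sample(opted_in, 1000)
biased_estimate = biased.count("A") / len(biased)
print(f"Self-selected estimate: {biased_estimate:.1%}")
```

The random sample lands close to the true 40%, while the self-selected sample drifts well above it, because the more enthusiastic group is over-represented; no sample size fixes that.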
4. How many people were interviewed for the survey?
Bigger isn’t necessarily better.
Because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal.
However, a common trap is assuming that this means "more is automatically better." While it is true that the more people interviewed in a scientific survey, the smaller the sampling error, a poorly conducted poll with a very large sample can still give you a very wrong result. In that case, factors other than sample size may be more important in judging the quality of a survey.
AIMRO advises that for a properly conducted scientific national poll, a minimum sample of 1000 interviews is required and 500 interviews for a published local area poll.
5. How can a poll of 1000 people be representative of all the electorate?
As long as a sample is taken randomly and quotas used to ensure that it is representative of all adults, the size of the universe doesn’t matter; rather the margin of error is based on the size of the sample.
To give an example of this, if you make a cake with lots of different ingredients, as long as it is mixed properly, you do not need to eat the whole cake to see what it tastes like; just a small slice will tell you within a margin of error what the rest of the cake will taste like.
This is also well illustrated by a quote from a famous US market researcher: “If you don’t believe in random sampling, next time you are in for a blood test, ask the doctor to take it all.”
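The point that the sample size, not the universe size, drives the error can be checked with the standard formula for a proportion's margin of error. The universe size only enters through the finite-population correction, which is negligible for any large population; the population figures below are chosen purely for illustration.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for a proportion, with the finite
    population correction; p = 0.5 is the worst case."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# A random sample of 1,000 is almost equally precise whether the
# universe is a large city or a whole national electorate:
for N in (100_000, 1_000_000, 3_500_000):
    print(f"N = {N:>9,}: margin of error = {margin_of_error(1000, N):.2%}")
```

All three figures come out at roughly 3.1%: multiplying the universe by thirty-five barely moves the result.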
6. What is the possible error for the poll results?
A properly conducted poll of 1,000 interviews has a margin of error of just + or - 3% at the 95% confidence level. Increasing the sample size reduces the error only slowly beyond this point; for example, a sample of 2,000 interviews has a margin of error of around + or - 2%.
Local or regional polls can be conducted on a slightly smaller sample size of 500 interviews. However, this does increase the possible error to + or - 4.5%.
Unscientific polls, conducted without random selection or proper quotas on who is interviewed, will have a much higher error and may produce very misleading findings.
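The figures quoted above follow from the standard worst-case formula for a 95% margin of error, 1.96 × √(p(1−p)/n) with p = 0.5. A quick arithmetic check (not part of the AIMRO text):

```python
import math

def moe(n, p=0.5, z=1.96):
    # Worst-case (p = 0.5) margin of error at the 95% confidence level.
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(f"n = {n:>4}: + or - {moe(n):.1%}")
```

This gives roughly 4.4%, 3.1% and 2.2%, matching the + or - 4.5%, 3% and 2% figures cited for samples of 500, 1,000 and 2,000.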
7. How were the interviews conducted?
Firstly, it is important to make sure that the poll was conducted outbound by a polling company, rather than being a phone-in or self-completion poll. With an outbound poll, the polling company decides whether a person should be interviewed or not.
These types of polls are normally conducted face to face, by phone or online. All methods are valid means of conducting polls, as long as they follow the scientific approach set out in the previous points – random selection, ensuring everyone the poll aims to represent can be selected, and setting quotas to keep the poll representative.
8. What area (national or region/constituency) were the people chosen from?
Because polls aim to represent certain audiences, it is also important to know who the poll is trying to represent.
For example, you need to know if a sample was drawn from among all adults, or just from those in one constituency or in one city.
A survey of people in Dublin can only reflect the opinions of people in Dublin – not of all adults.
9. When was the poll done?
Events have a dramatic impact on poll results. Your interpretation of a poll should depend on when it was conducted relative to key events. Even the freshest poll results can be overtaken by events.
Poll results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion.
10. What questions were asked?
Reputable polling companies always ensure that questions are phrased so that they do not bias the result one way or the other, and allow the voter to agree or disagree with the topic.
Perhaps the best test of any poll question is your reaction to it. On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question?