
Ask the Survey Doctor: Five Deadly Sins of Survey Research Firms in Developing Countries

Insights | Series II | No. 7 | November 2014

Q. We have to prove our programs perform on a tight evaluation budget.  We also have USAID Forward breathing down our necks to use local contractors.  Aren’t the local survey research firms in developing countries able to do the job – and cheaply?  Why bother with costly international consultants?

A. We’ll begin to answer with a story.  A few years ago, a development consultancy working on a project proposal in Afghanistan told us, “We don’t need an international pollster to evaluate this one.  We found a local outfit with good references and that will help us comply with USAID Forward.”

After some nudging, they told us the local firm’s name – and got a shock. “Did you know they just lost a NATO contract there because of unreliable data?” we told the startled proposal leader.  Better safe than sorry, he decided – and brought us in to do quality control. 

This episode illustrates a big risk facing anyone aiming to do survey research in the developing world: bad quality control.  But that is only one of the deadly sins local research partners in developing countries often commit, putting evaluations, assessments, and market studies at grave risk.  Others involve poor methodology, weak capacity, inexperience with development issues, an academic approach, and shallow analysis.  Whether the result of corner-cutting or simple ignorance, these issues can turn an evaluation survey into an expensive flop. 

Non-specialists in polling – as most development team leaders, marketers, and security experts are – don't know what they don't know about survey research or how to spot problems with local firms. Steering you away from these pitfalls – and strengthening local capacity in the process – is why international consultants are still needed in emerging-market survey research.

Here are five of the leading dangers you otherwise face.

  1. Lack of methodological rigor

Weak research methods often result in unrepresentative and unreliable samples. 

For instance, an EU country invested in a big poll of residents of Pakistan's Federally Administered Tribal Areas (FATA) by a local NGO.  The result was a sample that was 75% men – since they are easier to speak to in the conservative Muslim region.  Moreover, most interviews were not done in the FATA at all, but at a bus stop in adjacent Peshawar, so there was no assurance of a random geographic, age, or social mix.  In short, it wasn't really a poll at all – just qualitative research with numbers.

Soon after, to do the first scientific poll in the FATA region, we trained the same NGO’s staff in random sampling techniques. We worked with professional Pakistani pollsters to draw up a sampling plan in the agencies while relying on the local knowledge of the interviewers to avoid risky hot spots.  The result: a sample that was 50% female and similar to other known demographic data.
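The kind of sanity check described above – comparing a sample's demographic mix against known population benchmarks – can be automated. Here is a minimal illustrative sketch (not from the article); the function name, tolerance threshold, and benchmark shares are all hypothetical:

```python
# Illustrative sketch: flag demographic groups whose share of the sample
# deviates from a known population benchmark by more than a tolerance.
# A 75% male sample against a roughly 50/50 population, as in the FATA
# example, would be flagged immediately.

def check_sample_balance(sample_counts, benchmark_shares, tolerance=0.05):
    """Return the observed share for each group that deviates from its
    benchmark share by more than `tolerance` (expressed as a proportion)."""
    total = sum(sample_counts.values())
    flags = {}
    for group, benchmark in benchmark_shares.items():
        share = sample_counts.get(group, 0) / total
        if abs(share - benchmark) > tolerance:
            flags[group] = round(share, 3)
    return flags

# A skewed sample like the one described above:
skewed = check_sample_balance({"male": 750, "female": 250},
                              {"male": 0.5, "female": 0.5})
# Both groups fall outside the tolerance and are flagged.
```

A check like this is no substitute for a proper sampling plan, but it catches gross skew before fieldwork money is spent on analysis.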

Often the errors of local researchers are less obvious, but the effects are no less egregious.  Thus, an NGO supporting democratic political development commissioned a series of polls in Palestine to gauge political opinion.  Unfortunately, the organization they chose did telephone polls – when barely half the population had a phone!  So the other, poorer half of Palestinian opinion was left out, skewing the results so badly as to make them meaningless.  (To avoid this, when we poll in Palestine, we make sure the interviewing is face-to-face.)

 

  2. Inadequate research capacity

Research organizations in many of the poorest developing countries aren’t up to snuff – if they exist.

When we worked on Afghanistan's first national political poll – setting up the Asia Foundation Survey in 2003 – we found the local polling group had 15 male interviewers – and one female!  Far too few men, let alone women, for a national sample.  (And we'll never forget the look on the local exec's face when we told him they needed equal numbers of both sexes.  "Do we have to?" he complained.)

Of course, they had to get many more interviewers, and of both sexes.  So we made them draw up an interviewer recruitment and training plan, to bring candidates to Kabul. Then another roadblock appeared: women from the provinces weren’t allowed by local custom to travel to the capital for training.  So in Kabul, we set up a train-the-trainer program for women who would go out and work with provincial women to prepare them for their roles.  The result: a poll heard round the world. 

In Liberia in 2006, right after the election of President Ellen Johnson Sirleaf in the wake of 25 years of turmoil, we couldn't find an organization that did focus groups, which USAID had commissioned us to do.  However, we were able to track down individuals who had done them five years earlier for a conflict resolution NGO.  After suitable retraining in recruitment and moderation techniques, they ran successful groups in Monrovia and Buchanan.

 

  3. Quality control problems

In many survey problems with local suppliers, the devil is in the data – but you have to know how to spot it. 

Once, preliminary results of a poll in Israel showed all the Arabs were voting for the Likud (right-wing) candidate, not the man from Labor, the pro-peace-process party Israel’s Arab minority normally supported.  This seemed odd – until we thought of something.  On the phone to Tel Aviv we asked: “Were the candidates numbered in the same order on the Hebrew and Arabic questionnaires?” Our embarrassed friends saw the numbers were reversed in the two language versions.  Since Arabs make up 10% of Israel’s electorate, this would have made a huge difference in the results in a close election. 

Other times you need to know when to put your foot down.  Working in Northern Mozambique on an entrepreneurship project, our field firm doing in-depth interviews sent interviews with men only – claiming they couldn't find any businesswomen.  However, when they learned they'd get only 50% of their fee unless they did, they tried again and, amazingly, found outspoken women.  When we were analyzing focus group transcripts from Palestine, we found a lot of missing data – whole questions we'd heard watching the groups but that were omitted from the transcripts.  Here too we made it clear they had to be fixed.  If the local pollsters are working for us, we can keep them on their toes.

 

  4. Inexperience in development and democracy work

Some developing countries – particularly middle-income ones – have market research firms that are methodologically competent but haven't worked much on development or democracy issues.  Market research firms follow the money.  This is reflected in their sampling, questionnaires, and methodologies, which are focused on the affluent urban populations that form the high-end market in the developing world.  At best, they tend to have data from one country only – the one where they are based.

When we were working on a voter education poll in South Africa, we learned the local market researchers we were working with normally ignored the burgeoning informal shack settlements outside the cities.  These settlements were dangerous and unmapped, they said – but the shack-dwellers were also half the urban black population of the province!  So they were included.

In Mali, we were working on a national poll with a marketing firm that worked in the towns.  To reach rural people, the questionnaire had to be translated into the local language – but it had no written form.  We had to develop a system of transliteration so that the questions could be standardized for every respondent.

 

  5. Lack of strategic or comparative analysis

Many research outfits in emerging markets are either academics-turned-pollsters or local NGOs.  They may know how to take a decent random sample and get interviewers where they need to be, but they often don’t know how to do analysis much beyond the topline (frequencies) and a few demographics.  Their experience, and data, are limited to the countries where they are based.  

In Poland, for example, we worked with local sociologists who had set up their own polling firm, to track public views of the rollout of the new health insurance system.  (It was even more unpopular than Obamacare at that point!)  But they had never done communications work.  So not only did they not know how to do exploratory conceptual work or message pre-testing, but they were sure it could not be done.  (We did it, and showed how to fix the broken system in the public eye.)

In other cases, too many to count, we've seen local firm reports that just list the frequencies, gender, age, and one or two other demographic variables.  Regression modelling, factor analysis, correlations, comparisons to other countries, and even more sophisticated cross-tabulations are missing.  So are analyses of key driver variables and actionable conclusions on the program and how it can be improved.  Researchers doing this kind of polling should be hauled up for professional misconduct – but it takes a professional to see it.
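The simplest step beyond topline frequencies is a cross-tabulation – breaking responses out by a demographic or other grouping variable. The following is an illustrative sketch only (the function name and sample data are invented for the example), showing the idea with nothing beyond the standard library:

```python
# Illustrative sketch: a basic cross-tabulation of survey responses by a
# grouping variable, the first step past a topline frequency report.
from collections import Counter

def crosstab(rows):
    """Count responses broken out by group.
    rows: iterable of (group, response) pairs."""
    table = {}
    for group, response in rows:
        table.setdefault(group, Counter())[response] += 1
    return table

# Hypothetical mini-dataset: approval responses by urban/rural residence.
data = [("urban", "approve"), ("urban", "approve"),
        ("rural", "disapprove"), ("rural", "approve")]
table = crosstab(data)
# table["urban"]["approve"] is 2; table["rural"]["disapprove"] is 1.
```

A real analysis would of course go further – weighting, significance tests, driver models – but even this much separates a usable report from a bare list of frequencies.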

To sum up – in the developing world, the quality of survey research varies radically between organizations and countries.  Relying solely on cheap local providers can result in very costly errors and inaccurate survey results.  If polling in emerging markets is unknown territory for you, make sure you have an experienced guide! 
