Exit Polls 2004, what went wrong -- Part II

BY B S Chandrasekhar | IN Books | 23/05/2004
There were at least four areas where the pollsters appear to be deliberately transgressing the basic principles of research methods.


In Election 2004 the pollsters failed to see the change of mood in Gujarat; failed in a big way in Kerala, and in a small way in West Bengal. They completely failed in Uttar Pradesh, Bihar and Jharkhand. In many other smaller states too, the forecasts went awry.

What went wrong?  

The main input for election forecasting is the vote-share of each party, estimated through large-scale sample surveys. As a matter of fact, this is the only "scientific" part of forecasting. The earlier article highlighted the importance of three factors - the selection of a representative sample, the development of a good questionnaire, and the proper conduct of the fieldwork. That the exit polls were weak on all these counts has now been proved conclusively.


That election forecasting is possible only when the sample is adequate and representative is stating the obvious. NDTV claimed that its exit poll covered 213 constituencies with a total sample of 122,000. With 213 out of 518 constituencies and 550-600 respondents per constituency, the sample is adequate - perhaps more than adequate. But was it representative?
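A back-of-envelope check supports the "adequate" claim. Assuming simple random sampling within a constituency (an idealisation - real exit polls use clustered booth samples, which widen the error), the standard margin-of-error formula for a proportion gives:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion under simple random sampling.

    p=0.5 is the worst case; z=1.96 is the 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 550 respondents per constituency, as in the NDTV poll
moe = margin_of_error(550)
print(f"{moe * 100:.1f} percentage points")
```

That works out to about 4.2 percentage points per constituency - tight enough for most seats, but only if the sample is actually random.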

Market research agencies in our country seldom reach far-off places or the below-the-poverty-line sections of the population. That these agencies generally do not reach the vote banks of leaders like Laloo Prasad Yadav and Mayawathi is further confirmed by the election results of UP, Bihar and Jharkhand. At the same time, the success of the exit polls in Maharashtra and Karnataka pinpoints the areas of strength of these agencies.

In sampling for exit polls, the polling booth is the most important intermediate sampling unit. Randomised selection of such booths is essential for a representative sample. In this election 30-40 percent of polling booths were declared "sensitive" or "hyper-sensitive" - not just in Bihar and UP but even in states like West Bengal, Andhra and Haryana. Did the agencies risk sending their interviewers to such booths, or did they limit their activity to the safe ones? We do not know. Reaching these booths is risky, but omitting such a big portion of them means poor sampling.
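The point can be illustrated with a sketch (the booth list and the 35-percent "sensitive" share are hypothetical, chosen only to mirror the proportions the article cites):

```python
import random

def select_booths(booths, k, seed=None):
    """Simple random sample of k polling booths: every booth equally likely."""
    rng = random.Random(seed)
    return rng.sample(booths, k)

# Hypothetical constituency: 200 booths, 70 of them declared "sensitive".
booths = [{"id": i, "sensitive": i < 70} for i in range(200)]

# Proper randomised selection draws from the full frame...
full_sample = select_booths(booths, 20, seed=1)

# ...whereas sampling only "safe" booths silently drops 35% of the frame,
# and with it any voters concentrated around the excluded booths.
safe_only = select_booths([b for b in booths if not b["sensitive"]], 20, seed=1)
```

If the voters near sensitive booths lean differently from the rest, the "safe-only" design is biased no matter how large the sample grows.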


The questions asked of the respondents should elicit answers that are reliable (consistent) and valid (relevant to the context). In exit polls only a few questions are asked, but some of the poll discussions gave examples of questions that prima facie fail the reliability test. There were long and serious discussions on when voting decisions are taken; the framers of such questions could have tested whether they themselves would have been able to answer such a question honestly. In another programme it was shown that in Delhi Shiela Dixit was more popular than Vajpayee, but an equally relevant (or irrelevant) comparison of Shiela Dixit with Sonia Gandhi was not shown.

Field Work  

The quality of survey research depends mainly on how well the fieldwork is conducted. The sample size for each phase of the exit poll was over 25,000, and the interviews had to be completed in the course of 6-8 hours. This meant engaging over 1,000 interviewers and another 100-200 persons for supervision, data processing and transmission. A team of over 1,000 persons spread over 12-15 states, in 150-200 towns and villages - even thinking about managing such a large field force is mind-boggling. What quality could one expect from such an exercise?
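The arithmetic behind that workload is easy to sketch (the 7-hour figure is my assumption, taken as the midpoint of the 6-8 hour window the article mentions):

```python
# Back-of-envelope load per interviewer for one exit-poll phase.
interviews = 25_000
interviewers = 1_000
polling_hours = 7  # assumed midpoint of the 6-8 hour polling window

per_interviewer = interviews / interviewers        # 25 interviews each
per_hour = per_interviewer / polling_hours         # about 3.6 per hour
print(per_interviewer, round(per_hour, 1))
```

Twenty-five interviews per interviewer in a single polling day leaves roughly 15-20 minutes per interview including travel between booths - little slack for careful questioning or supervision.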

The interviewers for such surveys should, apart from possessing basic interviewing skills, have integrity and, more importantly, be politically neutral. Interviewing skills cannot be instilled through perfunctory training programmes, and cheating is very easy in such surveys. During election time even persons who are apolitical in normal times become politically active, and it is quite likely that the interviewers brought their political biases to the data they collected. This could be one reason for the pro-NDA bias in all the exit polls.

A larger sample is not necessarily a better sample. The exit polls conducted during the earlier elections did not have such huge samples but gave better results.  

Casual Approach 

Election forecasting is serious business, but it looks as if some of the pollsters do not think so. At least four areas where the pollsters appear to be deliberately transgressing the basic principles of research methods can be identified:

Forecasting on the basis of truncated samples;

Projecting the figures for the state when polling has been completed only in
