The Newsletter

Let's talk polls: We got a bad one!

December 1, 2023

So, there’s a new poll that we have to discuss, and it’s this one that was published in Publimetro yesterday.

Encuesta (poll).
Source: Publimetro

In context

Before digging into my thoughts on this poll, I just want to describe what it shows and place that in the context of what we know about polling in Mexico’s 2024 presidential election.

Here’s what the polls have told us so far…

The polls to date

For months, polls have been released covering public opinion around the presumptive candidates for president.

  • There have been variations in polls: Early on, it was not clear that Senator Xóchitl Gálvez would win the opposition coalition’s nomination process, so support for other potential candidates and generic candidates was polled. Similarly, there were questions over whether former Foreign Minister Marcelo Ebrard would be the Movimiento Ciudadano (MC)’s candidate or if that would fall to Samuel García or some other candidate.
  • However, the results have largely stayed steady. Across nearly all polls, the results have been relatively stable. Claudia Sheinbaum (or some alternative generic Morena candidate) led the race, an unsurprising outcome given strong public support for President Andrés Manuel López Obrador. The opposition coalition’s candidate came in second, which also was unsurprising given that the coalition is composed of what were once Mexico’s largest political parties. Finally, the MC candidate — whether García or Ebrard — polled third.  

A significant change?

Support for the MC has been relatively strong since presidential polling began. Still, it has not been anywhere near what was estimated in the poll displayed on the front page of Publimetro.

Noting that, one assumption is fair from the outset: This poll is an outlier.

In context, here are some other recent polling results by notable pollsters. Something just doesn’t seem right. Right? Right.

Recent polling in Mexico’s 2024 presidential election, with Sheinbaum leading Gálvez and García

Yes, the Publimetro poll is the latest poll (the Parametria poll was conducted over a long period in October and November). Regardless, shifts in public polling simply do not occur this quickly. There are exceptions, but they come from major political scandals or other sudden upheavals.

The only major recent political change in Mexico is that the path for Samuel García to be the MC’s presidential candidate is wide open. It is nearly certain that he will be the third major candidate seeking the presidency. That’s insufficient to change public sentiment and double one candidate’s support.

So, what’s going on?

Let’s take a quick step back. I’ve been thinking about Mexican politics for the better part of a decade at this point. Over that time, I’ve developed a good sense of the Mexican polling ecosystem — and likely what’s happening in this poll.

There’s a likely explanation for this poll’s results. However, I’ll build some suspense and leave that explanation for the end. Instead, let’s partake in a critical assessment of this poll one step at a time.

First of all, it could just be an outlier

Polling is tricky because statistics is tricky. Any poll, no matter its size, has a margin of error.

Margins of error provide a range of likely “true” values for an entire population. Polls cannot assign a precise “true” value for the entire population because, ultimately, they are estimates. In the electoral context, pollsters randomly sample voters and ask who they plan to support in an election.

Warning: This is a *long* sidebar explaining why polls sometimes produce outliers. You can definitely skip this bit if you want. Just jump down to the next horizontal divider.

To explain further, let’s simplify things and change the context a bit. Pretend we are in a concert hall containing 1,000 people. If the band playing the gig wanted to play the song that the most people in that audience of 1,000 want to hear, they could run a poll at the door.

The band would have several options. They could ask everyone, but that would be really labor-intensive and impractical. To save time and manpower, they could ask a smaller random sample of the audience and make some inferences from there. So, that’s what they decide to do: poll the audience at the door.

Our band could poll 10 people, 50 people, 100 people, and so on. The more people they poll, the closer they will likely come to approximating the preferences of the entire audience; the smaller the sample, the larger the margin of error. One way to think about this (to change metaphors briefly) is flipping a coin. Each time we flip a fair coin, there’s a 50 percent chance it lands on one side (heads) and the same chance it lands on the other (tails). In practice, however, if we flip a coin 10 times, we might get heads 6 times and tails 4 times, or some other uneven result. Does that mean the coin is unfair? No, it means that randomness can skew results away from the expected 50/50 distribution in a small number of trials.
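The coin-flip intuition is easy to check with a quick simulation. Here’s a minimal Python sketch (purely illustrative, nothing to do with any real poll): flip a fair coin in batches of different sizes and see how far the share of heads drifts from the expected 50 percent.

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

def heads_share(num_flips: int) -> float:
    """Flip a fair coin num_flips times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(num_flips))
    return heads / num_flips

for n in (10, 100, 1000, 10000):
    share = heads_share(n)
    print(f"{n:>6} flips: {share:.1%} heads (off by {abs(share - 0.5):.1%})")
```

Small batches routinely land well away from 50/50; large batches rarely do. That is the whole intuition behind sampling error.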

Similarly, in our concert hall, if the band polls only a small group, they might not get an accurate representation of the audience's preferences. This is analogous to the coin toss — a small sample might not accurately reflect the true distribution of preferences.

Just as with more coin flips, with more audience members polled, the band's understanding of the audience's preferred song will likely become more accurate. Thus, the margin of error in a poll is like the unpredictability in a small number of coin tosses: Both decrease with larger sample sizes.
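To make the concert-hall analogy concrete, here is a hypothetical simulation in Python (the 1,000-person audience and the 40 percent “true” preference for one song are invented for illustration): poll samples of different sizes over and over, and track the worst miss relative to the true preference.

```python
import random

random.seed(7)  # fixed seed so the demo is reproducible

# Hypothetical audience of 1,000: 40% prefer "Song A", the rest prefer something else.
audience = ["Song A"] * 400 + ["Other"] * 600

def poll(sample_size: int) -> float:
    """Randomly sample the audience and return the estimated 'Song A' share."""
    sample = random.sample(audience, sample_size)
    return sample.count("Song A") / sample_size

def worst_miss(sample_size: int, trials: int = 2000) -> float:
    """Largest deviation from the true 40% share across repeated polls."""
    return max(abs(poll(sample_size) - 0.4) for _ in range(trials))

for n in (10, 50, 200):
    print(f"sample of {n:>3}: worst miss across 2,000 polls = {worst_miss(n):.1%}")
```

The larger the sample, the tighter the estimates cluster around the true 40 percent, which is exactly why bigger polls carry smaller margins of error.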

Ultimately, any poll of a sample can prove to be an outlier, much like any 10, 100, or 1,000 flips of a coin can produce outcomes that are not exactly 50/50.  

A poll’s margin of error indicates the upper and lower bounds within which the true population value is expected to fall (typically) 95 percent of the time. Through statistics, pollsters approximate the positions of the broader population based on a smaller sample. However, just as with coin flips, there is always a possibility that the sample a pollster draws, even a random one, does not correctly reflect the broader population. The smaller the sample, the harder it is to infer things about that population. In our concert example, if only 10 concertgoers are polled, there’s a chance they could all name the same song, much as a coin could, improbably, land on heads 10 times in a row, and, far more improbably, 100 times in a row.

Just as with coin flips, there remains a possibility that we do not poll a representative sample of voters, even if the selection is random: every sample, no matter how randomly chosen, might fail to reflect the entire population due to chance. When that happens, the polling results are skewed, because the sample we inferred our estimates from was not truly representative of the broader population of voters.

The margins of error in polling results account for the natural variability expected in a sample due to the randomness of selection. They indicate the range within which the true value for the whole population is likely to fall, given the results from the sample.

In most cases, the margin of error is associated with a 95% confidence interval. Roughly speaking, this means that if the same poll were conducted 100 times, in about 95 of those polls the reported range would contain the true value.

The margin of error gives a range (for example, plus or minus 3 percentage points), and we can say with about 95% confidence that the true figure lies within it. The remaining 5% of the time, the true population value falls outside the estimated range, meaning the poll was an outlier.
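For reference, the familiar “plus or minus 3 points” figure comes from the standard margin-of-error formula for a simple random sample. A quick Python sketch (the sample sizes below are illustrative, not from the Publimetro poll):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of size n.

    p = 0.5 is the conservative worst case, since it maximizes p * (1 - p).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 1067):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
```

Note how roughly 1,067 respondents is what it takes to reach the textbook plus-or-minus 3 points, and how quadrupling the sample only halves the margin.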

Ultimately, any poll can be an outlier. There’s always a chance. And that’s worth keeping in mind when we discuss weird polls (like the one published in Publimetro that spurred this post).

I mentioned polling errors to cover my bases. However, looking at the Publimetro poll, there’s reason to believe that the surge of support it shows for Samuel García was not simply random sampling error. The odds of that are very low. They are even lower when you consider the next two points in this post.

Poll quality

The second factor to consider is the quality of the pollster. In this case, the poll in Publimetro was conducted by Territorial, a pollster that I had never heard of until I saw this poll.

[We’ve already discussed pollster quality in Mexico on Next Sexenio. One of the first posts covered initial polls and pollster quality.]

Listen, I don’t know everything. But I know which pollsters have good reputations in Mexico.

So, when I saw this poll, I had some questions, and that took me down a bit of a rabbit hole, which ultimately firmed up my belief that the poll was not particularly credible.

First, Publimetro does not run its own polls, so this poll had to come from somewhere. Most reputable pollsters in Mexico publish their polls independently or have longstanding relationships with major news publications; Publimetro does not fit into either group. Digging further, I saw that the poll was conducted by a firm called Territorial. That was the first red flag: I had never heard of this pollster before.

I did some digging and found that the firm only started polling in 2021. That’s not great for judging its reliability, since there is very little track record with which to assess its past performance.

However, the company appears to have been around a bit longer (Territorial is the pollster’s commercial name, and the legal name has existed for longer). This put me down a much deeper and more interesting rabbit hole, which I won’t detail here but which only led me to further dismiss the poll. Ultimately, the company that actually conducted the poll is a software and IT services firm. Hm. Interesting. Not necessarily bad. It’s just interesting.

Source: National Electoral Institute

The second red flag was the methodology. The poll page says it uses a proprietary technology for polling but provides no real insight into how that works. It reports a number of respondents, but there is no indication of whether they were reached by phone, online, in person, and so on. Not great. The approach to polling matters, and it matters quite a bit in Mexico. (I discussed that a bit in my initial post on polling in Mexico.) So, I remain skeptical.

Being a bad pollster by accident isn’t damning; polling is hard work, even for those who are good at it. However, I’m not even sure that explains what is happening here.

Intentionally bad polling?

Regularly during Mexican elections, polls emerge that seem way out of step with others. This one appears to be much like those.

Periodically, pollsters that no one has heard of publish survey results with what is effectively the aim of shaping public opinion. That’s what appears to be happening here.

Why? Showing a surge of support for a candidate could shift public opinion towards the preferred candidate by misleading voters into believing that a poorer-performing candidate actually stands a chance of winning the race.

Moreover, Publimetro would not make it onto my ranking of top news sources in Mexico. Often, the newspaper, which is free, appears to serve paid promotional content or seeks to grab attention with misleading content. Another point that I didn’t notice but was made by a mutual follower on X, Rodrigo Castro Cornejo (who happens to be a political science professor at UMass), is that the political party logo placed on Xóchitl Gálvez’s polling numbers on the front page was that of the PRI. Gálvez, while part of a coalition including the PRI, is affiliated with the PAN. It would be the equivalent of marking Claudia Sheinbaum with the PVEM party’s logo rather than Morena’s.  

The whole thing is off. My assessment is that we have a case of intentionally bad polling. It’s what the evidence points to.

Final points

This was quite a long post covering a single poll, only to conclude at the end that it was almost certainly bad polling. While going through this poll in detail took a lot of time (for you to read and for me to write), my hope is that it shows why following polls ahead of Mexico’s election requires attention to detail and care.

Fortunately for you, dear reader, that’s what Next Sexenio will continue to bring in the coming months leading up to the election. Buckle up, make sure to subscribe, and share this newsletter with a friend.

Until next time.

Thanks for reading Next Sexenio. If you enjoyed it, please ensure you are subscribed, and consider liking or commenting on this post to make it more visible!

If you really like this newsletter, we’d appreciate it if you were to share it with some friends or colleagues who also might be interested. It’s easy. Just tap the button below!