Reading time: 6 minutes
How do you develop a data-driven mindset when viewing the results of informal, pre-election emoji polls in the Philippines? While the clear theme of this blog post is the May 2022 elections, we believe you can apply these lessons, especially the critical thinking skill, in your daily life.
Taking a pause is powerful
In this digital age, you’re more likely to be exposed to pre-election informal interest checks or emoji poll surveys via the internet.
Results from these surveys often reach our feeds faster than we’re able to process them. So it’s crucial to be aware of where the data comes from, how it’s validated, and how it’s presented.
Let’s do a quick exercise!
Have you participated in emoji poll surveys before? You know the ones: each emoji stands for a specific candidate. These are common on platforms like Facebook, where a range of emoji reacts sits at the bottom of each post: love, like, care, sad, and so on. Twitter has its own form of “reacts,” where you retweet if you prefer one candidate and like the tweet if you prefer the other. Either way, social media was filled with posts like these in the months leading up to the May 2022 elections.
But when you participate in these informal emoji surveys, what is your thought process before selecting your emoji? Do you react based on the candidate of your choice? Do you react based on what the majority are reacting with? While these are interesting questions, there are actually more crucial ones we must ask before we even engage with these surveys and other informal ones like them.
In the next 30 seconds, list all the questions a person with a data-driven mindset would ask about these informal surveys and their results, which usually come in bar graph or pie chart format.
Interrogate them, interview them in your mind.
Answer key:
Who exactly is reporting the data?
- Is the source or organization official or credible?
- Has the source shown support or bias towards specific candidates in the past?
- What incentive might the source gain from the results they presented?
How was the data collected? Consider the format, location, and quantity.
- How was the question phrased?
- Where are the participants located or distributed?
- Do we have enough participants for reliable results?
Aside from the demographics of the sample, it would also help for the source or organization to provide the statistical methods used and regional sub-aggregates. This would give us an idea of how much rigor and care went into producing the survey.
Additional questions to think about:
- Are we certain that all Facebook users who participated only used one account?
- How do we account for the possibility of fake Facebook accounts?
Note that it’s impossible to answer these last two questions. Therefore, the results of Facebook emoji surveys can’t be treated as truthful or accurate data, and you shouldn’t let them influence your decision-making.
The method matters.
This impacts how you should interpret the results. So ask the right questions.
Meet margin of error and confidence level
Let’s level up and learn about these two statistics concepts related to this topic!
If we wanted to conduct a national survey, it would be impossible to survey all 110.8 million Filipinos (called the population).
Instead, we survey only a group of people (called a sample) that we think best represents the entire population of 110.8 million Filipinos.
The composition of that group of people plays a big role in the results of the survey.
Since we are only conducting the survey on a sample, we acknowledge that the results we get could vary from the true value for the population. It’s important that the source or organization acknowledges that the results they share may not be the actual/true value. Doing so is an indicator of the organization’s maturity. It also means they know their statistics, an important component of interpreting survey results.
Value used by official opinion poll organizations: a ±3% margin of error with a 95% confidence level.
Margin of error: how much the value you present could differ from the true population value, given that you surveyed only a sample and not the entire population.
Confidence level: how confident you are that the right/true/actual value falls within the range you presented.
For example, a poll result shows that 45% will vote for candidate A, with a ±3% margin of error and a 95% confidence level. This means there is a high likelihood that the candidate’s true support lies anywhere between 42% and 48%; that is considered the reasonable range.
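To make these two concepts concrete, here is a minimal sketch in Python using the standard normal-approximation formula for a proportion. The numbers (45% support, 1,067 respondents) are illustrative assumptions, not figures from an actual poll; z = 1.96 is the standard multiplier for a 95% confidence level.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a sample proportion p
    measured from n respondents. z = 1.96 gives a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample_size(moe, z=1.96, p=0.5):
    """Smallest n achieving the given margin of error.
    p = 0.5 is the worst case (widest possible interval)."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

# Illustrative numbers: 45% support from 1,067 respondents.
p, n = 0.45, 1067
moe = margin_of_error(p, n)
print(f"margin of error: ±{moe:.1%}")                     # roughly ±3.0%
print(f"reasonable range: {p - moe:.0%} to {p + moe:.0%}")  # about 42% to 48%

# How many respondents are needed for ±3% at 95% confidence?
print(required_sample_size(0.03))
```

Notice that reaching a ±3% margin of error at 95% confidence takes only around a thousand respondents, no matter how large the population is; what matters far more is whether that sample actually represents the population.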
You’ve reached the end of this quick read! Here are the skills we’ve highlighted today: critical thinking and digital mindfulness.
What are the key takeaways from this blog post?
- Taking a pause is powerful.
- The method matters.
- It helps to understand statistics concepts to deepen our critical thinking skills.