Applying epistemology to social media, deepfakes, and the spread of information.
If you only ever heard people agree with you, how would you know if you were wrong? In the digital world, the 'truth' you see is often custom-built just for you.
In epistemology, the study of knowledge, an echo chamber is an environment where a person only encounters information or opinions that reflect and reinforce their own. Imagine standing in a room where every wall is a mirror: you don't see the world, you only see yourself. On social media, this happens when we follow only people who think like us. Over time, our ability to think critically weakens because it is never challenged. We begin to believe that everyone thinks as we do, making outside ideas seem not just wrong, but 'crazy' or 'evil.' This is a form of confirmation bias: favoring information that confirms what we already believe and ignoring evidence that contradicts it.
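To make the mirror-room metaphor concrete, here is a minimal Python sketch of a feed that only shows posts agreeing with the reader. The `Post` model, stance scores, and threshold are hypothetical illustrations of the filtering mechanism, not any real platform's code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: float  # -1.0 strongly disagrees with the user .. +1.0 strongly agrees

def echo_chamber_feed(posts, agreement_threshold=0.5):
    # Only posts that already agree with the user survive the filter,
    # so the feed becomes a mirror: the reader never sees dissent.
    return [p for p in posts if p.stance >= agreement_threshold]

posts = [
    Post("You are completely right!", stance=0.9),
    Post("Here is more support for your view.", stance=0.6),
    Post("Actually, the evidence points the other way.", stance=-0.7),
]

for post in echo_chamber_feed(posts):
    print(post.text)  # prints only the two agreeing posts
```

The dissenting post never reaches the reader, which is exactly why the feed feels like universal agreement.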
Quick Check
What is the main reason an echo chamber makes it difficult to find the truth?
Answer
It prevents us from seeing opposing viewpoints, leading us to believe our limited perspective is the whole truth.
1. You watch one video about a 'secret moon base.'
2. The algorithm notes your interest ($Visibility = Engagement \times Emotional\ Response$).
3. Next, it suggests three more conspiracy videos because they have high engagement scores.
4. Within an hour, your entire feed is about space conspiracies, making it feel like 'everyone' is talking about it (the loop is sketched in code below).
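Those four steps can be written as a tiny ranking loop. Everything below is a hypothetical illustration of the $Visibility = Engagement \times Emotional\ Response$ idea from step 2 (invented titles, scores, and a simple multiply-and-sort rule), not a real recommendation system:

```python
# Hypothetical video data: titles, engagement, and emotion scores are invented.
videos = [
    {"title": "Secret moon base EXPOSED", "engagement": 0.9, "emotion": 0.95},
    {"title": "How lunar orbits work",    "engagement": 0.7, "emotion": 0.20},
    {"title": "Moon landing 'cover-up'",  "engagement": 0.8, "emotion": 0.90},
]

def visibility(video):
    # Emotional response multiplies visibility, so outrage-bait can
    # outrank calm, accurate content even at similar engagement.
    return video["engagement"] * video["emotion"]

# Rank the feed by visibility, highest first.
for video in sorted(videos, key=visibility, reverse=True):
    print(f"{visibility(video):.2f}  {video['title']}")
```

The two conspiracy videos rank first and the sober explainer ranks last, which is why the feed fills up with conspiracies within the hour.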
Quick Check
According to the engagement formula, why might a lie spread faster than a fact?
Answer
Lies are often designed to trigger a stronger 'Emotional Response,' which increases their 'Visibility' in the algorithm.
Most people read vertically: they stay on one page, judging its trustworthiness by the 'About Us' section or the professional design. Professional fact-checkers read laterally. This means they leave the original site immediately and open multiple new tabs to see what other reliable sources say about that site. If a website claims a 'new cure' was found, a lateral reader doesn't read the site's testimonials; they search for the claim on Google, Wikipedia, or established news sites. They ask: 'Who is behind this information?' and 'What is their motive?'
1. You see a video of a politician saying something shocking.
2. Instead of re-watching the video (vertical), you open three new tabs (lateral).
3. Tab 1: Search the quote. Tab 2: Check a major news outlet. Tab 3: Search 'Video [Politician Name] [Date] Fact Check.'
4. You discover the video was a deepfake (an AI-generated fake) because no other source reported the event; this corroboration rule is sketched below.
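Step 4 applies a simple corroboration rule: trust a claim only when independent outlets also report it. A minimal sketch, assuming a hypothetical `corroborated` helper and a two-source threshold (echoing the Practice Activity at the end of this lesson):

```python
def corroborated(claim, reports, minimum_sources=2):
    # Count how many distinct outlets carry the same claim; trust it
    # only if at least `minimum_sources` independent outlets agree.
    outlets = {outlet for outlet, reported in reports if reported == claim}
    return len(outlets) >= minimum_sources

reports = [
    ("viral-video-account", "politician said the shocking thing"),
    # ...no established outlet reported the event.
]

print(corroborated("politician said the shocking thing", reports))  # False
```

Only one outlet (the viral account itself) carries the claim, so the lateral reader withholds belief.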
You find an article by the 'Institute for Healthy Living' claiming sugar is good for kids.
1. You search the name of the Institute.
2. You find a financial report showing the Institute is funded by a major soda company.
3. You apply the logic: $Reliability = \frac{Evidence}{Bias}$.
4. Since the bias is high and the evidence comes from the interested party itself, you reject the source as unreliable (a worked sketch follows below).
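As a toy worked example of that logic, here is a sketch with invented numeric scores; in practice you weigh evidence and bias qualitatively rather than with exact numbers:

```python
def reliability(evidence, bias):
    # Strong evidence raises the score; strong bias divides it down.
    return evidence / bias

# 'Institute for Healthy Living': thin evidence, heavy bias
# (the research is funded by a soda company).
institute = reliability(evidence=1.0, bias=10.0)          # 0.1

# A hypothetical independent, peer-reviewed study for comparison.
independent_study = reliability(evidence=8.0, bias=2.0)   # 4.0

print(f"Institute: {institute}, Independent study: {independent_study}")
```

The funded institute scores forty times lower than the independent study, matching the conclusion in step 4.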
What is the primary goal of a social media algorithm?
If you are 'reading laterally,' what are you doing?
True or False: A 'filter bubble' is something that users choose to join because they want to avoid arguments.
Review Tomorrow
In 24 hours, try to explain the difference between 'vertical reading' and 'lateral reading' to a friend or family member.
Practice Activity
The next time you see a 'breaking news' post on social media, open two new tabs and find two different news organizations reporting the same story before you share it.