The rise of instant information has changed the way we consume knowledge, and not always for the better. When research required library trips and cross-referencing encyclopedias, every answer was earned, deliberate, and meaningful. Today, information flows freely, but much of the effort required to engage with it has vanished. The consequence is the erosion of media literacy and critical thinking, leaving us increasingly ill-equipped to sift through the overwhelming flood of information we encounter daily.

New tools aim to make finding information even more passive, diminishing the need for any critical thought in the process. Apple’s recent introduction of natural language search in the App Store is a prime example, promising to make searching as intuitive as talking with a friend. Ask for “fitness apps to help me stay in shape,” and results instantly appear. It sounds seamless, even magical, but this convenience is a double-edged sword. By relying on casual, conversational phrases instead of deliberate keywords, users hand over control to the algorithm. Suddenly, it must interpret their intent, filling in the blanks: What type of fitness apps are needed? Should they focus on yoga, weightlifting, or macro tracking? Should they be free, or are paid apps acceptable?

In filling those blanks, algorithms inevitably introduce biases. Advertisers, for example, may have paid for top placements, regardless of relevance. The algorithms themselves reflect the values and assumptions of their creators, unintentionally (or intentionally) prioritizing certain results while excluding others. This curates an output that may seem neutral but is shaped by unseen influences.

The convenience of natural language search is undeniable, but it comes at a cost. How do these tools decide what’s relevant? Whose biases shape the results? These questions often go unasked, leaving us passively trusting outputs that deserve scrutiny. And when these tools fail, delivering irrelevant suggestions or outright misinformation, the cracks in the system become glaring. But the bigger problem is what happens when people stop noticing the cracks at all.

The Role of Media in Feeding Complacency

Media outlets are only too eager to exploit our dwindling critical thinking skills. Sensationalized headlines and oversimplified narratives dominate, feeding a cycle of clicks and shares while depriving readers of nuance. Context gets stripped away, false gossip spreads, and statistical claims go unexamined. This isn’t a new phenomenon, but in an age when every answer is thought to be a Google search away, it’s worse than ever.

Even scientific reporting, once considered a hallmark of rigor, is plagued by misinterpretation and overhyped findings. The replication crisis, a widespread inability to reproduce published results, has eroded trust in academic research. Misuse of statistical tools like p-values contributes to this problem, but the media’s tendency to oversell studies with low statistical power only compounds it.

Even experts are not immune. Consider Jeff Hancock, a Stanford professor specializing in deception and technology. His reliance on AI in a professional setting backfired when “hallucinations,” false but convincing information generated by AI, slipped through. Hallucinations occur when a language model like ChatGPT produces content that appears factual but is entirely fabricated or distorted. In Hancock’s case, his testimony included references and claims that were plausible on the surface but were not grounded in real data or sources, highlighting the fundamental risks of depending on AI without verification.

Hallucinations stem from the way large language models (LLMs) are designed. These systems don’t “know” facts the way humans do; put simply, they operate on probabilities. Trained on vast datasets of text, LLMs predict the next word in a sequence by analyzing patterns, context, and statistical likelihood. If you’ve ever typed a message and had your phone autocomplete the next word, you’ve seen a simplified version of this at work.

This process allows LLMs to generate coherent, human-like language, but it also makes them prone to errors. If the data they rely on is incomplete, inconsistent, or biased, the model will “fill in the gaps” with plausible-sounding fabrications.
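To make the autocomplete comparison concrete, here is a deliberately tiny Python sketch. It is not how a real LLM works internally (real models use neural networks trained on billions of words), but it shows the same underlying move: pick the continuation that is statistically likely given what came before, with no notion of whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# A tiny corpus stands in for the billions of words a real model is trained on.
corpus = (
    "the study found a strong effect . "
    "the study found no effect . "
    "the study found a strong correlation ."
).split()

# Count which word tends to follow each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Rank candidate next words by how often they followed `word` in training."""
    counts = following[word]
    total = sum(counts.values())
    return [(w, round(n / total, 2)) for w, n in counts.most_common()]

print(predict_next("found"))   # [('a', 0.67), ('no', 0.33)]
print(predict_next("strong"))  # [('effect', 0.5), ('correlation', 0.5)]
# The model continues with whatever is statistically likely; it has no idea
# whether the claim it produces is true.
```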

Hancock’s failure to fact-check the AI output is emblematic of a larger issue: a widespread lack of understanding of how AI tools work. Many users, including professionals, treat these tools as infallible because their outputs appear polished and authoritative. This reveals a critical gap in education about AI. To use these tools responsibly, we need to understand their mechanics: how they generate results, where they excel, and where they fall short.

There isn’t a guaranteed method to spot AI “hallucinations” if you don’t already have a solid grasp of the subject. These made-up facts often feel logical and well-written, which is what makes them tricky to detect and especially dangerous. That said, there are some ways to help safeguard yourself from being misled:

Leverage Sources You Trust:

Use AI models that allow you to upload your own documents and sources as references for their output. For example, creating a custom GPT trained on specific books or documents by authors you trust is a helpful way to get more reliable answers. Tools like Google’s Notebook LM are also helpful for research since they cite sources directly in their AI output, enabling you to verify information with a click. But be very cautious with answers not tied to a verified source, like Google’s AI-generated snippets at the top of the search page. They’re often polished but can be hilariously wrong or misleading. (For a laugh, look at this compilation of some of the worst Google AI responses.) Always validate answers against primary sources. AI can be a good tool for navigating primary sources, but it is not a primary source itself.
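As an illustration of the idea, here is a minimal Python sketch of grounding a question in documents you trust. The passages are invented examples, the retrieval step is deliberately crude (word overlap), and `ask_model` is a hypothetical placeholder for whichever chat API or custom GPT you actually use; real tools like Notebook LM do this far more capably.

```python
# Minimal sketch: retrieve a trusted passage, then force the model to answer
# from it and cite it by name. All names here are illustrative assumptions.

TRUSTED_PASSAGES = {
    "stats_notes_ch1.txt": "A p-value is the probability of observing data at "
                           "least as extreme as yours, assuming the null "
                           "hypothesis is true.",
    "stats_notes_ch2.txt": "Statistical power depends on sample size, effect "
                           "size, and measurement noise.",
}

def retrieve(question, passages, top_k=1):
    """Rank passages by crude word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, passages):
    """Build a prompt that asks the model to answer only from named sources."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question, passages))
    return (
        "Answer using ONLY the sources below, and cite them by name. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("What does a p-value actually mean?", TRUSTED_PASSAGES)
print(prompt)
# answer = ask_model(prompt)  # hypothetical call; verify the citation yourself
```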

Fact-Check with Other AI Models:

If you’re unsure about an AI-generated response, ask the same question of a different AI model. The likelihood of two models generating the same hallucinated response is incredibly low.

Each AI model is trained on different datasets and uses distinct parameters and techniques to generate responses. For example, OpenAI’s ChatGPT may rely heavily on one set of training data, while Google’s Bard or Anthropic’s Claude might emphasize another. Even when their datasets overlap, their architecture and algorithms interpret and synthesize information in unique ways. This divergence makes it nearly impossible for two models to fabricate the exact same falsehood unless both are pulling from the same erroneous source.

You can think of it as comparing two chefs making soup from different recipes. One might overdo the salt, while the other might leave out the carrots, but the odds of them making the same mistake in the exact same way are slim. When two models independently corroborate an answer, it’s a good sign the information is more likely reliable. If their answers differ significantly, that’s a cue to dig deeper and verify with primary or external sources.
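For those inclined to automate the habit, here is a rough Python sketch of the cross-checking idea. The two `ask_model_*` functions are hypothetical stand-ins for calls to two different providers (here they return canned strings so the sketch runs), and the word-overlap comparison is intentionally blunt; a disagreement is a prompt to investigate, not a verdict.

```python
import re

# Hypothetical stand-ins for two different AI providers.
def ask_model_a(question: str) -> str:
    return "The replication crisis refers to the failure to reproduce many published findings."

def ask_model_b(question: str) -> str:
    return "Many published findings could not be reproduced, which is known as the replication crisis."

def words(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def agreement(a: str, b: str) -> float:
    """Jaccard similarity of the two answers' word sets -- a blunt agreement check."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

question = "What is the replication crisis?"
answer_a, answer_b = ask_model_a(question), ask_model_b(question)

score = agreement(answer_a, answer_b)
if score < 0.3:
    print(f"Low overlap ({score:.2f}): the models diverge -- check primary sources.")
else:
    print(f"Overlap {score:.2f}: broad agreement -- still spot-check key claims.")
```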

Learn to Spot Statistical Tricks:

Even accurate data can be misleading when framed poorly. AI (and humans) might present statistics out of context or cherry-pick data to make a point. A basic understanding of statistics, enough to question sample size, correlation vs. causation, or what a p-value actually signifies, is vital.
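A quick simulation makes the sample-size point concrete. The Python sketch below is a toy illustration, not taken from any particular study: it draws pairs of completely unrelated random numbers and counts how often a sample of just five points produces a correlation that looks impressive.

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(0)
trials, n, strong = 10_000, 5, 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    ys = [random.random() for _ in range(n)]  # unrelated to xs by construction
    if abs(pearson(xs, ys)) > 0.8:
        strong += 1

print(f"|r| > 0.8 in {strong / trials:.1%} of noise-only samples of size {n}")
# With n = 5, striking-looking correlations show up in roughly a tenth of
# purely random samples -- a reason to ask about sample size before believing
# a headline.
```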

Triangulate Information:

Always compare the AI’s response to other reputable sources. If independent sources corroborate the information, you’re likely on safer ground.

Convenience alone cannot justify turning off our critical thinking. Tools like ChatGPT or natural language search simplify tasks, but we must remain vigilant about the costs of that simplicity. Blind trust in these systems risks replacing intellectual rigor with algorithmic dependence. Understanding the strengths and limitations of these tools is essential. It empowers users to question their outputs, identify biases, and use them as a supplement to human reasoning rather than a substitute for it.

At its core, this issue is about preserving our ability to think critically in a society dominated by instant information. Media literacy, the ability to analyze, question, and contextualize information, is vital for navigating the world. Without it, we risk becoming easy targets for manipulation by biased algorithms, corporate agendas, or misleading narratives.

The stakes are high. Public health misinformation, distorted political discourse, and unchecked corporate influence all thrive when critical thinking falters. Tools like natural language search have their place, but they should complement human judgment, not replace it.

The path forward lies in education: teaching users not only how to engage with AI but also how to think critically about its outputs. The future demands this commitment to intellectual vigilance, and it starts with how we educate ourselves today.

