Evaluating Information Sources


Read this short introduction on reading laterally as a way to evaluate information, and watch the short video.

  • Newell, C. (n.d.). Jessup Playbooks: How do I read laterally?: Home. Retrieved August 3, 2022, from PVCC.edu
  • Stanford History Education Group (Director). (2020, January 16). Sort Fact from Fiction Online with Lateral Reading, from YouTube


When we're doing research or trying to learn something, we obviously want to gather content that is true. The key question, then, is how do we know whether content is true or false, or more or less true? Truth and veracity have nuances that are important to identify or disentangle within the stories that are told about how things are in the world. For example, it's become very common for people in the public sphere to refer to narratives, and to argue that someone's or some group's narratives are either true or false, or, more accurately, fiction or non-fiction. Regardless of our political or other positions, this focus on narrative is interesting and worthwhile. It's interesting because placing information within a narrative has ancient origins, and it's worthwhile because it places true and false content within some context, like a story, which can itself be evaluated as fairly true or not (or fiction or non-). Essentially, this is important because stories are central to human communication and understanding (Fisher, 1989).

Before we explore this, I want to acknowledge that you may have been taught about various existing frameworks, like the CRAAP test, to help determine the veracity of content. Many of these frameworks pose questions and ask us to check off boxes about the content itself when evaluating information or information sources. For example, who is the author of the content? Where is the content published, and on what platform? What is the author's or publisher's motivation? Is the motivation purely or mostly financial? How does that introduce bias? What is the date of the publication? Is it outdated? And so forth.

Those kinds of questions are important, but they may also be insufficient for determining some content's veracity. What's also important to identify, especially for content that is new or new to us, are questions that place the content in broader context, that fit it within an overall story that's taking place. For example, where does some content that we're evaluating fit within a story someone or some group is trying to tell? How does one group's (or author's, publisher's, etc.) story conflict with another group's? Alternatively, what is the consensus among the different stories that groups tell about a thing? How do these stories compete for public acceptance?

When we try to identify the story, then new questions and frameworks open up to us. For example, it could be the case that the basic facts about an event are agreed upon by various storytellers (e.g., news articles, politicians, scientists), but that those facts are presented in ways that shape how the story of those facts is told. Then, in those different tellings, the stories may consequently ring true or not. Worse, if the overall story rings false, it may cast doubt on the basic facts, even if those basic facts are true.

Fortunately, we have ways to think about stories, and Walter Fisher (1989) identified two methods for evaluating narratives that he called narrative probability and narrative fidelity. The latter concept describes how well something rings true. For example, given what you know about the kinds of engineering feats that people have achieved, does it ring true that we have landed on the moon?

There's more to fidelity than that, but let's discuss narrative probability in more detail:

Narrative Probability

For a story to be probable (probably true, that is), it must satisfy three criteria:

  • argumentative or structural coherence
  • material coherence
  • characterological coherence

Argumentative or Structural Coherence

Argumentative or structural coherence speaks to the validity of an underlying argument in a story. That is, good stories, whether true or not, present a series of premises that build off each other and that support an overall thesis or argument. Or, stories have a structure that makes sense and, upon close inspection, contains few holes. You all know whether a story has argumentative or structural coherence because you have surely watched bad movies that failed to convince you they made sense. Sometimes this lack of structural coherence is what makes a movie fun ("it's so bad it's great!"). But when telling stories about how things are in the world, or how they should be, or how they will be, it's important for the story to make sense and to be logically valid.

Material Coherence

If a story is internally consistent, what next? As is often the case, groups (e.g., ideological ones) may compete with one another over what should be the dominant narrative. In such cases, we can compare and contrast their narratives, and doing so tests their material coherence. This is the idea that, in comparing and contrasting stories, we note whether "important facts may be omitted, counterarguments ignored, and relevant issues overlooked" (Fisher, 1989, p. 47). Such comparisons may happen from a bird's eye view; for example, when Democrats or Republicans are telling the story of the United States, can we see what important facts each group omits, which counterarguments each group ignores, and which relevant issues each group overlooks?

These comparisons may also happen at the micro-level. In the social and physical sciences, scholars and researchers are engaged in a series of discussions with each other about all sorts of theories about the social or physical worlds.

A quick note on theories. Theories are simply very rigorous explanations given the data. These explanations generally provide an account of causality, or of how one thing causes another. As social and physical scientists gather data, they develop theories (like stories) that explain the data. As new data is analyzed, those theories are tested. If the theories no longer explain the additional data, they are revised or discarded and replaced with new theories. Some theories (like the theory of general relativity) are quite stable (i.e., well tested), even if they do not explain everything within their purview, like gravity at the quantum level. Other theories still walk a tightrope (like string theory), most likely because more data is needed to test them, though enough data has been analyzed for them to hold for the time being. In the social and physical sciences, it's more common for theories to explain limited phenomena. These are called middle-range theories. The diffusion of innovations theory, for example, is a middle-range theory that was originally devised to explain how new ideas and technologies spread.

You can see these discussions among scientists and researchers take place in the literature review and discussion sections of journal articles. In these sections, researchers cite and refer to others who have completed research on a similar or the same topic. The overall goal of these discussions is to test or develop theories that explain some phenomenon. Essentially, researchers in the sciences are seeking to provide a story, based on a rigorous analysis of the data they have, that explains some phenomenon, and in the process of doing so, they compare and contrast their explanations with others. As Fisher might say, they seek important facts that may have been omitted, counterarguments that may have been ignored, and relevant issues that may have been overlooked.

Practically, we can use tools to help immerse ourselves in these discussions. Tools like Zotero and other reference managers aid us in collecting sources, taking notes on those sources, and citing those sources in papers that participate in these ongoing discussions and contribute to this collective storytelling. In the process of writing about the phenomenon under review, we attempt to provide a story based on the back-and-forth discussions that have taken place on the topic. These reference managers, then, help us test material coherence.

Characterological Coherence

Can you think of a story that doesn't have a character at all? I can't. Even places and things can be characters in stories. The DeLorean in the Back to the Future movies, for example, is somewhat of a character in those movies. But when people are characters in movies, plays, etc., they often behave characteristically. That is, they behave according to their values, beliefs, attitudes, ideas, and words. They behave according to who they are.

We don't often see movies or plays or hear stories where people behave uncharacteristically, because such stories are generally not good. Or, if people do behave uncharacteristically, it's usually part of the plot, and the uncharacteristic behavior was foreshadowed somewhere earlier in the story. Often, when we see something coming in a movie before it happens, it's because the characters are playing to their character. If we don't see something coming, it's likely because the character is simply complex, and the story supports that complexity.

In any case, stories have characters, whether people, places, or things. And when we investigate stories (or theories), it's worthwhile to consider whether the characters are coherent in this way, too. You can put this to the test. When thinking about big issues taking place in the world, think about who is involved, how they are acting, what they are saying, and so on. Does what they are saying make sense, per the above ideas?

Closing the Loop

How does this all help you evaluate information sources? Well, when you collect sources for a project, like a paper, it's a good idea to think about the stories that are being told around the topic, as well as the story you want or need to tell. The sources you collect should have structural, material, and characterological coherence, too. This means that if, in the process of collecting evidence, you find evidence that degrades the coherence of your story, you need to revise your story, just like a scientist must revise or discard their theory in the face of contradictory or discordant evidence.


You can test this method yourselves. Consider a news event that's going on as you read this. Pick two news articles that cover the event, from two different publications. Choose two publications that lie on different parts of the political spectrum. Study those two sources, and ask the following questions:

  1. How are the stories structurally coherent? Or not? That is, do the stories in the two articles make sense? Are they internally consistent?
  2. How are the stories materially coherent? Or not? That is, upon comparing them, do any of them leave out important facts? Do any of them respond to counterarguments (or alternate explanations that fit the data)? Do any of them overlook relevant issues?
  3. Are the characters in the stories characterologically coherent? Who are the characters in the story? Do they behave according to what we know of their values, attitudes, beliefs, ideas, and words? Do the stories present them counter to what we already know of these people?


Being able to evaluate information is important, but when we have been exposed to lessons on doing so, we have often been presented with some kind of framework that asks us to check boxes to see if an information source satisfies pre-existing criteria. For example, such a checklist might ask: Who are the authors? Do they have a good reputation? Is the publisher respected? Is the motive to publish based on profit? Are they selling something with the information they provide?

While that may be an important part of evaluating information, it's also insufficient. A more thorough way to evaluate information is to think about it more broadly and holistically, and to consider how topics are presented differently based on narratives. That is, to think of the stories that are being told and how the information is contextualized. Once we see the story, we have four methods to investigate a piece of information's credibility. We can read laterally; that is, we can read multiple articles on a topic and compare them. Then we can test each story's:

  • argumentative or structural coherence
  • material coherence, and
  • characterological coherence.

By the way, this method can also apply to the stories we tell ourselves or about ourselves or about our beliefs. It's important, that is, to evaluate the information we believe to be true as much as it is to evaluate what others posit to be true.

Regarding lateral reading, I think services like Wikipedia, ChatGPT, and Bard should be included among the sources we consult when reading laterally. That is, if we want to learn more about a topic, we can look at the standard sources, such as news articles or research articles, but we can also refer to Wikipedia or the AI chatbots to sound out what we are reading. The more we read about a topic from a variety of sources, the more likely we are to get a good overview of the stories being told about it.


Fisher, W. R. (1989). Human communication as narration: Toward a philosophy of reason, value, and action. University of South Carolina Press. https://doi.org/10.2307/j.ctv1nwbqtk