One limitation of artificial intelligence tools is that they are known to provide inaccurate answers or "hallucinate" references. To counter this, you should become familiar with fact-checking strategies such as lateral reading.
| Information Type | Fact-Check Strategies |
| --- | --- |
| Factual verification | Is this information true or false? 1) Use lateral reading (see more below): open a new tab and search for the facts from the response. 2) Check the information against a source with expertise on the topic. |
| Logic checks | AI tools are not experts in logic and can make errors; you can see this by asking an AI tool to answer riddles. When asking AI anything that requires "puzzling out" an answer, look for logical inconsistencies in the response. |
| Citation checks | AI tools can sometimes "make up" sources that do not exist. Search Google, Google Scholar, or the Library to confirm the sources exist (for one way to automate this check, see the sketch after this table). |
| Bias exploration | Are there other perspectives missing from the AI's response? Read the response with a critical eye and consider ideas that might be missing. One way to avoid one-sided responses from generative AI is to craft prompts that accurately reflect what you want to know. For example, a Google search for "pros of organic foods" will return biased results that focus only on the positive aspects of organic foods; a better query is "positives and negatives of organic foods". Use well-known fact-checking sites to verify controversial claims. |
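If you would like to script part of a citation check, the sketch below shows one minimal approach, assuming Python with the `requests` package and the public Crossref REST API (api.crossref.org); the article title shown is a hypothetical placeholder. A missing match does not prove a citation is fake, but it is a signal to keep digging in Google Scholar or the Library.

```python
# Minimal sketch: ask the Crossref REST API whether a cited title appears in its
# index of scholarly works. The title below is a hypothetical placeholder.
import requests

def citation_found(title: str, rows: int = 5) -> bool:
    """Return True if Crossref lists a work whose title matches `title` exactly."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    wanted = title.strip().lower()
    for item in resp.json()["message"]["items"]:
        # Crossref returns each work's title as a list of strings.
        if any(t.strip().lower() == wanted for t in item.get("title", [])):
            return True
    return False

if __name__ == "__main__":
    print(citation_found("A Hypothetical Study That An AI Tool Cited"))
```

An exact title match is deliberately strict; in practice you may want to compare titles more loosely and also check the listed authors and year against the AI's citation.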
See the videos below from University of Maryland Libraries for specific examples of fact-checking information from AI.
Lateral reading asks you to open a new tab to look for more information on a source or claim:
Watch the video below for a step-by-step example!