
Generative AI at UM-Dearborn


When using generative AI tools like ChatGPT or UM-GPT, you always need to verify the information, because tools like these will sometimes "hallucinate," or make up information. They will present false information just as confidently as accurate information. You should apply some of the same strategies you use to evaluate information in your everyday life when evaluating the output of generative AI tools.


One fact-checking strategy, SIFT, can be applied to information you generate through UM-GPT and other text-based generative AI tools.



The SIFT Method

SIFT stands for four moves: Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original context.

Stop

What are you thinking after reading this source? What do you know so far?

First, when you hit a page or post and start to read it, STOP. Ask yourself whether it is important that you take a deep dive into the information provided. Are you using it for a class assignment or your job? Does false or misleading information have the potential to cause harm or have serious consequences? You might also want to stop and check your emotions here. Are you feeling upset or angry about what you read? Why? Can you set aside your emotions temporarily to investigate the claims of the information source?


For AI-generated content, you might also want to STOP at this point and ask yourself: Do I want to engage with this content? Is it worth my time, or do I have ethical concerns? If you are inputting a prompt and potentially using the information generated, also ask, "Is this a good use case for an LLM or generative AI?" You should also consider whether a prompt you enter into a tool like UM-GPT might generate a biased answer, and be prepared for that.

Investigate the Source

What can you find out about the publisher/distributor/website? Who is the author?

You want to know what you're reading before you read it. You don’t have to do a Pulitzer prize-winning investigation into a source before you engage with it. But if you’re reading a piece on economics by a Nobel prize-winning economist, you should know that before you read it. Conversely, if you’re watching a video on the many benefits of milk consumption that was put out by the dairy industry, you want to know that as well.

This doesn’t mean the Nobel economist will always be right and that the dairy industry can’t be trusted. But knowing the expertise and agenda of the source is crucial to your interpretation of what they say. Taking sixty seconds to figure out where media is from before reading will help you decide if it is worth your time, and if it is, help you to better understand its significance and trustworthiness.

When it comes to AI, this can be difficult. While some generative AI tools will cite their sources, most will not. What are the implications of this? If a tool does include cited sources, can you verify that they are real? You now know that generative AI tools can provide false or misleading information. Should you still find them reliable? What are the consequences if a particular part of the output is incorrect or biased?


This is also a good point to educate yourself about an LLM or AI tool: find out what the particular one you are using does well and where its weaknesses lie.

Find Better Coverage

Can you find this same information from other credible sources?

Think about the actual claim your source is making. You'll need to figure out whether the claim is true or false, and whether the claim reflects a consensus viewpoint rather than a contested or controversial one. In this case, your best strategy may be to ignore the source that reached you and look for trusted reporting or analysis on the claim. For many topics, there may be a better source of information than UM-GPT or another generative AI tool. Can you find something better, with verifiable information?

Understanding the context and history of a claim will help you better evaluate it and form a starting point for future investigation.

Trace Claims, Quotes, and Context

What is the original published source of this information? Does it match what your source says? Are other information sources linked to directly?

Much of what we find on the internet, or generate through LLMs, has been stripped of its original context.

When in doubt, see if you can trace the claims back to their original source, such as the original research paper or the full post. You can do this by looking at citations, searching in library databases, or looking for other sources on the open web that discuss the same issue or claim. Not all LLMs will accurately cite sources, so examine any citations given. Can you locate them, or find better sources than these?

When outputs from generative AI tools lack sources, you need to do your own research to verify the information. Can you find a source with similar claims that you can apply this SIFT method to?


Credits: Caulfield, M. (2019, June 19). SIFT (The Four Moves). Hapgood.
