Student Guide to Generative AI (ChatGPT)

This research guide provides definitions, information, and resources to help students understand the basics of generative AI and ChatGPT, including its concerns, limitations, and opportunities.

Disclaimer

Artificial intelligence and tools like ChatGPT are rapidly developing technologies, which can affect how they are used and the policies that govern them. This guide will continue to be updated as new resources and ideas are published. Students should check with their professors and relevant university departments for current guidelines.

Limitations of Generative AI

As with any evolving technology, generative AI has weaknesses that need to be considered and that affect how it can be used. Because these tools are continuously developed and trained, the limitations below may change over time.

Generative AI tools such as ChatGPT have several notable limitations, including:

  • Inaccuracies or "hallucinations":
    • There are many reports of false information in responses, including "fake" or "made-up" citations when ChatGPT was asked to provide a list of sources on a topic. Tools built on large language models predict likely sequences of words rather than retrieving verified facts, so they can present mistakes with confidence.
  • Not up to date:
    • Unless a tool is actively connected to the web, it knows only the information in its training data, which has a cutoff date; its responses may be out of date.
  • Bias of the training material:
    • Since these tools are trained on materials written by biased humans, their responses may also be biased in some way.
  • Transparency of information/source evaluation:
    • We are not able to know exactly what a tool was trained on, which makes it difficult to determine the validity of that information.
    • Not all tools provide sources for the information in their responses, so the usual source-evaluation practices are hard to apply.
  • Information behind paywalls:
    • Even tools that do access the web for material cannot make use of quality data that sits behind a paywall.
  • Limits on conversations:
    • Because these tools can be used for nefarious purposes, many generative AI tools have "guardrails" that prevent them from answering certain types of questions, including those related to politics, "nonsense," and other sensitive topics.