Artificial Intelligence and tools like ChatGPT are rapidly developing technologies, which can affect how they are used and the policies that govern them. This guide will continue to be updated as new resources and ideas are published. Students should check with their professors and relevant university departments for guidelines.
As with any evolving technology, these tools have weaknesses that should be considered and that affect how they can be used. Because the tools are continuously developed and trained, these limitations may change over time.

Generative AI tools such as ChatGPT currently have several limitations, including:
| Limitation | Example |
|---|---|
| Inaccuracies or "hallucinations" | There are many reports of false information in responses. Tools built on large language models use words to "predict" plausible text rather than retrieve verified facts, so they may make mistakes. EXAMPLE: ChatGPT can produce "fake" or "made-up" citations when asked to provide a list of sources on a topic. |
| Not up to date | Unless the tool is actively connected to the web, it is not trained on current information, which limits its responses. EXAMPLE: When asked about current trends on a topic or who the current President is, some tools cannot provide that information. |
| Bias of the training material | Because the tools are trained on materials written by biased humans, their responses may also be biased in some way. EXAMPLE: If asked to create images of CEOs or prisoners, the people in the images will reflect stereotypes like those in modern media, which perpetuates harmful biases. |
| Transparency of information/source evaluation | We do not know exactly what information is in the training data, and the tools are not "searching" that data the way a search engine or database does; the content is completely stripped of context and authority. EXAMPLE: When you search on Google for current trends in legal scholarship, you can evaluate whether the author is a legal scholar, a law student, or someone completely removed from the field. |
| Information behind paywalls | Generative AI tools do not have access to information behind paywalls, which is frequently higher-quality than what is freely accessible to tools that search the web to provide responses. |
| Limits on conversations | Because these tools can be used for nefarious purposes, many generative AI tools have "guardrails" that prevent them from answering certain types of questions, including those related to politics, "nonsense," and other sensitive topics. |