
Faculty Guide to Generative AI

This will be a disappointment to many who come to this page in search of a foolproof way to detect the use of AI: AI detection software (including software that itself uses AI) does not work well and is easy to trick. Pace ITS does not recommend or support AI detection tools. Neither do we here in the library.

This does not mean that students will be able to have gen AI do their assignments with impunity. As we've pointed out elsewhere in this guide, AI may be able to write a paper (or a lab report, or an annotated bibliography, etc.), but it can't write a good paper without a great deal of human guidance. The thought, organization, and editing a student would need to transform an AI-generated paper into an A paper is comparable to the work of simply writing the A paper in the first place.

AI does not have the capacity for human-level analysis, original thought, or nuance -- not yet, and probably not for a long time. This kind of violation of academic integrity comes with its own punishment: the final grade.

The Best AI Detection: Information Literacy

The best detection tool for picking out AI-generated text and images is almost certainly the human brain. Some research suggests that we do significantly better than chance at detecting a shift from human-generated to AI-generated copy, and that results improve with practice and training. (Other research warns against overconfidence, however: see Jakesch et al. 2022, for one.) Research on political deepfakes also indicates that analytical thinking is positively associated with the ability to correctly identify generated or altered content, as is a high level of political engagement.

Identifying AI in writing, visual images, and video, then, rests on the same foundation as identifying human-generated mis-, dis-, or malinformation: critical analysis of content and context. The same tools that help us spot fake news articles or social media posts created by humans will help spot fakes created by AI.

For a more in-depth look at the subject of detecting AI, and what university instructors can do about it, look at "Generative AI detection in higher education assessments" (Ardito 2024).

Following is a quick take on a few warning signs of misleading materials, along with some strategies for confirming or disproving a text's credibility. For a deeper treatment, explore our research guide Misinformation, Disinformation, and Malinformation.

Signs of Misinformation

  • Language: Hyperbolic, inflammatory, or extreme language is a red flag for misinformation. AI can err on the opposite extreme -- bland and noncommittal language.
  • Authorship: Is there a named author? Does that person have a track record?
  • References: Does the author cite sources for the information reported? Can you find them yourself?

Strategies for Identifying Misinformation

  • Lateral reading: Are other outlets reporting on the same issue? How are they describing it?
  • Fact-checking: Conduct your own research on the factual claims of the article. AI is prone to making up its facts.
  • Examine the logic: Does the article display circular reasoning, conflate causation and correlation, or rely on other logical fallacies?

Other Signs of Generative AI

Text

Signs of bot-written text include repetition, vagueness, and inaccuracies. AI-generated text will still fool some people (and AI detectors), and it does not take much human editing to bring it up to a human-produced standard, so it's important not to rely solely on your gut sense of whether a human or a machine wrote a given passage.
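To see why a signal like repetition makes weak evidence on its own, consider how you might measure it. Below is a toy Python sketch of our own devising -- not a real detector -- that reports the fraction of repeated word trigrams in a passage. Repetitive text scores high, but so does plenty of perfectly human writing, which is exactly the problem.

```python
# Toy heuristic only: repeated word trigrams as a rough proxy for the
# repetition sometimes seen in bot-written text. This is an illustration
# of why such signals are weak, NOT a reliable detector.
import re
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    """Return the fraction of word trigrams that occur more than once."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = ("The results are significant in many ways. "
          "The results are significant for future research.")
print(f"{repeated_trigram_ratio(sample):.2f}")  # noticeably above zero
```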

CNN: Bot or not? How to tell when you’re reading something written by AI (reporting on peer-reviewed research by Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, and Chris Callison-Burch at the University of Pennsylvania)

NPR: How generative AI may empower political campaigns and propaganda

Images & Video

As of now, AI still has trouble with some representations, especially of human beings. (AI's images of people often have the wrong number of teeth or fingers. Look back at the image of the child generated by starryai on the welcome page of this guide for an example of AI's problem with hands.) Generated images and video continue to improve, however, and AI detectors for visual material are as flawed as those that try to identify AI text.

Business Insider: How to Detect AI-Generated Content, Including ChatGPT and Deepfakes

NPR: 4 tips for spotting deepfakes and other AI-generated images

Tools to Detect AI

Tools have sprung up that claim to detect the use of artificial intelligence in text, image, and video. Unfortunately, they are playing catch-up, and they make mistakes both in ascribing AI help to human-created material and in missing AI use in generated material.

MIT Technology Review: AI-text detection tools are really easy to fool

Digital Trends: Even OpenAI has given up trying to detect ChatGPT plagiarism


Detecting AI in Text

These and other tools are worth a look. We checked the accuracy of several detection tools using one text example generated by ChatGPT and one written by a human. Results were spotty. (Interestingly, the AI detector that did the worst job in our tests was the one from ChatGPT's creator, OpenAI. The company seems to have agreed, as it has since taken the tool offline.)
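If you want to run a similar spot check yourself, the logic is simple: feed a detector texts whose origins you already know, and count both kinds of error. Here is a minimal Python sketch; the detector function is a hypothetical stand-in for whichever tool you are testing, and the two text lists are your own known-human and known-AI samples.

```python
# Minimal sketch for spot-checking any AI-text detector on labeled samples.
# `detector` is a hypothetical stand-in: it should return True when the
# tool under test flags a text as AI-generated.
from typing import Callable

def error_rates(detector: Callable[[str], bool],
                human_texts: list[str],
                ai_texts: list[str]) -> tuple[float, float]:
    """Return (false positive rate, false negative rate)."""
    false_positives = sum(detector(t) for t in human_texts)   # humans flagged as AI
    false_negatives = sum(not detector(t) for t in ai_texts)  # AI passed as human
    return false_positives / len(human_texts), false_negatives / len(ai_texts)

# A trivially bad "detector" that flags everything as AI:
fpr, fnr = error_rates(lambda text: True, ["a human essay"], ["a chatbot essay"])
print(fpr, fnr)  # 1.0 0.0 -- every human text accused, no AI text missed
```

Both numbers matter: a tool that flags everything misses no AI use but accuses every honest student, while a tool that flags nothing does the reverse.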

Detecting AI in Images and Video

Detecting AI-created images with AI is in its infancy, and of course it is a step behind the image-generation software. A major roadblock is that, since every AI image generator uses different methods to produce images, a detector that could catch all of them would need to be trained on every possible algorithm. One project that is available for public beta testing is the Umm Maybe AI image detector on Hugging Face, which is intended to detect AI in art.
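If you'd like to experiment with that detector yourself, the sketch below shows one way to call it through the Hugging Face transformers image-classification pipeline, assuming the public model id umm-maybe/AI-image-detector and a local image file of your own choosing. Treat the scores as a curiosity, not evidence.

```python
# Minimal sketch, assuming the model id "umm-maybe/AI-image-detector" on the
# Hugging Face Hub; requires the transformers library (with a PyTorch
# backend) and Pillow installed.
from transformers import pipeline

detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

# "suspect_image.png" is a placeholder for your own local file (or a URL).
for result in detector("suspect_image.png"):
    print(f"{result['label']}: {result['score']:.2f}")  # confidence per label
```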

Though technologies for detecting the use of AI in video (aka "deepfakes") have been developed, only one appears to be available for public use. Microsoft created the Video Authenticator in 2020, but privacy and bias concerns have limited its rollout; more recently, Intel demonstrated FakeCatcher, which has not yet launched as a product.

You can try the open-source DeepFake-o-meter, a project of researchers at the University at Buffalo, by going to their project homepage.