The best detection tool for picking out AI-generated text and images is almost certainly the human brain. Research suggests that we do significantly better than chance at detecting a shift from human-generated to AI-generated copy, and that results improve with practice and training. Research on political deepfakes also indicates that analytical thinking is positively associated with the ability to correctly identify generated or altered content, as is a high level of political engagement.
Identifying AI in writing, visual images, and video, then, rests on the same foundation as identifying human-generated mis-, dis-, or malinformation: critical analysis of content and context. The same tools that help us spot fake news articles or social media posts created by humans will help spot fakes created by AI.
What follows is a quick overview of a few warning signs of misleading materials, along with some strategies for confirming or disproving a text's credibility. You can explore the topic in greater depth in our research guide Misinformation, Disinformation, and Malinformation.
Signs of bot-written text include repetition, vagueness, and inaccuracies. Even so, AI-generated text will still fool some people (and AI detectors), and it does not take much human editing to bring it up to a human-produced standard, so it's important not to rely solely on your sense of whether a human or an AI wrote something.
CNN: Bot or not? How to tell when you’re reading something written by AI; reports on peer-reviewed research done by Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, and Chris Callison-Burch at the University of Pennsylvania
NPR: How generative AI may empower political campaigns and propaganda
As of now, AI still has trouble with some representations, especially of human beings. (AI's images of people often have the wrong number of teeth or fingers. Look back at the image of the child generated by starryai on the welcome page of this guide for an example of AI's problem with hands.) Generated images and video continue to improve, however, and AI detectors for visual material are as flawed as those that try to identify AI text.
Business Insider: How to Detect AI-Generated Content, Including ChatGPT and Deepfakes
NPR: 4 tips for spotting deepfakes and other AI-generated images
Tools have sprung up that claim to detect the use of artificial intelligence in text, image, and video. Unfortunately, they are playing catch-up, and they make mistakes both in ascribing AI help to human-created material and in missing AI use in generated material.
MIT Technology Review: AI-text detection tools are really easy to fool
Digital Trends: Even OpenAI has given up trying to detect ChatGPT plagiarism
These and other tools are worth a look. We checked the accuracy of these detection tools using one text example generated by ChatGPT and one written by a human. Results were spotty. (Interestingly, the AI detector that did the worst job in our tests was the one from ChatGPT's creator, OpenAI. The company must have agreed, as it has since taken the tool offline.)
Detecting AI-created images with AI is in its infancy, and of course, it is a step behind the image-generation software. A major roadblock is that, because every AI image generator uses different methods to produce images, a detector that could catch all of them would need to be trained on every possible algorithm. One project that is available for public beta testing is HuggingFace's Umm Maybe AI image detector, which is intended to detect AI in art.
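For readers curious what using such a detector looks like in practice, here is a minimal sketch of interpreting the output of an image-classification model like the one above. The label names ("artificial" and "human") and the loading snippet in the comments are illustrative assumptions; check the model card for whatever checkpoint you actually load, and remember that, as noted above, these detectors make mistakes in both directions.

```python
# Sketch: turning the raw scores from an image-classification model into a
# cautious verdict. Loading a real detector with the Hugging Face
# `transformers` library would look roughly like:
#
#   from transformers import pipeline
#   detector = pipeline("image-classification",
#                       model="umm-maybe/AI-image-detector")
#   scores = detector("photo.jpg")  # list of {"label": ..., "score": ...}
#
# The helper below works on that list-of-dicts shape and needs no download.

def interpret(scores, threshold=0.5):
    """Return a human-readable verdict from classifier scores.

    `scores` is a list of {"label": str, "score": float} dicts. If no label
    clears `threshold`, the result is reported as inconclusive rather than
    forced into a yes/no answer.
    """
    best = max(scores, key=lambda s: s["score"])
    if best["score"] < threshold:
        return "inconclusive"
    return f'likely {best["label"]} ({best["score"]:.0%} confidence)'

# Example with mocked scores (hypothetical label names, no model needed):
mock = [{"label": "artificial", "score": 0.92},
        {"label": "human", "score": 0.08}]
print(interpret(mock))  # -> likely artificial (92% confidence)
```

The "inconclusive" branch is deliberate: given the error rates described in this guide, treating a low-confidence score as a firm answer is exactly the mistake to avoid.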
Though technologies for detecting the use of AI in video (aka "deepfakes") have been developed, only one appears to be available for public use. Microsoft created the Video Authenticator in 2020 but privacy and bias concerns have limited its rollout; more recently, Intel demonstrated FakeCatcher, which has not yet launched as a product.
You can try the open-source DeepFake-o-meter, a project of researchers at the University of Buffalo, by going to their project homepage.