What should you do when you encounter online content (an image, audio clip, video or text) that you suspect could be AI-generated to spread disinformation? These tips draw on the expertise of the EDMO network.

A quick checklist for reliable information

– Check fact-checking websites: has the same content already been verified by independent fact-checkers? If not, you can often submit the content to fact-checkers for verification through dedicated channels (e.g. messaging-app numbers)

– Check traditional media: is the news conveyed by the content confirmed by other reliable independent sources?

– Check the source of the content for warning signs (e.g. is it an anonymous account created or activated only a few days ago? Does it consistently share the same kind of AI-generated content? Does it show no interaction in its comment sections?)
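The account red flags above can be captured in a simple heuristic. The sketch below is purely illustrative: the function name, inputs and thresholds (14 days, 80%) are assumptions, not values used by any real platform or by EDMO.

```python
from datetime import datetime, timedelta, timezone

def account_red_flags(created_at, posts_total, ai_content_posts, comments_received):
    """Collect warning signs for a suspicious account (illustrative thresholds only)."""
    flags = []
    # Red flag 1: very recently created account
    age_days = (datetime.now(timezone.utc) - created_at).days
    if age_days < 14:
        flags.append("account created only days ago")
    # Red flag 2: feed consists almost entirely of AI-generated content
    if posts_total and ai_content_posts / posts_total > 0.8:
        flags.append("feed consists almost entirely of AI-generated content")
    # Red flag 3: nobody interacts with the account's posts
    if comments_received == 0:
        flags.append("no interaction in comment sections")
    return flags
```

None of these signals is proof on its own; the more flags an account raises, the more scepticism its content deserves.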

– Use tools to detect AI-generated content but do not rely exclusively on them: they are known to make mistakes.
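One way to avoid over-relying on a single detection tool is to compare verdicts from several and treat disagreement as "inconclusive". The sketch below assumes hypothetical detector scores between 0 (real) and 1 (AI-generated); the function, tool names and cut-offs are invented for illustration.

```python
def triangulate(detector_scores, high=0.9, low=0.1):
    """Combine scores (0 = real, 1 = AI-generated) from several hypothetical detectors.

    Report a verdict only when the tools agree; otherwise stay undecided,
    since individual detectors are known to make mistakes.
    """
    if not detector_scores:
        return "no evidence"
    if all(score >= high for score in detector_scores.values()):
        return "likely AI-generated"
    if all(score <= low for score in detector_scores.values()):
        return "likely authentic"
    return "inconclusive - verify by other means"

# Two hypothetical tools disagree, so no confident verdict is given:
print(triangulate({"tool_a": 0.97, "tool_b": 0.12}))
```

Even unanimous tool agreement should be weighed together with the other checks in this list, not taken as a final answer.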

AI-generated images

– Check the image for details that hint at it not being real, such as hands, fingers, eyes, shadows, shapes, colors, background details, etc. Current technology often makes mistakes (improperly called “hallucinations”) in rendering those details.

– Use reverse image search on search engines to find the origin of the image, i.e. when and where it appeared online for the first time.
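Reverse image search usually just means pasting an image URL into a search engine's "search by image" page. As a sketch, the helper below builds such lookup URLs; the endpoints shown are widely used entry points, not official APIs, and may change without notice.

```python
from urllib.parse import quote

def reverse_search_urls(image_url):
    """Build reverse-image-search URLs for a publicly reachable image.

    The endpoints below are commonly used web entry points (assumptions,
    not stable APIs); opening them in a browser starts the search.
    """
    # Percent-encode the whole URL so it survives as a query parameter
    encoded = quote(image_url, safe="")
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "tineye": f"https://tineye.com/search?url={encoded}",
    }
```

Checking more than one engine helps, because each indexes a different slice of the web and may surface an earlier appearance of the image.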

AI-generated audio

– Listen carefully to the audio, paying attention to word choice, intonation, breathing, unnatural pauses and other elements that may reveal anomalies.

AI-generated video

– Check the quality of the video, e.g. lip-sync, out-of-focus contours, unrealistic features, etc., to spot alterations.

– Check the synchronization of audio and video, to spot the possible addition of fake audio to a real video.

– Use reverse image search to find the original video.

AI-generated text

– Do not take the accuracy and veracity of answers from LLMs for granted: their training data may not be up to date, and they are known to make mistakes and occasionally invent facts out of thin air (“hallucinations”).