Recently, a personal project required me to identify several Twitter accounts. One of the first steps is usually reverse-searching the profile picture. My reverse image search method is a three-step process:
Tip: check every image size and dimension that is available.
- TinEye Reverse Image Search (Chrome browser extension)
- Google and Bing Reverse Image Search
- Yandex Reverse Image Search
In most cases, one of these three searches turns up something that leads to the next step. If not, I check the EXIF data when possible and/or use Archive.org to find deleted images from that profile.
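The lookup steps above can be sketched as simple URL builders. The endpoint patterns below are assumptions based on each engine's public URL format at the time of writing; they can and do change, so treat this as a convenience sketch rather than a stable API:

```python
from urllib.parse import quote, urlencode

def reverse_search_urls(image_url: str) -> dict:
    """Build reverse-search lookup URLs for a single image URL.

    Endpoint patterns are assumptions and may change without notice.
    """
    q = quote(image_url, safe="")  # percent-encode the image URL for query strings
    return {
        "tineye": f"https://tineye.com/search?url={q}",
        "google": f"https://www.google.com/searchbyimage?image_url={q}",
        "bing": f"https://www.bing.com/images/search?view=detailv2&iss=sbi&q=imgurl:{q}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={q}",
        # Wayback Machine availability API, useful for finding removed images
        "wayback": "https://archive.org/wayback/available?" + urlencode({"url": image_url}),
    }

for engine, url in reverse_search_urls("https://pbs.twimg.com/profile_images/example.jpg").items():
    print(engine, url)
```

Opening each URL in a browser tab covers the whole three-step routine (plus the Archive.org fallback) in one pass.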
Remember: Twitter, Facebook, and Instagram all strip EXIF data from uploaded images.
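You can verify this yourself. A quick way to see whether a downloaded picture still carries metadata is to scan the JPEG byte stream for an EXIF (APP1) segment. A minimal stdlib sketch that only walks segment markers, without parsing individual tags:

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment."""
    if not data.startswith(b"\xff\xd8"):  # JPEG SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # not a valid segment marker; bail out
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: no more metadata segments
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        # APP1 segments that hold EXIF start their payload with "Exif\0\0"
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus segment body
    return False
```

Run this against the same photo before and after uploading it to one of these platforms: the re-downloaded copy should come back `False`.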
For this specific project, my normal strategy turned up nothing. In my experience, that makes it very likely these are AI-generated images. Confirming with 100% certainty that a photo was generated by AI is challenging, but examining the photos closely can reveal clues and significant red flags.
Methods of Creating Deepfake Pictures
StyleGAN, a generative adversarial network released by Nvidia in 2018 and since open-sourced, is the most popular and widely available method of creating AI-generated images. It makes it straightforward for any user to generate a profile picture that is 100% unique.
StyleGAN2, released in February 2020, produces even more realistic images by removing the blemishes and artifacts that were obvious giveaways in the original StyleGAN.
How to Detect if an Image is Auto-Generated by GAN
Several tools, such as Fastai-based classifiers and paid software, can help identify whether an image is auto-generated. I'll cover those tools in a future article. Even without them, if you're familiar with the indicators and flags common to generated images, you can make a well-educated guess.
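One cheap, weak signal worth checking before any visual inspection: the public StyleGAN face models output square 1024×1024 images, so an avatar whose original upload is exactly that size deserves a closer look. A stdlib sketch that reads the dimensions straight from a PNG header (illustrative only; a square power-of-two size is a hint, never proof, and the helper names here are my own):

```python
import struct

def png_size(data: bytes):
    """Return (width, height) from a PNG byte stream, or None if it isn't a PNG."""
    if len(data) < 24 or data[:8] != b"\x89PNG\r\n\x1a\n":
        return None
    # The IHDR chunk directly follows the 8-byte signature:
    # 4-byte length, b"IHDR", then big-endian uint32 width and height.
    return struct.unpack(">II", data[16:24])

def stylegan_sized(data: bytes) -> bool:
    # StyleGAN/StyleGAN2 face models commonly emit exact 1024x1024 squares.
    return png_size(data) == (1024, 1024)
```

Combine this with the manual red flags (asymmetric earrings, warped backgrounds, melted teeth) rather than relying on it alone.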
Using the Python StyleGAN2 code on GitHub, I put together an interactive "Which person is real?" quiz, with the telltale indicators marked on each fake image, to help you identify generated images in the future.