How to recognize deepfakes and AI-generated images

While artificial intelligence (AI) has made a positive impact on technology in many ways, one downside is the proliferation of AI-generated images.

AI-generated media, commonly called deepfakes, can create realistic images of fake events, which sows confusion and creates public distrust. Deepfakes can even impact legal proceedings and the legal industry at large.

Given the prevalence of deepfakes, legal professionals have no choice but to learn how to deal with them.

This requires understanding what AI-generated images and deepfakes are, the problems they can cause, and how to spot them. Let’s talk about it.

What are AI-generated images and deepfakes?

Artificial intelligence generally refers to digital tools that can replicate human intelligence for the purpose of problem-solving. We have some ideas about what this looks like based on science fiction movies, but the reality is generally much more mundane.

In practice, AI tends to work by analyzing large amounts of data, recognizing patterns, then acting on that pattern recognition.

This is a very useful technology. AI has proven to work well for automating many tasks and streamlining workflows, while also increasing accuracy. You might even use AI-assisted tools in your legal practice today.

AI-generated images are a more recent phenomenon: images or videos created by AI rather than captured by a camera or produced by a human artist.

This process can be used to alter and improve existing images, such as using a red-eye reduction tool to correct retinal glare or using AI-assisted software to colorize and restore an old photo.

AI-based tools can also create entirely new images that incorporate various elements from different sources, such as works of art.

The real dangers of AI-generated images arise with deepfakes: still images, audio files, or videos whose audio or visual components have been manipulated to depict fake events. Common examples include politicians or celebrities making statements or taking actions they never actually made or took.

Deepfakes were first created in 2017 when a user of the Reddit platform posted doctored images that inserted celebrity faces into explicit materials. Since then, deepfakes have become more sophisticated…and more problematic.

What problems do deepfakes cause?

Deepfakes can be used for a wide array of nefarious purposes like reputation smearing, election manipulation, or hoaxes.

They can also be used for financial fraud or phishing, such as with deepfaked voice messages that appear to come from authority figures. Deepfakes could also be employed for extortion or blackmail, where the threat is the release of a fake but nonetheless harmful image.

An even broader issue with deepfakes is the societal mistrust they create in all sources of information, even legitimate news sources.

Persons caught in misdeeds in legitimate photos, videos, or audio recordings can now claim that the media in question is a deepfake.

Plus, a widespread inability to discern fact from fiction could result in doubt surrounding almost all events. For instance, would you hesitate to act on news of a major emergency in your area if you had already seen several similar incidents turn out to be hoaxes?

Legal industry challenges with deepfakes

Deepfakes pose challenges for both the legal system and the legal profession itself.

If evidence can be fabricated with deepfakes, attorneys and courts need to be able to recognize these fabrications.

On top of that, law firms and legal professionals themselves can be victimized by deepfake scams just like any other business or professional.

Authentication of evidence will be more challenging with the prevalence of deepfakes. In a 2019 child custody dispute in Britain, a woman submitted a doctored audio recording that made it sound as if her husband was making threats. The fake was only revealed when the husband’s attorneys studied the metadata on the recording.

While the doctored audio in that case was a “cheapfake” (the techniques used were less sophisticated than AI), it still highlights the challenge deepfakes pose in trial courts. Courts may need heightened authentication requirements before accepting audio, video, or photographic material as evidence.
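One simple, low-tech safeguard that supports this kind of authentication is cryptographic hashing: recording a file's hash at intake and re-hashing it later confirms the file has not been altered in the interim (it does not, of course, prove the original recording was genuine). A minimal sketch in Python using only the standard library; the file below is a throwaway stand-in for a piece of media evidence:

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a temporary file standing in for audio evidence.
with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as f:
    f.write(b"stand-in for an audio recording")
    path = f.name

digest_at_intake = sha256_of_file(path)
digest_later = sha256_of_file(path)
print(digest_at_intake == digest_later)  # True: the file is unchanged
os.remove(path)
```

Any edit to the file, even a single byte, produces a different digest, which is why hashes are routinely logged as part of a chain of custody for digital evidence.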

In addition, expert testimony challenging such evidence may become more common.

Law firms themselves can also be targets for cybercriminals and scammers. A legal practice is tasked with safeguarding a great deal of confidential and sensitive information, including client and case-related information. This means that law firms must also be alert to the potential for cybercriminal attacks related to deepfakes.

How legal professionals can recognize deepfakes

In light of the risk posed by deepfakes, legal professionals have no choice but to develop the skills to recognize them.

Here are a few telltale signs that you may be viewing a deepfake image or video.

Unusual eye and body movements

AI has a particularly difficult time replicating natural blinking. It also struggles to replicate natural eye movements, since a real speaker’s eyes tend to track the person they are addressing.

A lack of blinking or unnatural eye movements are common signs that you’re looking at a deepfake video.

Unnatural body shapes

Another common sign of a deepfake is an unnatural body shape, such as a person who appears distorted when turning to the side or moving their head. The person’s body may also be positioned inconsistently with their head, a common mistake since deepfake technology often focuses on faces rather than bodies.

Additional red flags are hair and teeth that do not appear realistic.

Hair may look abnormal, with no frizz or visible individual strands. Teeth may lack visible outlines between individual teeth, or teeth may appear in strange places.

For AI-generated photos, hands will often give them away. Look for deformities in the hands and fingers, and in group photos, look for extra limbs peeking out from behind people that don’t seem to belong.

Inconsistent color or lighting

Inconsistent lighting is another giveaway for deepfakes, such as shadows in strange places or light sources that move without explanation.

You may also see mismatches in color, such as unusual skin tones or blotches. In videos, watch for items that seem to change color without explanation, such as a person in the background who goes from wearing a blue shirt to a gray one.

Approaching deepfakes with the right mindset

The first step in recognizing any deepfake is acknowledging that it’s possible.

At the same time, you don’t want to assume that every piece of video, audio, or photographic evidence is a fake.

As a legal professional, you should understand the real possibilities of AI-generated media so that you can develop a healthy, realistic skepticism about what you see and hear. There’s no need for paranoia that you can’t trust anything you see, but having absolute faith in every piece of media is just as unrealistic.

Like it or not, deepfakes are simply a reality of the modern digital landscape. Legal professionals would do well to accept this reality and learn to deal with the potential deepfake threat — before suffering the consequences.


  • Mike Robinson

    After a fifteen-year legal career in business and healthcare finance litigation, Mike Robinson now crafts compelling content that explores topics around technology, litigation, and process improvements in the legal industry.
