For many, these are scary words.
First, what are deepfakes and digital twins? Simply explained, digital twins are virtual (or digital) replicas of people, buildings, farms, etc. These twins are normally built in collaboration with the person or entity they represent. Deepfakes, however, are unauthorized digital twins developed by malicious creators – and in recent history, such manufactured products have become so good that the human eye can’t distinguish between what’s real and what’s fake.
Earlier this year, I was a guest on Bernard Marr’s podcast. Bernard talked at length about his own digital twin, which has been trained to interact with people online as him and even answer emails on his behalf. If he’s not available, you can still interact with his digital twin, which mimics, to some degree, what he would normally say and share. According to Marr, his digital twin has drastically increased his bandwidth.
Let’s look at the positive side of digital twins: for powerhouses like Marr, or celebrities eager to engage and connect with their legions of fans, the investment into a digital twin would definitely increase that opportunity.
But there seems to be more concern and conversation around the not-so-positive side of digital twins. When you see former President Barack Obama on video saying things he’s never said, when you see Tom Cruise playing rock-paper-scissors on the street, when you see Ukrainian President Zelensky urging his people to “lay down your weapons and go back to your families,” is it any wonder people are so concerned about the malicious use of this technology?
How do we know if the person we’re looking at or watching – or worse, the person we think we’re talking to – is actually the person we believe we’re connected to?
How do we know if it’s them or a deepfake?
Increasingly, we can’t tell the difference between deepfakes and reality. AI has learned to understand how someone speaks. How their body moves. What their tics and tells are.
And that’s concerning.
Tech companies aren’t shutting their eyes to this. Two years ago, Microsoft launched a video authenticator tool designed to determine whether an image or video has been artificially manipulated, detecting the blending boundary of a deepfake and subtle elements the human eye may not see. This helps authenticate videos and images that are so convincing they’re easily and quickly believed.
As AI gets better, so must our detection systems. It’s a never-ending cycle: uncovering the unique patterns AI leaves behind so we can improve our detection. We have to keep working so we can validate authentic content.
It’s a long game, and it’s a race at the same time. For now, we start by authenticating one another.