Startup can identify deepfake videos in real time
Real-time deepfakes are no longer limited to billionaires, public figures, or people with extensive online presences. Mittal’s research at NYU, with professors Chinmay Hegde and Nasir Memon, suggests a challenge-based approach to blocking AI bots on video calls: participants would have to pass a kind of video CAPTCHA test before joining.
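To make the idea concrete, here is a minimal, hypothetical sketch of what such a challenge-response flow could look like. The challenge prompts, function names, and verification logic are illustrative assumptions on my part, not the researchers’ actual protocol; a real system would analyze video frames rather than a simple action label.

```python
import secrets
import time

# Illustrative challenge prompts -- assumed, not from the NYU research.
CHALLENGES = ["turn head left", "cover your face with your hand", "wave"]


def issue_challenge():
    """Pick an unpredictable challenge so a pre-rendered deepfake
    cannot have the correct response prepared in advance."""
    return secrets.choice(CHALLENGES), time.monotonic()


def verify_response(challenge, issued_at, performed_action, responded_at,
                    timeout=5.0):
    """Accept only the correct action performed within a short window.

    In this sketch the caller's action is a plain label; a production
    system would instead run vision models on the live video feed.
    The timeout matters because generating a convincing response in
    real time may lag behind what a live human can do."""
    if responded_at - issued_at > timeout:
        return False  # too slow to count as a live response
    return performed_action == challenge
```

The key design point is unpredictability plus a deadline: the bot must improvise a correct, timely response rather than replay prepared footage.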

As Reality Defender works to improve the detection accuracy of its models, Colman says access to more data is a key challenge to overcome – a common refrain among the current crop of AI-focused startups. He is confident that further partnerships will fill these gaps and, without giving details, suggests several new deals are likely in the next year. After ElevenLabs was linked to a fake voice call imitating US President Joe Biden, the AI audio startup struck a deal with Reality Defender to curb potential abuse.

What can you do now to protect yourself from video call scams? Just like WIRED’s top advice for avoiding AI voice call scams, it’s important not to be overconfident in your ability to spot video deepfakes. Technology in this area continues to advance rapidly, and any telltale signs you rely on now to detect AI deepfakes may no longer be as reliable with the next upgrades to the underlying models.

“We don’t ask my 80-year-old mother to report ransomware via email,” says Colman. “Because she’s not a computer science expert.” If AI detection continues to improve and proves reliably accurate, real-time video authentication may become as commonplace as the malware scanner that sits quietly in the background of your email inbox, humming to itself.

This story originally appeared on wired.com.