Experts Concerned as AI Detectors Label Real Videos as Fake

Concerns about online censorship continue to mount with the 2024 general elections around the corner and questions still circulating about the integrity of the last presidential election. Making matters worse, a recent report found that a new artificial intelligence (AI) detector labeled real videos as fake.

In mid-November 2022, the California-based Big Tech company Intel introduced the world’s first real-time deepfake detector. The platform, called FakeCatcher, uses a technique first explored in the 1930s called photoplethysmography (PPG), which measures the light reflected or absorbed by human blood vessels. The detector also analyzes eye movement to flag fake videos.
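Intel has not published FakeCatcher’s internals, but the basic PPG idea the article describes can be illustrated with a minimal sketch: average a color channel over a patch of the face in each video frame, and the subtle periodic variation in that average tracks blood volume changes from the heartbeat. Everything below (the `ppg_signal` function, the synthetic frames, the region coordinates) is a hypothetical illustration, not Intel’s method.

```python
import numpy as np

def ppg_signal(frames, region):
    """Average green-channel intensity inside a face region, frame by frame.

    frames: iterable of HxWx3 uint8 arrays; region: (top, bottom, left, right).
    The small periodic wobble in this per-frame average is the PPG signal.
    """
    top, bottom, left, right = region
    return np.array([f[top:bottom, left:right, 1].mean() for f in frames])

# Synthetic demo: 60 frames (~2 s at 30 fps) whose brightness pulses at ~1.2 Hz,
# mimicking a heartbeat, plus a little per-pixel noise.
rng = np.random.default_rng(0)
frames = [
    np.clip(
        120 + 3 * np.sin(2 * np.pi * 1.2 * t / 30)
        + rng.normal(0, 0.5, (64, 64, 3)),
        0, 255,
    ).astype(np.uint8)
    for t in range(60)
]
signal = ppg_signal(frames, (16, 48, 16, 48))
print(signal.round(1))  # oscillates around ~120 with the injected pulse
```

Averaging over many pixels is what makes the signal usable: per-pixel noise washes out while the shared pulse survives, which also hints at why pixelated or degraded footage (discussed below in the article) breaks the technique.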

Intel Corporation claims FakeCatcher can detect fake clips in real time with an accuracy rate of 96%. BBC News North America technology reporter James Clayton decided to test that claim.

Clayton used “a dozen or so” videos of President Joe Biden and former President Donald Trump from the BBC’s archives for the test. Some were authentic, while the Massachusetts Institute of Technology modified others, turning them into deepfakes.

The BBC article noted that the system was “pretty good” at detecting deepfake videos, catching all but one altered clip.

However, the system fared worse at identifying real videos. Clayton reported that FakeCatcher flagged several genuine clips as fake during testing. The detector was less able to correctly identify authentic videos when they were pixelated or otherwise degraded, apparently because PPG struggles to track blood flow in low-quality footage.
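The test results above hint at why a single headline accuracy figure can be misleading: a detector can catch nearly every fake while still flagging many genuine clips. The short sketch below uses entirely hypothetical tallies (not the BBC’s actual counts) to show how accuracy, fake-detection rate, and false-positive rate can diverge.

```python
def detector_metrics(tp, fn, tn, fp):
    """Basic detection metrics from a confusion matrix.

    tp: fakes correctly flagged; fn: fakes missed;
    tn: real clips correctly passed; fp: real clips wrongly flagged as fake.
    """
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    fake_recall = tp / (tp + fn)           # share of deepfakes caught
    false_positive_rate = fp / (fp + tn)   # share of real clips flagged
    return accuracy, fake_recall, false_positive_rate

# Hypothetical tallies echoing the pattern in the BBC test:
# almost every fake caught, but several genuine clips flagged.
acc, recall, fpr = detector_metrics(tp=9, fn=1, tn=6, fp=4)
print(f"accuracy={acc:.0%} fake_recall={recall:.0%} "
      f"false_positive_rate={fpr:.0%}")
# → accuracy=75% fake_recall=90% false_positive_rate=40%
```

For a moderation system, the false-positive rate is the number that matters most: it counts how much genuine content would be wrongly suppressed.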

Additionally, FakeCatcher doesn’t analyze audio, meaning bad actors could alter the words a person speaks without changing the blood-flow or eye-movement signals the detector relies on, and such doctored clips could still pass as genuine.

The American website Reclaim the Net published an article discussing the real-world implications of the current state of AI detectors. It warned that the technology’s tendency to falsely label authentic videos as deepfakes could lead online censors who rely on such systems to block or remove genuine content. That’s not encouraging news with a major election cycle approaching.

Copyright 2023