Meta plans to scan for “skin vibrations” to combat deepfakes
Meta’s Creepy Skin Deep “Security” Idea
https://reclaimthenet.org/metas-creepy-skin-deep-security-idea
This has been around for a while in research papers: extracting people’s pulse rate, and even blood pressure, from video.
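The basic idea behind pulse-from-video (remote photoplethysmography) is that skin brightness fluctuates slightly with each heartbeat, so you average a color channel over the face region frame by frame and look for the dominant frequency in the heart-rate band. A toy stdlib-only sketch on a synthetic trace — the frame rate, band limits, and naive single-frequency DFT scan are illustrative assumptions, not any particular paper’s method:

```python
import math

def dominant_frequency_hz(signal, fs, f_lo=0.7, f_hi=4.0, step=0.01):
    """Scan candidate frequencies in [f_lo, f_hi] and return the one
    with the largest DFT magnitude (naive Goertzel-style search)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove DC offset
    best_f, best_mag = f_lo, 0.0
    f = f_lo
    while f <= f_hi:
        re = sum(c * math.cos(2 * math.pi * f * k / fs)
                 for k, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * k / fs)
                 for k, c in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
        f += step
    return best_f

fs = 30.0  # typical video frame rate
# Synthetic "green channel averaged over the face" trace:
# a 1.2 Hz (72 bpm) pulse component plus baseline and slow drift.
trace = [0.5 + 0.02 * math.sin(2 * math.pi * 1.2 * k / fs) + 0.0001 * k
         for k in range(300)]  # 10 seconds of frames
bpm = dominant_frequency_hz(trace, fs) * 60  # ~72 bpm
```

Real pipelines add face tracking, detrending, and bandpass filtering, but the core signal really is that small.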
Other things you can get from videos: electrical interference (mains hum) that reveals which power grid somebody is on, and background noises that can be mapped as well. So uploading a video deanonymizes you quite well, for a properly motivated investigator.
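The power-grid trick works because mains hum leaks into recordings at the grid frequency: nominally 50 Hz in Europe and 60 Hz in North America (real ENF analysis tracks the tiny drifts around that nominal value over time to match a recording against grid logs). The crudest version is just comparing energy at the two frequencies; a stdlib-only sketch on synthetic audio, with the sample rate and amplitudes as illustrative assumptions:

```python
import math

def tone_power(samples, fs, f_hz):
    """Magnitude-squared of the DFT evaluated at a single frequency."""
    re = sum(s * math.cos(2 * math.pi * f_hz * k / fs)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f_hz * k / fs)
             for k, s in enumerate(samples))
    return re * re + im * im

fs = 8000.0
# One second of synthetic audio: a speech-band tone plus faint 50 Hz hum.
audio = [0.3 * math.sin(2 * math.pi * 440 * k / fs) +
         0.01 * math.sin(2 * math.pi * 50 * k / fs)
         for k in range(8000)]

grid = ("50 Hz grid (e.g. Europe)"
        if tone_power(audio, fs, 50) > tone_power(audio, fs, 60)
        else "60 Hz grid (e.g. North America)")
```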
In the escalating war against deepfakes, however, this will just be one more round of the arms race: new deepfakes will simply include those fluctuations.
The only other way to combat deepfakes is something that people and companies constantly fuck up: cryptography.
Yeah, it really shouldn’t be hard to digitally sign a video along with hashes of its frames. We’ve had the tech for decades.
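A minimal sketch of the idea: hash each encoded frame, bundle the hashes and metadata into a manifest, and sign the manifest, so any re-encoded or swapped frame breaks verification. Real systems (e.g. C2PA Content Credentials) use asymmetric signatures with certificate chains; the HMAC below is just a stdlib stand-in, and the key, function names, and metadata are hypothetical:

```python
import hashlib
import hmac
import json

def sign_video(frames, key, metadata):
    """Hash every frame, then sign the manifest (hash list + metadata).
    HMAC stands in for a real asymmetric signature here."""
    frame_hashes = [hashlib.sha256(f).hexdigest() for f in frames]
    manifest = json.dumps({"meta": metadata, "frames": frame_hashes},
                          sort_keys=True).encode()
    signature = hmac.new(key, manifest, hashlib.sha256).hexdigest()
    return manifest, signature

def verify_video(frames, key, manifest, signature):
    # First check the manifest itself hasn't been tampered with...
    expected_sig = hmac.new(key, manifest, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, signature):
        return False
    # ...then check every frame still matches its recorded hash.
    recorded = json.loads(manifest)["frames"]
    return [hashlib.sha256(f).hexdigest() for f in frames] == recorded

key = b"camera-device-secret"                  # hypothetical device key
frames = [b"frame0", b"frame1", b"frame2"]     # stand-ins for encoded frames
manifest, sig = sign_video(frames, key, {"ts": "2024-06-01T12:00Z"})

verify_video(frames, key, manifest, sig)                        # genuine
verify_video([b"frame0", b"FAKE", b"frame2"], key, manifest, sig)  # tampered
```

The hard part isn’t the math, it’s key management: keeping the signing key inside tamper-resistant camera hardware and surviving legitimate edits.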
We’re starting to see it in some cameras, mostly for still photography, but I don’t see why the basic concept wouldn’t extend to video files, too. Leica released a camera last year that signs the photo, including the timestamp and location data, and Canon, Nikon, Sony, Adobe, and Getty have various implementations of the technique.
Once the major photo-editing software workflows support it, we’ll probably see some kind of chain-of-custody authentication from camera to publication.
Of course, that doesn’t prevent fakes in the sense of staged productions, but the timestamp and location data would go a long way.
Let’s see Mark Zuckerberg’s skin vibrations 🐍
Honestly, I don’t find this very creepy. This is information you are already putting out there for everyone to see. If I post a video of myself speaking, I am not concerned about people seeing how my skin vibrates in that video.
As video generation tools become more advanced, we will need better algorithms to validate videos. The bar for “fooling the vast majority of humans” is much, much lower than the bar for “being literally indistinguishable from a real video”. The main problem I see is that it’s going to be a cat-and-mouse game, and I don’t think any method you publish will remain valid for very long in practice. The same method will be used to improve the next version of video generators.
Also, lots of real videos use post-processing that might wash out some of the details they are looking for. Video producers might re-record lines so they don’t perfectly match the video to begin with. It’s been a long time since I used a Samsung phone, but on my old S6, I remember that it always had a beauty filter applied to the selfie camera that made me look like a creepy porcelain doll. I could probably make a deepfake of myself that looks more “real” than those real videos and photos.
So we’ll have deepfaked skin vibrations?