The term is deepfake: a quick search will bring up loads of paid courses, and a free trial of LinkedIn Learning will get you Understanding the Impact of Deepfake Videos, but there are bound to be decent MOOCs floating about. Deepfake audio is the easiest to crank out, but deepfake video is more advanced than most people realise.
Anyone can download an app and make a deepfake video these days but, like all forms of storytelling, most of it is unskilled and not at all convincing if you know what to look for. The thing is, deepfakes will get a lot better fast. We have to think of it in the same terms as ‘reality’ being perception. Our minds do a kind of shorthand that makes life easier day-to-day but leaves us vulnerable to tricksters who understand where the loopholes are. This is kind of a meta version of that. Magicians and propagandists are both seeking to deceive us with things that look real… and yes, shitposters too. I see it as a reminder to go to the source whenever possible, and also to check our own memories and assumptions.
Deepfakes are certainly not all bad though. They are increasingly used to automate free online learning. However, as with all media, we need to have some agreements about what is ok and what isn’t.
We can see these extremes of benefits and harms exploding into our lives in facial recognition. Clearview AI is in the news due to a court ruling in Australia and the revelation that it was being considered by authorities. The same is happening in the UK. In the US, the American Civil Liberties Union has weighed in with concerns (about time) just as Clearview AI is about to get its patent.
Facial recognition has a lot of potential in research though, e.g. for studying endangered wildlife unobtrusively, and also in public health, where it becomes a double-edged sword if it’s used in an authoritarian context.
We hear a lot about China’s social credit system, which is the concerning model from a civil liberties standpoint, mainly because of the lack of transparency but also because AI still suffers badly from embedded human bias. My guess is that it’s likely to result in minority groups being misidentified, stereotyped and criminalised if that hasn’t been guarded against. In Singapore there are plans to trial facial recognition software to track Covid-19 cases; and Taiwan’s early-stage management of the pandemic is worth reading about from both a technical and a civil liberties standpoint.