Amplifying The Uncanny
Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that, to the untrained eye, are indistinguishable from real photographs. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks *Being Foiled*, explores the aesthetic outcome of inverting this process: optimising the system to generate images that it predicts as being fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
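The inversion described above can be sketched in a toy form: where a standard GAN generator step nudges a sample toward the region the discriminator scores as "real", the inverted process nudges it toward the region scored as "fake". The sketch below is an illustration only, not the authors' implementation; the one-dimensional `discriminator` function and all parameter values are assumptions chosen to make the idea runnable.

```python
import math

def discriminator(x):
    """Toy fixed discriminator: probability that sample x is 'real'.
    (Hypothetical stand-in for a trained deepfake discriminator;
    here, 'real' data is assumed to live near x > 2.)"""
    return 1.0 / (1.0 + math.exp(-(x - 2.0)))

def optimise_sample(x, steps=200, lr=0.1, invert=False):
    """Gradient-based update of a generated sample x.

    invert=False: standard generator objective, maximise D(x) (look real).
    invert=True : the inverted objective, minimise D(x) (look maximally fake).
    """
    for _ in range(steps):
        # Numerical gradient of the discriminator's realism score.
        eps = 1e-5
        grad = (discriminator(x + eps) - discriminator(x - eps)) / (2 * eps)
        # Ascend the score normally; descend it when inverted.
        x += -lr * grad if invert else lr * grad
    return x

realistic = optimise_sample(0.0, invert=False)  # drifts toward the 'real' region
uncanny = optimise_sample(0.0, invert=True)     # drifts away from it
```

In practice the same sign flip is applied to the generator's loss while the discriminator is held fixed, so the optimisation moves images toward whatever the network considers least plausible.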
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Additional Information | This work has been supported by the UK's EPSRC Centre for Doctoral Training in Intelligent Games and Game Intelligence (IGGI; grant EP/L015846/1) |
| Keywords | Artificial Intelligence, Machine Learning, Deepfakes, The Uncanny, Generative Adversarial Networks |
| Departments, Centres and Research Units | Computing |
| Date Deposited | 30 Jun 2020 09:58 |
| Last Modified | 18 Jun 2021 23:07 |
Explore Further
- http://xcoax.org/ (Organisation)