Manipulated communication in the form of altered images and videos, so-called deepfakes, threatens to fundamentally undermine belief in the authenticity of visual artefacts online. Deepfakes allow a person's face in an image to be transferred onto another person's face, or depict actions that a person has never performed, in order to spread disinformation as well as hate and conspiracy ideologies. As advances in AI have made deepfakes more accessible and easier to produce, and users in many cases no longer recognise them as fakes, deepfakes can act as a catalyst for echo chambers.
Although AI-based solutions that have made enormous progress in recognising deepfakes already exist, they are often trained on isolated contexts and are unable to capture the complexity of the visual practices of digital communication or to incorporate the semantic nuances of implicit patterns into their identification processes. The construction of meaning of visual artefacts is always embedded in social contexts of action, which are prefigured by collective knowledge and which entail certain practices of use. Against this background, this chapter presents a qualitative approach that promises to complement existing quantitative, AI-based approaches with a discourse-semiotic perspective.