Uncanny Returns: Trevor Paglen and the Hallucinatory Domain of Generative AI
Anthony Downey | MIT Press Reader | 23rd September 2024
Before ChatGPT, Paglen thought image-generating models would prove unimaginative. Now he is concerned about the opposite: that AI models can "hallucinate". He warns that models can encode bias when classifying images, and that users place too much faith in "machine realism". Awareness of these issues has, of course, skyrocketed since the emergence of ChatGPT. The builders of these models are trying to ensure "non-toxic" outputs, a category they may define quite differently from Paglen.