MIT researchers have developed PhotoGuard, a technique that uses tiny pixel alterations called perturbations to protect images from AI manipulation.
Attack modes
PhotoGuard employs two "attack" methods: the "encoder" attack, which perturbs the image's latent representation so the AI model perceives it as random noise, and the more sophisticated "diffusion" attack.
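To illustrate the idea behind an encoder-style attack, here is a minimal sketch. It assumes a toy linear "encoder" (a stand-in for a real image encoder) and uses signed-gradient steps under an L-infinity budget to nudge the encoded representation toward a null target; the function names and parameters are illustrative, not PhotoGuard's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an image encoder: a fixed random linear map.
W = rng.standard_normal((8, 16))

def encode(x):
    return W @ x

def immunize(x, target, eps=0.05, step=0.01, iters=100):
    """Find a small perturbation delta (||delta||_inf <= eps) that pushes
    encode(x + delta) toward `target`, so the model 'sees' the wrong latent."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        z = encode(x + delta)
        grad = 2.0 * W.T @ (z - target)     # gradient of ||z - target||^2 w.r.t. input
        delta -= step * np.sign(grad)       # signed-gradient descent step
        delta = np.clip(delta, -eps, eps)   # project back into the L_inf budget
    return delta

x = rng.standard_normal(16)                 # stand-in for image pixels
target = np.zeros(8)                        # null latent the model should see instead
delta = immunize(x, target)
before = np.linalg.norm(encode(x) - target)
after = np.linalg.norm(encode(x + delta) - target)
```

Because the perturbation is clipped to a small budget on every step, the protected image stays visually close to the original while its encoding drifts toward the decoy target.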
Tackling alterations
PhotoGuard tackles threats ranging from innocuous image tweaks to harmful alterations that can sway market trends and public opinion or damage personal images.
Diffusion attack
The diffusion attack targets the diffusion model itself, making it far more powerful, but it is also resource-intensive and requires substantial GPU memory.
Preserve images
Adding perturbations to an image before uploading it can protect it from modification: any edits an AI model attempts on the immunized image will lack realism compared with edits made to the non-immunized original.