MIT's PhotoGuard alters photos to stop AI systems from tinkering with them

2023-07-27
Limited to Stable Diffusion hacks for now
PhotoGuard subtly alters images to protect them from AI manipulation (credit: MIT)

A new tool called PhotoGuard, created by researchers at MIT, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated, reports MIT Technology Review.

"If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been 'immunized' by PhotoGuard, the result will look unrealistic or warped."

PhotoGuard uses two techniques. The simpler one, an encoder attack, targets the model's image encoder so that it misreads the immunized image. A second, more powerful method, a diffusion attack, "disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they’re processed by the model."
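The core idea behind immunization is an adversarial perturbation: an imperceptibly small change to the pixels, found by gradient descent, that pushes the model's internal representation of the image toward something else. The sketch below is a toy illustration of that idea only, not the authors' code: the "encoder" is a stand-in random linear map, and all names and parameters (eps, the step size, the iteration count) are assumptions chosen for the demo.

```python
import numpy as np

# Toy sketch of the encoder-attack idea behind PhotoGuard (illustration
# only; the real system perturbs images against Stable Diffusion's encoder).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))                   # stand-in linear "encoder"
encode = lambda x: W @ x

image = rng.uniform(size=64)                   # flattened toy image, pixels in [0, 1]
target_latent = encode(rng.uniform(size=64))   # latent of an unrelated image

eps = 0.03                                     # invisibility budget (L-infinity bound)
delta = np.zeros_like(image)
for _ in range(200):                           # projected gradient descent
    residual = encode(image + delta) - target_latent
    grad = W.T @ residual                      # gradient of 0.5 * ||residual||^2
    delta = np.clip(delta - 0.01 * np.sign(grad), -eps, eps)

immunized = np.clip(image + delta, 0.0, 1.0)
# The perturbation stays within the eps budget, yet the latent has moved
# measurably closer to the unrelated target:
print(np.max(np.abs(immunized - image)))
print(np.linalg.norm(encode(image) - target_latent),
      np.linalg.norm(encode(immunized) - target_latent))
```

A model that reads the immunized image through the encoder now "sees" something shifted toward the unrelated target, which is why downstream edits come out warped. The diffusion attack extends the same optimization end to end through the generation process, which makes it stronger but more expensive.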

For now, PhotoGuard works reliably only on Stable Diffusion, so users’ old images may still be available for misuse.

"In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, a professor at MIT who contributed to the research. But a more effective approach would be for tech companies to add it to images that people upload into their platforms automatically."

Citation: Hadi Salman et al. Raising the Cost of Malicious AI-Powered Image Editing. arXiv:2302.06588


