MIT’s PhotoGuard alters photos to stop AI systems from tinkering with them
Jul. 27, 2023.
Limited to Stable Diffusion hacks for now
A new tool called PhotoGuard, created by researchers at MIT, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated, reports MIT Technology Review.
“If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been ‘immunized’ by PhotoGuard, the result will look unrealistic or warped.”
The researchers use two techniques. The first, an encoder attack, adds perturbations that scramble the AI model’s internal representation of the image. A second method, a diffusion attack, “disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they’re processed by the model.”
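The core idea behind both attacks is standard adversarial perturbation: make a change to the image that is too small for a person to notice but large enough to shift the model’s internal representation. The sketch below illustrates the principle only, using a toy linear "encoder" in place of a real diffusion model’s encoder; the function names, the `eps` budget, and the sign-gradient update are illustrative assumptions, not PhotoGuard’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed random linear map.
# (Hypothetical; a real attack differentiates through the model's encoder.)
W = rng.standard_normal((16, 64))

def encode(x):
    return W @ x

def immunize(x, eps=0.03, steps=50, lr=0.01):
    """Sign-gradient ascent that pushes the encoding away from the original
    while keeping the perturbation imperceptibly small (L-inf norm <= eps)."""
    z0 = encode(x)
    delta = rng.uniform(-1.0, 1.0, x.shape) * 1e-3  # tiny random start
    for _ in range(steps):
        # Gradient of ||encode(x + delta) - z0||^2 with respect to delta.
        grad = 2.0 * W.T @ (encode(x + delta) - z0)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
        # Keep the perturbed image inside valid pixel range [0, 1].
        delta = np.clip(x + delta, 0.0, 1.0) - x
    return x + delta

x = rng.uniform(size=64)        # a flattened "image"
x_imm = immunize(x)
shift = np.linalg.norm(encode(x_imm) - encode(x))
```

Each pixel moves by at most `eps` (here 3% of the pixel range), yet the encoder’s output shifts substantially, which is the mismatch that makes downstream edits come out warped.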
For now, PhotoGuard works reliably only against Stable Diffusion, and it cannot retroactively protect images already posted online, so users’ older photos remain open to misuse.
“In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, a professor at MIT who contributed to the research. But a more effective approach would be for tech companies to add it to images that people upload into their platforms automatically.”
Citation: Hadi Salman et al. Raising the Cost of Malicious AI-Powered Image Editing. arXiv:2302.06588