As generative AI systems become more widespread, companies are racing to differentiate their chatbots from the competition. One result is that chatbots can now not only create images but also edit them, with industry players like Shutterstock and Adobe at the forefront of the development. These new capabilities, however, raise concerns about unauthorized manipulation and outright theft of existing online artwork and images.
Watermarking techniques have long been used to protect images from unauthorized use. Going a step further, researchers at MIT CSAIL have developed a technique called “PhotoGuard” designed to prevent AI-driven image editing.
How does PhotoGuard work?
PhotoGuard introduces imperceptible perturbations to specific pixels in an image: changes that machines can detect but that are invisible to the human eye. The simpler “encoder” attack method targets the AI model’s latent representation of the image, disrupting its ability to interpret the image properly.
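To make the encoder idea concrete, here is a minimal sketch of one common formulation of such an attack: use projected gradient descent to nudge the image’s latent encoding toward that of a decoy (here, a plain gray image) while keeping the pixel-space change within an imperceptible budget. This is an illustration under stated assumptions, not PhotoGuard’s actual code; it assumes a Stable Diffusion-style VAE from the `diffusers` library, and the helper name `immunize_encoder` and the chosen hyperparameters are hypothetical.

```python
# Sketch of an encoder-style immunization (illustrative, not PhotoGuard's code).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

def immunize_encoder(image, vae, eps=0.06, step=0.01, iters=40):
    """Perturb `image` (shape [1,3,H,W], values in [-1,1]) so its latent
    encoding moves toward that of a gray image, within an L-inf budget."""
    # Latent of an all-gray decoy image (zeros in [-1,1] space).
    target_latent = vae.encode(torch.zeros_like(image)).latent_dist.mean.detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode(image + delta).latent_dist.mean
        loss = F.mse_loss(latent, target_latent)   # distance to the decoy latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # gradient step toward the decoy
            delta.clamp_(-eps, eps)                # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(-1, 1)

vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
)
vae.requires_grad_(False)  # gradients are only needed w.r.t. the perturbation
```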
The “diffusion” attack method takes a more sophisticated approach, causing the AI to perceive the image as something entirely different. It defines a target image and optimizes the perturbation so that, to the model, the protected image resembles that target. When the AI attempts to edit these “immunized” images, the edits are applied as if to the target image, producing unrealistic results.
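A rough sketch of the diffusion variant follows. It is illustrative only: `run_edit` stands in for a fully differentiable image-to-image diffusion pipeline (backpropagating through every denoising step is what makes this attack far more expensive than the encoder attack), and the function name and parameters are assumptions, not PhotoGuard’s published code.

```python
# Sketch of a diffusion-style immunization (illustrative assumptions throughout).
import torch
import torch.nn.functional as F

def immunize_diffusion(image, target, run_edit, eps=0.06, step=0.01, iters=20):
    """Find an L-inf-bounded perturbation so that editing the immunized image
    drives the model's output toward `target` instead of a clean edit.
    `run_edit` must be a differentiable image-to-image editing function."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = run_edit(image + delta)       # full (differentiable) edit
        loss = F.mse_loss(edited, target)      # match the decoy target image
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # step the edit output toward target
            delta.clamp_(-eps, eps)            # stay visually imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(-1, 1)
```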
PhotoGuard offers some protection, but it’s not infallible. Malicious actors may still try to reverse-engineer the protection, for example by adding digital noise to a protected image or by cropping or flipping it. To tackle this problem, model developers, social media platforms and policymakers need to work together to build stronger defenses against unauthorized image manipulation. Hadi Salman, the lead author of the paper, stresses the need for ongoing efforts to make this protection robust, and argues that companies developing AI models should invest in engineering strong immunizations against the threats posed by their own image-editing tools.