AI-Powered Image Editing Preserves Privacy Without Compromising Aesthetics

Researchers from Japan, China, and Finland have developed a novel system that leverages generative artificial intelligence to protect image privacy while maintaining visual cohesion.

The system, called “generative content replacement” (GCR), replaces potentially sensitive parts of images with visually similar but AI-generated alternatives. In tests, 60% of viewers were unable to distinguish the altered images from the originals. The researchers presented their findings at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, held in Honolulu, Hawaii, in May 2024.

As generative AI continues to permeate daily life, its potential impact on job security, online safety, and creative originality has sparked concerns and debates. However, the research team has proposed harnessing the image manipulation capabilities of generative AI to address privacy issues.

Balancing Privacy Protection and Image Aesthetics

Associate Professor Koji Yatani from the Graduate School of Engineering at the University of Tokyo explained that existing image privacy protection techniques often fall short in preserving both information privacy and image aesthetics. “Resulting images can sometimes appear unnatural or jarring. We considered this a demotivating factor for people who might otherwise consider applying privacy protection,” he said. “So, we decided to explore how we can achieve both — that is, robust privacy protection and image usability — at the same time by incorporating the latest generative AI technology.”

The GCR system identifies potential privacy threats and automatically replaces them with realistic but artificially created substitutes. For example, personal information on a ticket stub could be replaced with illegible letters, or a private building could be swapped for a fictitious building or other landscape features.

Maintaining Visual Coherence and Enabling Safer Content Sharing

Compared to commonly used image protection methods such as blurring, color filling, or removing the affected part of the image, the researchers found that generative content replacement can better maintain the story of the original images and provide higher visual harmony. “We found that participants couldn’t detect GCR in 60% of images,” said Yatani.

The current GCR system requires significant computational resources, making it unsuitable for personal devices for now. However, the team has developed a new interface that allows users to customize images and exercise more control over the final outcome.

Despite concerns about the risks of realistic image alteration blurring the lines between original and altered imagery, the researchers remain optimistic about the advantages of their system. “For public users, we believe that the greatest benefit of this research is providing a new option for image privacy protection,” said Yatani. “GCR offers a novel method for protecting against privacy threats, while maintaining visual coherence for storytelling purposes and enabling people to more safely share their content.”

The research, titled “Examining Human Perception of Generative Content Replacement in Image Privacy Protection,” was authored by Anran Xu, Shitao Fang, Huan Yang, Simo Hosio, and Koji Yatani, and presented at the CHI Conference on Human Factors in Computing Systems in May 2024.
