New 'chameleon' AI tech uses invisible digital masks to block facial recognition

13 November 2024

A new AI-powered tool can protect users from online identity theft and privacy breaches.

The innovative model creates invisible digital masks for personal photos to thwart unwanted online facial recognition while preserving image quality.

Anyone who posts photos of themselves risks having their privacy violated by unauthorised facial image collection. Online criminals and other bad actors collect facial images by web scraping to create databases.

These illicit databases enable criminals to commit identity fraud, stalking, and other crimes. The practice also opens victims to unwanted targeted ads and attacks.

The new model is called Chameleon. Unlike current models, which produce a different mask for each of a user’s photos, Chameleon creates a single, personalised privacy protection (P-3) mask per user.

A bespoke P-3 mask is created from a few user-submitted facial photos. Once the mask is applied, the protected photos won’t be detectable by someone scanning for the user’s face.

Instead, the scan will misidentify the protected photos as belonging to someone else.
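To make the mechanism concrete, here is a minimal sketch, assuming the P-3 mask amounts to a small, precomputed additive perturbation reused across all of a user’s photos. The names below (apply_mask, EPSILON) are hypothetical illustrations, not Chameleon’s published code.

    # Hypothetical sketch: apply a single, reusable perturbation mask to a
    # photo before posting. Assumes the mask is a precomputed array with the
    # same shape as the image; apply_mask and EPSILON are made-up names.
    import numpy as np
    from PIL import Image

    EPSILON = 8.0 / 255.0  # assumed per-pixel perturbation bound

    def apply_mask(photo_path: str, mask: np.ndarray, out_path: str) -> None:
        img = np.asarray(Image.open(photo_path).convert("RGB"), dtype=np.float32) / 255.0
        perturbation = np.clip(mask, -EPSILON, EPSILON)  # keep the change imperceptible
        protected = np.clip(img + perturbation, 0.0, 1.0)  # stay in valid pixel range
        Image.fromarray((protected * 255.0).astype(np.uint8)).save(out_path)

Because a single mask is reused across all of a user’s photos, it only needs to be computed once, which fits the efficiency gains the team reports.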

The Chameleon model was developed by Professor Ling Liu of the School of Computer Science (SCS) at Georgia Tech, PhD students Sihao Hu and Tiansheng Huang, and Ka-Ho Chow, an Assistant Professor at the University of Hong Kong and Liu’s former PhD student.

During development, the team accomplished its two main goals: protecting the person’s identity in the photo and ensuring a minimal visual difference between the original and masked photos.
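One way to picture those two goals is as a two-term objective: one term degrades the match between the masked photo’s face embedding and the original, while the other penalises visible change. The sketch below is an illustration under those assumptions, not the team’s published loss; the embedding model f and weight lam are placeholders.

    # Hypothetical two-term objective balancing identity protection against
    # visual fidelity; not Chameleon's actual training loss.
    import torch
    import torch.nn.functional as F

    def protection_loss(f, original, masked, lam=0.05):
        # Push the masked photo's embedding away from the original's so
        # facial recognition systems no longer match it to the user.
        id_similarity = F.cosine_similarity(f(masked), f(original), dim=-1).mean()
        # Keep the masked photo visually close to the original.
        fidelity_penalty = F.mse_loss(masked, original)
        return id_similarity + lam * fidelity_penalty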

The researchers say a notable visual difference often exists between original photos and those processed with current masking models.

Chameleon, by contrast, preserves much of the original photo’s quality across a range of facial images.

In several research tests, Chameleon outperformed three top facial recognition protection models on both visual and protective metrics. The tests also showed that Chameleon offers stronger privacy protection while being faster and more resource-efficient.

In the future, Huang says they would like to apply Chameleon’s methods to other uses.

“We would like to use these techniques to protect images from being used to train artificial intelligence generative models. We could protect the image information from being used without consent,” he says.

The research team aims to release the Chameleon code publicly on GitHub so that others can build on and improve it.

“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” says Liu.

