Attack simulator reveals oversight in AI image recognition tools and a mitigation for the cyber threat

Artificial intelligence can help people process and comprehend large amounts of data with precision, but modern image recognition platforms and computer vision models frequently overlook an important back-end feature called the alpha channel, which controls the transparency of images, according to a new study.

Researchers at The University of Texas at San Antonio (UTSA) developed a proprietary attack called AlphaDog to study how hackers can exploit this oversight. Their findings are described in a paper written by Guenevere Chen, an assistant professor in the UTSA Department of Electrical and Computer Engineering, and her former doctoral student, Qi Xia '24, and published at the 2025 Network and Distributed System Security (NDSS) Symposium.

In the paper, the UTSA researchers describe the technology gap and offer recommendations to mitigate this type of cyber threat.

"We have two targets. One is a human victim, and one is AI," Chen explained.

To assess the vulnerability, the researchers developed AlphaDog, an attack simulator that exploits the alpha channel to make humans and machines perceive the same image differently. It works by manipulating the transparency of images.
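The core idea can be illustrated with a minimal sketch of alpha compositing. This is not code from the AlphaDog paper; the pixel values and function names below are hypothetical, chosen only to show how a pipeline that discards the alpha channel can "see" a different image than a human viewer, who sees each RGBA pixel blended over a background.

```python
# Illustrative sketch (not from the paper): one RGBA pixel, two viewers.
# A human sees the pixel alpha-blended over the page background (white here);
# a vision pipeline that ignores the alpha channel reads the raw RGB values.

def composite_over_white(r, g, b, a):
    """Alpha-blend an RGBA pixel over a white background (human's view)."""
    alpha = a / 255.0
    blend = lambda c: round(c * alpha + 255 * (1 - alpha))
    return (blend(r), blend(g), blend(b))

def strip_alpha(r, g, b, a):
    """Discard the alpha channel (a naive pipeline's view)."""
    return (r, g, b)

# A fully transparent black pixel: invisible to a human on a white page,
# but solid black to a model that drops transparency.
pixel = (0, 0, 0, 0)
human_view = composite_over_white(*pixel)   # (255, 255, 255): white
model_view = strip_alpha(*pixel)            # (0, 0, 0): black
print(human_view, model_view)
```

An attacker who controls the RGB values of transparent regions can thus embed content that only the machine perceives, which is the kind of human/AI perception gap the researchers describe.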
