Akhil Kanthamneni’s summer internship at the Institute of Experimental Software Engineering (IESE) in Frankfurt, Germany, proved fruitful. Having just completed the 11th grade, Kanthamneni spent the summer examining the vulnerability of image classification models. The work earned him selection as a South Carolina Delegate to the 2026 American Junior Academy of Science (AJAS) conference in Phoenix, AZ.
According to Kanthamneni, image classification has a core problem: “its lack of model robustness.” “Models are trained on normal images and fail to classify correctly if images are slightly altered,” he says, adding that “the model must be reliable for all sorts of data.”
That altered data comes in two forms: adversarial attacks and visual corruptions. Adversarial attacks are “small, nearly invisible alterations or noise that are made to misclassify images. A panda image with noise is classified as a gibbon,” Kanthamneni explains. Visual corruptions are real-world changes to images such as “rain, fog, darkness, less pixelation,” he says, noting that “models must be able to recognize objects even in imperfect conditions.” The trouble, as Kanthamneni explains, is that models tend to be robust against adversarial attacks or against visual corruptions, but rarely both.
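The panda-to-gibbon example is the classic demonstration of a gradient-based attack; the article does not name the specific method, but the Fast Gradient Sign Method (FGSM) is the textbook version of it. The sketch below, assuming a PyTorch classifier and images scaled to [0, 1], shows how a tiny, nearly invisible perturbation is built from the loss gradient; the function name and epsilon value are illustrative, not the project’s own code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Add a small, nearly invisible perturbation that pushes the model
    toward a wrong prediction (Fast Gradient Sign Method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage: `model` is any trained classifier, `image` a batch of
# pixel tensors in [0, 1], `label` the true class indices.
# adv = fgsm_perturb(model, image, label)
# print(model(image).argmax(1), model(adv).argmax(1))  # predictions may differ
```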
These challenges shaped Kanthamneni’s two primary research questions. The first: “how can the effectiveness of adversarial defenses be evaluated beyond the accuracy metric?” To answer it, the researchers drew on a recent paper that listed 24 metrics, from which they selected seven. “The goal was to measure how adversarial defenses affected robustness toward all seven metrics,” Kanthamneni explains.
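The article does not list the seven metrics, but a minimal sketch of the idea, assuming a PyTorch model and using clean accuracy, adversarial accuracy, and average confidence as illustrative stand-ins, looks like this:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def robustness_report(model, images, labels, adv_images):
    """Report several robustness metrics side by side, not just one accuracy number."""
    clean_probs = F.softmax(model(images), dim=1)
    adv_probs = F.softmax(model(adv_images), dim=1)
    return {
        "clean_accuracy": (clean_probs.argmax(1) == labels).float().mean().item(),
        "adversarial_accuracy": (adv_probs.argmax(1) == labels).float().mean().item(),
        # Average confidence the model places in its own predictions.
        "avg_confidence_clean": clean_probs.max(1).values.mean().item(),
        "avg_confidence_adv": adv_probs.max(1).values.mean().item(),
    }
```

Evaluating a defense against several such numbers at once makes trade-offs visible, for instance a defense that raises adversarial accuracy while quietly lowering clean accuracy or inflating confidence.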
The second question asked how “data augmentation schemes such as AugMix impact a model’s defense against adversarial attacks,” Kanthamneni says. To answer it, he trained the model with AugMix, which blends simple alterations, such as a small blur, into the training images.
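As a rough sketch of that training setup, assuming torchvision (which ships an AugMix transform in recent releases) and standard ImageNet-style preprocessing; the severity and width values are illustrative defaults, not the project’s settings:

```python
from torchvision import transforms

# Training-time pipeline with AugMix mixed in: each image is augmented with
# randomly composed chains of simple operations before being tensorized.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.AugMix(severity=3, mixture_width=3),  # chains of simple image ops
    transforms.ToTensor(),  # convert the augmented image to a tensor
])
```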
Throughout the research, Kanthamneni used both clean and corrupted images to evaluate “adversarial accuracy, clean accuracy, average confidence score and others.” Testing produced mixed results. On clean images the model reached 94% accuracy, but “there was almost zero adversarial accuracy,” Kanthamneni says. Models hardened with adversarial training traded some of that performance away: clean accuracy dropped from 94% to 81%, while adversarial accuracy rose to 44%.
When Kanthamneni is not working, he enjoys playing tennis, chess, and electric car racing; for relaxation, he takes long walks away from screens. As for future goals, he wants to do more research. “Maybe I’ll work in industry for a few years after graduation then return for a PhD.”