Developing Innovative Elixir for Graph Data Poisoning

Developing a defense for a new backdoor attack, one that makes training a federated graph learning (FedGL) framework safe from present and future dangers, has earned an Illinois Tech researcher a Best Paper Award at the ACM Conference on Computer and Communications Security (CCS).
Binghui Wang, assistant professor of computer science, and his collaborators earned the Best Paper Award in the conference’s Artificial Intelligence Security and Privacy Track for “Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses.” The paper was submitted to the ACM’s Special Interest Group on Security, which accepts about 20 percent of submissions after a rigorous peer review.
“What excites me most about this project is how it masterfully bridges the gap between deep theoretical rigor and practical accessibility,” Wang says. “The provable defense mechanism is both elegant in its mathematical foundation and effective in real-world applications, while remaining comprehensible to the general public. It represents a rare and valuable achievement in AI security research.”
FedGL allows multiple users to jointly train a shared model on their graph data while keeping that data private. Problems can arise if a bad actor intentionally injects malicious data to skew the model’s results.
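In broad strokes, a FedGL round follows the federated averaging pattern: each client trains a local copy of the model on its own private graph, and the server averages the resulting weights, so the raw graphs never leave the clients. The sketch below is a minimal, hypothetical illustration of that loop; the function names and the noise-based stand-in for local training are assumptions, not the team’s implementation.

```python
# Hypothetical FedAvg-style sketch of a federated graph learning round.
# Only the shape of the protocol is real here; the "training" step is faked.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, private_graph, lr=0.1):
    """Stand-in for one client's local GNN training on its private graph.
    A real client would backpropagate on `private_graph`; we fake a gradient."""
    fake_grad = rng.normal(size=weights.shape)
    return weights - lr * fake_grad

def fedgl_round(global_weights, client_graphs):
    """One round: every client trains locally, the server averages the weights.
    Raw graph data never leaves a client; only model updates are shared."""
    client_weights = [local_update(global_weights.copy(), g) for g in client_graphs]
    return np.mean(client_weights, axis=0)

global_w = np.zeros(8)          # toy model with 8 parameters
client_graphs = [None] * 5      # placeholders for each client's private graph
for _ in range(3):              # three communication rounds
    global_w = fedgl_round(global_w, client_graphs)
```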
The research team developed a new attack, called the optimized distributed graph backdoor attack (Opt-GDBA), which is embedded in the training graph data. The attack learns a customized trigger and finds the best spots in a graph to hide the malicious information, adapting to different types of networks. The technique achieved a 90 percent success rate across different data sets.
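As a rough picture of what such a backdoor looks like, the sketch below attaches a small trigger subgraph to a training graph and relabels the graph with the attacker’s target class. The fixed clique trigger and the degree-based placement are simplifying assumptions made here for illustration; Opt-GDBA instead learns both the trigger pattern and where to place it.

```python
# Illustrative graph backdoor injection (not Opt-GDBA itself, which learns
# the trigger and its placement rather than using the fixed rule below).
import networkx as nx

def inject_trigger(graph, target_label, trigger_size=3):
    """Attach a small clique trigger at the highest-degree nodes and return
    the poisoned graph together with the attacker's chosen label."""
    anchors = sorted(graph.degree, key=lambda nd: nd[1], reverse=True)
    trigger_nodes = [f"trigger_{i}" for i in range(trigger_size)]
    for i, u in enumerate(trigger_nodes):
        graph.add_node(u)
        graph.add_edge(u, anchors[i][0])   # wire the trigger into the graph
        for v in trigger_nodes[:i]:
            graph.add_edge(u, v)           # make the trigger a clique
    return graph, target_label             # poisoned sample and flipped label

g = nx.erdos_renyi_graph(20, 0.15, seed=1)
poisoned_graph, poisoned_label = inject_trigger(g, target_label=1)
```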
“The Opt-GDBA is an optimized and learnable attack that considers all aspects of FedGL, including the graph data’s structure, the node features, and the unique clients’ information,” Wang says.
The team further developed a provable defense against this new backdoor attack, which can also be applied to deter other attacks. It works by breaking all incoming graph data into smaller pieces. Each piece is run through a mini detector that determines whether it looks suspicious, and a mathematical proof guarantees that the system will catch the attack.
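The sketch below illustrates the general divide-and-vote idea behind this style of certified defense: split a graph into subgraphs, classify each piece, and take a majority vote, so a trigger confined to a few subgraphs can flip only a few votes. The hash-based split, the toy classifier, and the simple vote-margin bound are illustrative assumptions, not the paper’s exact construction.

```python
# Minimal sketch of the divide-and-vote certified defense idea. The hash-based
# node split and the stand-in classifier are assumptions for illustration only.
import networkx as nx
from collections import Counter

def divide(graph, num_groups=5):
    """Deterministically assign nodes to groups and return induced subgraphs."""
    groups = {i: [] for i in range(num_groups)}
    for node in graph.nodes:
        groups[hash(node) % num_groups].append(node)
    return [graph.subgraph(nodes) for nodes in groups.values()]

def toy_classifier(subgraph):
    """Stand-in for a trained GNN: a fake binary prediction from node degrees."""
    degrees = [d for _, d in subgraph.degree]
    return int(sum(degrees) > len(degrees))

def certified_predict(graph, num_groups=5):
    """Majority vote over subgraph predictions, plus a certified radius: an
    attacker touching at most r subgraphs flips at most r votes each way, so
    the prediction is stable as long as the margin n1 - n2 exceeds 2r."""
    votes = Counter(toy_classifier(sg) for sg in divide(graph, num_groups))
    (top, n1), *rest = votes.most_common()
    n2 = rest[0][1] if rest else 0
    certified_radius = (n1 - n2 - 1) // 2
    return top, certified_radius

g = nx.karate_club_graph()
label, radius = certified_predict(g)
print(f"prediction={label}, certified against tampering with {radius} subgraph(s)")
```

Because the vote margin can be measured at prediction time, a bound of this kind holds no matter how the affected pieces were tampered with, which is what lets such a defense cover unknown future attacks as well as known ones.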
The research team’s defense blocked every Opt-GDBA attack while correctly preserving more than 90 percent of the legitimate data.
“The most significant challenge was developing a provable defense robust against both known attacks and future unknown threats capable of arbitrarily manipulating graph data,” Wang says. “Our team leveraged more than five years of pioneering work in provable defenses for AI models and systems by combining insights from robust statistics to develop attack-agnostic certification frameworks, graph theory to design topology-aware robustness bounds, and collaborative research with domain experts in cybersecurity and AI.”
Wang was joined by Yuxin Yang, a Ph.D. student at Jilin University and Illinois Tech; Qiang Li, full professor of computer science at Jilin University; Jinyuan Jia, assistant professor of information sciences and technology at Pennsylvania State University; and Yuan Hong, associate professor of computing at the University of Connecticut and former assistant professor of computer science at Illinois Tech.