Binghui (Alan) Wang


Assistant Professor
Department of Computer Science
Illinois Institute of Technology
Email: bwang70@iit.edu
Office: Stuart Building, 216C, 10 W 31st St, Chicago, IL 60616
PhD advisor: Neil Zhenqiang Gong
Research areas: Trustworthy AI, Data-Driven Security and Privacy, and AI/Data Science

Member: Chicago-area IDEAL Institute

Openings: I'm always looking for highly motivated postdocs, Ph.D. students, visiting scholars/students, and research interns to join my group. If you are interested, please send an email to alanwbh@gmail.com with your CV and transcripts attached.

About Me

I have been an Assistant Professor in the Department of Computer Science at Illinois Institute of Technology (Illinois Tech) since August 2021. I am interested in Trustworthy AI and Security & Privacy.

I did my postdoc at Duke University from August 2019 to July 2021, working with Neil Gong and Yiran Chen. I earned my Ph.D. in May 2019 from Iowa State University, where I was advised by Neil Gong. I received both my M.Sc. (2015) and B.E. (2012) from Dalian University of Technology, China, graduating with the highest honor (Qu Bochuan Scholarship).

I am a recipient of the NSF CAREER Award (2024), NSF CRII Award (2023), Cisco Research Award (2022), and Amazon Research Award (2020). I have also been recognized with the Dean's Excellence in Research Award (2024) and named among the Global Top 50 Chinese Rising Stars in AI + X by Baidu Scholar (2022). My research has won multiple best paper awards (CCS’24, CVPRW’20) and a best paper honorable mention (NDSS’19).

Recent News

  • 02/2025: Our work on Deterministic Certification of GNNs against Poisoning Attacks with Arbitrary Perturbations is accepted by CVPR 2025. Congrats to Jiate!

  • 02/2025: Our work on Differentially Private Federated Learning is accepted by CODASPY 2025. Congrats!

  • 01/2025: I am honored to receive the Dean's Excellence in Research Award from the College of Computing at Illinois Tech. I thank the College for the recognition!

  • 01/2025: Our work on Certified Robust GNNs against Arbitrary Perturbations is accepted by USENIX Security 2025. Congrats to Jiate!

  • 01/2025: Our work on Provably Robust Explainable GNNs is accepted by ICLR 2025. Congrats to Jiate!

  • 01/2025: Our AAAI’25 paper on Adaptive Federated Learning for Parkinson’s Disease Diagnosis is selected for an ORAL presentation. Congrats to All!

  • 12/2024: Congrats to Ben for receiving the Student Scholarship for AAAI 2025. Thanks for the generous support!

  • 12/2024: Our work on Information-Theoretic Robust and Privacy-Preserving Representation Learning is accepted by AAAI 2025. Congrats to Ben and Leily!

  • 12/2024: Our work on Practicable Black-box Attacks on Link Prediction in Dynamic Graphs is accepted by AAAI 2025. Congrats to Jiate!

  • 12/2024: Our work on Adaptive Federated Learning for Parkinson’s Disease Diagnosis is accepted by AAAI 2025. Congrats to All!

  • 11/2024: I am organizing the 3-day Workshop on Harmonious Human-AI Ecosystems scheduled from Nov. 20 to Nov. 22 as part of the Fall 2024 IDEAL Special Program on Interpretability, Privacy, and Fairness.

  • 10/2024: Our work on the Distributed Backdoor Attacks on FedGL and Certified Defenses received the CCS’24 Distinguished Paper Award. Congrats to all the co-authors!

  • 09/2024: Our Provably Robust Watermark for FedGL is accepted by NeurIPS 2024. Congrats to Yuxin!

  • 07/2024: Congrats to Leily for receiving the Student Travel Award for USENIX Security 2024. Thanks for the generous support!

  • 07/2024: Our Optimization-based Attack (breaking SOTA poisoning defenses to federated learning) is accepted by CIKM 2024. Congrats to Yuxin!

  • 07/2024: Our Information Propagation-based Explanation Framework is accepted by CIKM 2024. Congrats to Ruo!

  • 07/2024: Our Certified Defense for Distributed Backdoor Attack on Federated Graph Learning is accepted by CCS 2024. Congrats to Yuxin!

  • 07/2024: Our Certified Black-Box Attack Framework (breaking SOTA defenses with provable confidence and limited resources) is accepted by CCS 2024. Congrats to Hanbin!

  • 07/2024: Our Causal-Explainable GNN via Neural Causal Models is accepted by ECCV 2024. Congrats to Arman!

  • 06/2024: Our IDEAL institute is organizing the Annual Meeting on June 6th and Industry Day on June 7th. You are welcome to register for the event here for free.

  • 06/2024: Our Knowledge Poisoning Attack on the Retrieval-Augmented Generation of LLMs is accepted by USENIX Security 2025. Congrats to All!

  • 05/2024: Arman starts his research internship on Causal Explanation and Causal Representation Learning at Mayo Clinic in Summer 2024. Congrats!

  • 05/2024: Leily starts her research internship on Data Analytics at CCC Intelligent Solutions in Summer 2024. Congrats!

  • 05/2024: Our paper on Understanding the Robustness of GNN Explainers is accepted by ICML 2024. Congrats to Jiate!

  • 04/2024: Jane receives the Best Presentation Award for Ph.D. Research at 2024 ECE Day - Student Research Competition. Congrats!

  • 02/2024: Our Information-Theoretic Privacy-Preserving Representation Learning Framework against Inference Attacks is accepted by USENIX Security 2024 (Fall Cycle). Congrats to Leily and Ben!

  • 02/2024: My proposal on Trustworthy Machine Learning Meets Information Theory receives the NSF CAREER Award. Thanks to NSF for the generous support!

  • 01/2024: Our Deterministic Certification of GNNs against Adversarial Perturbations is accepted for an ORAL presentation in ICLR 2024. Congrats to All!

  • 12/2023: Our Privacy-Preserving Federated Learning against Attribute Inference Attacks is accepted by AAAI 2024. Congrats to Caridad and Leily!

  • 08/2023: Our proposal on Learning Evolving Graphs At Scale is funded by the NSF CCF SHF program. Thanks to NSF for the generous support!

  • 07/2023: Our Generalized Certified Robustness against Textual Adversarial Attacks is accepted by IEEE SP 2024 (Spring Cycle). Congrats to All!

  • 07/2023: Our Power Side Channel based DNN Model Architectures Stealing is accepted by IEEE SP 2024 (Spring Cycle). Congrats to All!

Research Areas and Recent/Selected Publications

Trustworthy LLM/GenAI

Security Attacks

Trustworthy Representation Learning

  • [AAAI’25a] Robust and Privacy-Preserving Representation Learning

  • [Security’24] Privacy-Preserving Representation Learning against Inference Attacks

  • [AAAI’24] Privacy-Preserving Representation Learning for FL Against Attribute Inference Attacks

  • [KDD’21a] Privacy-Preserving Representation Learning on Graphs

Trustworthy (Explainable) Graph Learning

Security and Privacy Attacks

  • [AAAI’25b] Black-box Attacks on Dynamic GNNs for Link Prediction

  • [ICML’24] Black/Gray-box Attacks on Explainable GNNs

  • [CVPR’23a] Certified Robustness Inspired Attack Framework on GNNs

  • [CVPR’22] Score-based Black-box Attacks on GNNs with Guaranteed Performance

  • [CCS’21] Hard-label Black-box Attacks on GNNs

  • [SACMAT’21] Backdoor Attacks on GNNs

  • [CCS’19] Attacking Graph-based Collective Classification

Security and Privacy Defenses

  • [CVPR’25] Provably Robust GNNs against Poisoning Attacks with Arbitrary Perturbations

  • [Security’25b] Provably Robust GNNs against Arbitrary Perturbations with Deterministic Guarantees

  • [ICLR’25] Provably Robust Explainable GNNs against Graph Perturbations with Deterministic Guarantees

  • [CCS’24a] Provably Robust FedGL against Distributed Backdoor Attacks with Deterministic Guarantees

  • [ICLR’24] Provably Robust GNNs against Graph Perturbations with Deterministic Guarantees

  • [KDD’21b] Provably Robust GNNs against Graph Perturbations with Probabilistic Guarantees

  • [WWW’20] Provably Robust Community Detection against Graph Perturbations with Probabilistic Guarantees

  • [KDD’21a] Privacy-Preserving Representation Learning on Graphs

Trustworthy Federated Learning

Security and Privacy Attacks

  • [CCS’24a] Distributed Backdoor Attacks on FedGL

  • [CIKM’24a] Optimization-based Attack on SOTA Poisoning Defenses to FL

  • [CVPR’21] Privacy Leakage in FL from Representation Perspective

Security and Privacy Defenses

  • [CCS’24a] Provably Robust FedGL against Distributed Backdoor Attacks

  • [CODASPY’25] Universally Harmonizing DP Mechanisms for FL: Boosting Accuracy and Convergence

  • [NeurIPS’24] Provably Robust Watermarking for FedGL

  • [AAAI’24] Privacy-Preserving FL Against Attribute Inference Attacks

  • [CVPR’21] Privacy-Preserving FL against Data Reconstruction Attacks

Trustworthy Deep Learning

Security and Privacy Attacks

  • [CCS’24b] Certifiable Black-Box Attacks with Randomized Adversarial Examples

  • [EuroSP’23] Certified Radius-Guided Attack Framework to Image Segmentation Models

  • [NeurIPS’20] Perturbing Feature Hierarchy to Improve Blackbox Attack Transferability

  • [SP’24b] Stealing DNN Model Architectures through Power Side Channel

  • [AsiaCCS’21] Robust and Verifiable Information Embedding Attacks to DNNs

  • [SP’18] Stealing Hyperparameters in Machine Learning

Security and Privacy Defenses

  • [SP’24a] Generalized Certified Robustness against Textual Adversarial Attacks

  • [ECCV’22] Universally Approximated Certified Robustness

  • [ICLR’22] Certified Robustness of Top-k Predictions against L0-Perturbation

  • [ICLR’20] Certified Robustness for Top-k Predictions against L2-Perturbation

  • [CVPRW’20] Certified Robustness against Backdoor Attacks

  • [AAAI’25a] Learning Robust and Privacy-Preserving Representations

  • [Security’24] Learning Privacy-Preserving Representations against Inference Attacks

  • [ACSAC’22] Lightweight Neuron-Guided Defense against MIAs

Graph-based Machine Learning for Security and Privacy

Graph-based ML for Security

  • [KDD’21c] An Unsupervised Approach for Unveiling Fake Accounts

  • [ACSAC’21] Detecting Growing-Up Behaviors of Malicious Accounts

  • [NDSS’19] Graph-based Security and Privacy Analytics

  • [RAID’18] Sybil Detection in Social Web Services without Manual Labels

  • [ICDM’17] Sybil Detection in Directed Social Networks via Guilt-by-Association

  • [DSN’17] Random Walk based Fake Account Detection in Online Social Networks

  • [INFOCOM’17] Sybil Detection in Online Social Networks via Local Rule based Propagation

Graph-based ML for Privacy

  • [TDSC’25] Personalized Photo-Sharing and Automatic Deletion in Social Networks

  • [AsiaCCS’22] A Graph-based Cross-Device Tracking Framework

  • [NDSS’19] Graph-based Security and Privacy Analytics

  • [WWW’17] Inferring User Attributes in Online Social Networks Using Markov Random Fields

Explainable Machine Learning

  • [ECCV’24] GNN Causal Explanation via Neural Causal Models

  • [CIKM’24b] An Information Propagation Approach for Improving DNN Model Explanations

  • [CVPR’23b] A Framework to Eliminate Explanation Noise from Integrated Gradients

Other Topics

Federated Learning (FL) & Graph Learning (GL)

  • [AAAI’25c] An Adaptive FL Approach for Privacy-Preserving Facial Expression Analysis

  • [ICDM’22] GraphFL: A FL Framework for Semi-Supervised Node Classification on Graphs

  • [AAAI’21] Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. GNNs

Prototype Learning & Single Sample per Person (SSPP)

  • [TNNLS’24] Heterogeneous Prototype Learning From Contaminated Faces

  • [TNNLS’23] Disentangling Prototype and Variation from SSPP

  • [CIKM’22] Cross-domain Prototype Learning from Contaminated Faces

  • [TIFS’22] Bidirectional Prototype Learning from Contaminated Faces across Heterogeneous Domains

  • [TIFS’21] Joint Prototype and Representation Learning from Contaminated SSPP

  • [TIFS’20] Synergistic Generic Learning for Face Recognition from a Contaminated SSPP

  • [PR’19] Robust Heterogeneous Discriminative Analysis for Face Recognition with SSPP

Myoelectric Control & Brain-Computer Interface

  • [JNE’18] Robust Extraction of Basis Functions for Simultaneous and Proportional Myoelectric Control

  • [TNSRE’16] Online Detection of Movement-Related Cortical Potentials

Teaching

  • Introduction to Machine Learning (CS 484): Fall 2024

  • Trustworthy Machine Learning (CS 595): Spring 2024, Fall 2022

  • Machine Learning (CS 584): Fall 2023, Spring 2022

  • Data Security and Privacy (CS 528): Spring 2023

Professional Services

Conference/Workshop Organizer

Proposal Reviewer and Panelist

  • National Science Foundation (NSF)

  • Research Grants Council (RGC) of Hong Kong

Conference Program Committee

  • IEEE Symposium on Security and Privacy (IEEE SP), 2026

  • ACM Conference on Computer and Communications Security (CCS), 2022-

  • Neural Information Processing Systems (NeurIPS), 2021-

  • International Conference on Machine Learning (ICML), 2021-

  • International Conference on Learning Representations (ICLR), 2021-

  • Computer Vision and Pattern Recognition (CVPR), 2021-

  • International Conference on Computer Vision (ICCV), 2021-

  • European Conference on Computer Vision (ECCV), 2022-

  • AAAI Conference on Artificial Intelligence (AAAI), 2021-

  • International Joint Conference on Artificial Intelligence (IJCAI), 2021-

  • ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022-

  • Design Automation Conference (DAC), 2022

  • IEEE Symposium on Security and Privacy (IEEE SP), 2019 (Student PC)

Journal Reviewer

  • Journal of Machine Learning Research (JMLR)

  • IEEE Transactions on Knowledge and Data Engineering (TKDE)

  • IEEE Transactions on Neural Networks and Learning Systems (TNNLS)

  • IEEE Transactions on Information Forensics and Security (TIFS)

  • IEEE Transactions on Dependable and Secure Computing (TDSC)

  • Computers & Security

  • ACM Transactions on Privacy and Security (TOPS)

  • ACM Computing Surveys (CSUR)

  • IEEE Transactions on Network Science and Engineering (TNSE)

  • IEEE Transactions on Wireless Communications (TWC)

  • IEEE Transactions on Biomedical Engineering (TBME)