Binghui (Alan) Wang
Openings: I'm always looking for highly motivated Ph.D. students, visiting scholars/students, and research interns to join my group. If you are interested, please send an email to alanwbh@gmail.com with your CV and transcripts attached.
About Me
I have been an Assistant Professor in the Department of Computer Science at Illinois Institute of Technology (Illinois Tech) since August 2021. I am interested in Trustworthy AI and Security & Privacy.
I did my postdoc at Duke University from August 2019 to July 2021, working with Neil Gong and Yiran Chen. I earned my Ph.D. in May 2019 from Iowa State University, where I was advised by Neil Gong.
I received both my M.Sc. (2015) and B.E. (2012) from Dalian University of Technology, China, graduating with the highest honor (Qu Bochuan Scholarship).
I am a recipient of the NSF CAREER (2024) and CRII (2023) Awards, as well as research awards from Cisco (2022) and Amazon (2020). My work has received best paper awards at CCS’24 and CVPRW’20 and an honorable mention at NDSS’19. I was named one of Baidu Scholar’s Global Top 50 Chinese Rising Stars in AI + X (2022), received the Dean’s Excellence in Research Award (2024), and was selected for Stanford University’s World’s Top 2% Scientists List (2024, 2025).
Recent News
Recent TPC: S&P, USENIX Security, CCS, ICML, ICLR, NeurIPS
12/2025: I have been elevated to the rank of IEEE Senior Member.
09/2025: Our work on certified robustness with universal asymmetric randomization is accepted by IEEE CSF 2026 (a prestigious venue for computer security foundations). Congrats!
09/2025: Our work on Anti-Causal Representation Learning is accepted by NeurIPS 2025. Congrats to Arman!
09/2025: Our work on Ensemble Conformal Prediction against Data Poisoning Attacks is accepted by IEEE SP 2026 (Cycle 1). Congrats to Yuxin!
08/2025: Our proposal on Building a Holistic System for Privately Sharing Naturalistic Driving Data is funded by the new NSF PDaSP (Track 2) program. Very grateful to NSF and FHWA/DOT for the generous support!
07/2025: Congrats to Jiate for receiving the Student Travel Award for USENIX Security’25. Thanks for the generous support!
06/2025: Our work on Rectifying the Privacy and Efficacy Measurements for Machine Unlearning (RULI) is accepted by USENIX Security 2025. Congrats!
02/2025: Our work on Deterministic Certification of GNNs against Poisoning Attacks with Arbitrary Perturbations is accepted by CVPR 2025. Congrats to Jiate!
02/2025: Our work on Differentially Private Federated Learning is accepted by CODASPY 2025. Congrats!
01/2025: I am honored to receive the Dean's Excellence in Research Award from the College of Computing at Illinois Tech. Thanks to the College for the recognition!
01/2025: Our work on Certified Robust GNNs against Arbitrary Perturbations is accepted by USENIX Security 2025. Congrats to Jiate!
01/2025: Our work on Provably Robust Explainable GNNs is accepted by ICLR 2025. Congrats to Jiate!
01/2025: Our AAAI’25 paper on Adaptive Federated Learning for Parkinson’s Disease Diagnosis is selected for an ORAL presentation. Congrats to All!
12/2024: Congrats to Ben for receiving the Student Scholarship for AAAI 2025. Thanks for the generous support!
12/2024: Our work on Information-Theoretic Robust and Privacy-Preserving Representation Learning is accepted by AAAI 2025. Congrats to Ben and Leily!
12/2024: Our work on Practicable Black-box Attacks on Link Prediction in Dynamic Graphs is accepted by AAAI 2025. Congrats to Jiate!
12/2024: Our work on Adaptive Federated Learning for Parkinson’s Disease Diagnosis is accepted by AAAI 2025. Congrats to All!
11/2024: I am organizing the 3-day Workshop on Harmonious Human-AI Ecosystems scheduled from Nov. 20 to Nov. 22 as part of the Fall 2024 IDEAL Special Program on Interpretability, Privacy, and Fairness.
10/2024: Our work on the Distributed Backdoor Attacks on FedGL and Certified Defenses received the CCS’24 Distinguished Paper Award. Congrats to all the co-authors!
09/2024: Our Provably Robust Watermark for FedGL is accepted by NeurIPS 2024. Congrats to Yuxin!
07/2024: Congrats to Leily for receiving the Student Travel Award for USENIX Security 2024. Thanks for the generous support!
07/2024: Our Optimization-based Attack (breaking SOTA poisoning defenses to federated learning) is accepted by CIKM 2024. Congrats to Yuxin!
07/2024: Our Information Propagation-based Explanation Framework is accepted by CIKM 2024. Congrats to Ruo!
07/2024: Our Certified Defense for Distributed Backdoor Attack on Federated Graph Learning is accepted by CCS 2024. Congrats to Yuxin!
07/2024: Our Certified Black-Box Attack Framework (breaking SOTA defenses with provable confidence and limited resources) is accepted by CCS 2024. Congrats to Hanbin!
07/2024: Our Causal-Explainable GNN via Neural Causal Models is accepted by ECCV 2024. Congrats to Arman!
06/2024: Our IDEAL institute is organizing the Annual Meeting on June 6th and Industry Day on June 7th. You are welcome to register for the event here for free.
06/2024: Our Knowledge Poisoning Attack on the Retrieval-Augmented Generation of LLMs is accepted by USENIX Security 2025. Congrats to All!
05/2024: Arman starts his research internship on Causal Explanation and Causal Representation Learning at Mayo Clinic in Summer 2024. Congrats!
05/2024: Leily starts her research internship on Data Analytics at CCC Intelligent Solutions in Summer 2024. Congrats!
05/2024: Our paper on Understanding the Robustness of GNN Explainers is accepted by ICML 2024. Congrats to Jiate!
04/2024: Jane receives the Best Presentation Award for Ph.D. Research at the 2024 ECE Day - Student Research Competition. Congrats!
02/2024: Our Information-Theoretic Privacy-Preserving Representation Learning Framework against Inference Attacks is accepted by USENIX Security 2024 (Fall Cycle). Congrats to Leily and Ben!
02/2024: My proposal on Trustworthy Machine Learning Meets Information Theory receives the NSF CAREER Award. Thank NSF for the generous support!
01/2024: Our Deterministic Certification of GNNs against Adversarial Perturbations is accepted for an ORAL presentation in ICLR 2024. Congrats to All!
12/2023: Our Privacy-Preserving Federated Learning against Attribute Inference Attacks is accepted by AAAI 2024. Congrats to Caridad and Leily!
08/2023: Our proposal on Learning Evolving Graphs At Scale is funded by the NSF CCF SHF program. Thank NSF for the generous support!
07/2023: Our Generalized Certified Robustness against Textual Adversarial Attacks is accepted by IEEE SP 2024 (Spring Cycle). Congrats to All!
07/2023: Our Power Side Channel based DNN Model Architectures Stealing is accepted by IEEE SP 2024 (Spring Cycle). Congrats to All!
Students
Ph.D. Students
Tanvir Ahmed Khan (He/Him), Spring 2026
Yunkai Zhao (He/Him) (Co-advised with Dr. Yanxue Jia), Spring 2026
Romina Omidi (She/Her), Fall 2024
Jiawen Wang (She/Her), Fall 2024
Haoran Dai (He/Him), Fall 2024
Yuxin Yang (She/Her) [SP’26, TOPS’25, CCS’24 Best paper, NeurIPS’24, CIKM’24], Fall 2023
Arman Behnam (He/Him) [NeurIPS’25, ECCV’24], Spring 2023
Ph.D. Alumni
Master's Students
Samin Bin Karim (He/Him), Spring 2024
Sayedeh Leila Noorbakhsh (She/Her) [AAAI’25, Security’24, AAAI’24], Spring 2023 - Summer 2025
Luis Mares De La Cruz (He/Him) [DLSP’25], Spring 2022 (Now at SDG GROUP, Chicago)
Caridad Arroyo Arevalo (She/Her) [AAAI’24], Spring 2022 (Now at SDG GROUP, Chicago)
Research Intern
Teaching
Introduction to Machine Learning (CS 484): Spring 2026, Fall 2024
Trustworthy Machine Learning (CS 595): Spring 2024, Fall 2022
Machine Learning (CS 584): Fall 2023, Spring 2022
Data Security and Privacy (CS 528): Fall 2025, Spring 2023
Research Areas and Recent/Selected Publications
Trustworthy LLM/GenAI
Security Attacks
Trustworthy Representation Learning
[AAAI’25a] Robust and Privacy-Preserving Representation Learning
[Security’24] Privacy-Preserving Representation Learning against Inference Attacks
[AAAI’24] Privacy-Preserving Representation Learning for FL Against Attribute Inference Attacks
[KDD’21a] Privacy-Preserving Representation Learning on Graphs
Trustworthy (Explainable) Graph Learning
Security and Privacy Attacks
[AAAI’25b] Black-box Attacks on Dynamic GNNs for Link Prediction
[ICML’24] Black/Gray-box Attacks on Explainable GNNs
[CVPR’23a] Certified Robustness Inspired Attack Framework on GNNs
[CVPR’22] Score-based Black-box Attacks on GNNs with Guaranteed Performance
[CCS’21] Hard-label Black-box Attacks on GNNs
[SACMAT’21] Backdoor Attacks on GNNs
[CCS’19] Attacking Graph-based Collective Classification
Security and Privacy Defenses
[CVPR’25] Provably Robust GNNs against Poisoning Attacks with Arbitrary Perturbations
[Security’25b] Provably Robust GNNs against Arbitrary Perturbations with Deterministic Guarantees
[ICLR’25] Provably Robust Explainable GNNs against Graph Perturbations with Deterministic Guarantees
[CCS’24a] Provably Robust FedGL against Distributed Backdoor Attacks with Deterministic Guarantees
[ICLR’24] Provably Robust GNNs against Graph Perturbations with Deterministic Guarantees
[KDD’21b] Provably Robust GNNs against Graph Perturbations with Probabilistic Guarantees
[WWW’20] Provably Robust Community Detection against Graph Perturbations with Probabilistic Guarantees
Trustworthy Federated Learning
Security and Privacy Attacks
Security and Privacy Defenses
[CODASPY’25] Universally Harmonizing DP Mechanisms for FL: Boosting Accuracy and Convergence
[NeurIPS’24] Provably Robust Watermarking for FedGL
[AAAI’24] Privacy-Preserving FL Against Attribute Inference Attacks
[CVPR’21] Privacy-Preserving FL against Data Reconstruction Attacks
Trustworthy Deep Learning
Security and Privacy Attacks
[Security’25c] Rectifying the Privacy and Efficacy Measurements for Machine Unlearning
[CCS’24b] Certifiable Black-Box Attacks with Randomized Adversarial Examples
[EuroSP’23] Certified Radius-Guided Attack Framework to Image Segmentation Models
[NeurIPS’20] Perturbing Feature Hierarchy to Improve Blackbox Attack Transferability
[SP’24b] Stealing DNN Model Architectures through Power Side Channel
[AsiaCCS’21] Robust and Verifiable Information Embedding Attacks to DNNs
[SP’18] Stealing Hyperparameters in Machine Learning
Security and Privacy Defenses
[SP’24a] Generalized Certified Robustness against Textual Adversarial Attacks
[ECCV’22] Universally Approximated Certified Robustness
[ICLR’22] Certified Robustness of Top-k Predictions against L0-Perturbation
[ICLR’20] Certified Robustness for Top-k Predictions against L2-Perturbation
[CVPRW’20] Certified Robustness against Backdoor Attacks
[AAAI’25a] Learning Robust and Privacy-Preserving Representations
[Security’24] Learning Privacy-Preserving Representations against Inference Attacks
[ACSAC’22] Lightweight Neuron-Guided Defense against MIAs
Graph-based Machine Learning for Security and Privacy
Graph-based ML for Security
[KDD’21c] An Unsupervised Approach for Unveiling Fake Accounts
[ACSAC’21] Detecting Growing-Up Behaviors of Malicious Accounts
[NDSS’19] Graph-based Security and Privacy Analytics
[RAID’18] Sybil Detection in Social Web Services without Manual Labels
[ICDM’17] Sybil Detection in Directed Social Networks via Guilt-by-Association
[DSN’17] Random Walk based Fake Account Detection in Online Social Networks
[INFOCOM’17] Sybil Detection in Online Social Networks via Local Rule based Propagation
Graph-based ML for Privacy
[TDSC’25] Personalized Photo-Sharing and Automatic Deletion in Social Networks
[AsiaCCS’22] A Graph-based Cross-Device Tracking Framework
[NDSS’19] Graph-based Security and Privacy Analytics
[WWW’17] Inferring User Attributes in Online Social Networks Using Markov Random Fields
Explainable Machine Learning
[ECCV’24] GNN Causal Explanation via Neural Causal Models
[CIKM’24b] An Information Propagation Approach for Improving DNN Model Explanations
[CVPR’23b] A Framework to Eliminate Explanation Noise from Integrated Gradients
Other Topics
Federated Learning (FL) & Graph Learning (GL)
[AAAI’25c] An Adaptive FL Approach for Privacy-Preserving Facial Expression Analysis
[ICDM’22] GraphFL: A FL Framework for Semi-Supervised Node Classification on Graphs
[AAAI’21] Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. GNNs
Prototype Learning & Single Sample per Person (SSPP)
[TNNLS’24] Heterogeneous Prototype Learning From Contaminated Faces
[TNNLS’23] Disentangling Prototype and Variation from SSPP
[CIKM’22] Cross-domain Prototype Learning from Contaminated Faces
[TIFS’22] Bidirectional Prototype Learning from Contaminated Faces across Heterogeneous Domains
[TIFS’21] Joint Prototype and Representation Learning from Contaminated SSPP
[TIFS’20] Synergistic Generic Learning for Face Recognition from a Contaminated SSPP
[PR’19] Robust Heterogeneous Discriminative Analysis for Face Recognition with SSPP
Myoelectric Control & Brain-Computer Interface