Ke Jiang (蒋珂)
News
- 14 April, 2025: I started working as a visiting researcher at the Machine Learning & Systems Laboratory, Graduate School of Information Science and Technology, Osaka University. My advisor there is Professor Yoshinobu Kawahara.
- 21 March, 2025: Our paper RoGA: Towards Generalizable Deepfake Detection through Robust Gradient Alignment has been accepted to the IEEE International Conference on Multimedia & Expo (ICME) 2025 (Oral).
- 25 September, 2023: Our paper Recovering from out-of-sample states via inverse dynamics in offline reinforcement learning has been accepted to the 37th Annual Conference on Neural Information Processing Systems (NeurIPS) 2023.
Education
- (B.Sc.) 2015.9-2019.6, School of Computer Science, Nanjing University of Information Science and Technology.
- (M.Sc.) 2019.9-2022.4, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Advisor: Prof. Xiaoyang Tan.
- (Ph.D. student) 2022.4-, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Advisor: Prof. Xiaoyang Tan.
Research Interests
- Reinforcement learning (robust, generalizable, safe, and offline)
- Generative models for long-horizon planning
- Cross-domain classification (images & videos)
Publications
- Qiu L, Jiang K, Tan X. RoGA: Towards Generalizable Deepfake Detection through Robust Gradient Alignment. IEEE International Conference on Multimedia & Expo (ICME), 2025.
- Qiu L, Jiang K, Tan X. Multi-level Distributional Discrepancy Enhancement for Cross Domain Face Forgery Detection. Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2024, 508-522.
- Jiang K, Yao J, Tan X. Recovering from out-of-sample states via inverse dynamics in offline reinforcement learning. Advances in Neural Information Processing Systems (NeurIPS), 2023, 36.
- Shen J, Jiang K, Tan X. Boundary Data Augmentation for Offline Reinforcement Learning. ZTE Communications, 2023, 21(3): 29.
Project Experience
- Research on Offline Reinforcement Learning Methods and Theories in Complex Real-World Scenarios (National Natural Science Foundation of China, No. 6247072715, Principal Investigator: Xiaoyang Tan).
Preprints & Under Review (†: Equal Contribution)
- Jiang K, Jiang W, Yao L, Tan X. Beyond Non-Expert Demonstrations: Outcome-Driven Action Constraint for Offline Reinforcement Learning.
- †Jiang K, †Jiang W, Tan X. Variational OOD State Correction for Offline Reinforcement Learning.
- †Wang Z, †Jiang K, Tan X. Calibrating Diffuser for Long-horizon Planning in Offline RL.
- †Qiu L, †Jiang K, Tan X. Contrastive Desensitization Learning for Cross Domain Face Forgery Detection.
- Jiang K, Tan X. Towards *** Control (under double-blind review).
Talks
- Application of Koopman Theory in Generalizable Offline Reinforcement Learning. (Jan 2025, lab seminar, Machine Learning & Systems Laboratory, Osaka University, Japan)
- Offline reinforcement learning from non-expert data via state-supported bootstrapping. (Nov 2024, A3 Foresight Program, Beijing, China)
Funding
- April 2025 - October 2025, Short Visit Program, Nanjing University of Aeronautics and Astronautics (No.241206DF16).
Academic & Professional Activities
- Reviewer for international conferences, including NeurIPS, ICLR, and ICME.
- Teaching Assistant for “Machine Learning and Its Applications” (2023, taught by Professor Xiaoyang Tan) at Nanjing University of Aeronautics and Astronautics.
Hobbies
Fitness, Food, Traveling