I joined the School of Artificial Intelligence at Beijing Normal University (BNU) as an Associate Professor in September 2024. Before that, I was a tenure-track young professor at the Institute of Information Engineering (IIE), Chinese Academy of Sciences (CAS), from September 2020 to September 2024. I received my Ph.D. from the same institute in July 2020, under the supervision of Professor Yong Liu and Professor Weiping Wang.

Artificial intelligence methods represented by large language models (LLMs) have made significant progress at both the algorithmic and application levels. However, their intrinsically data-driven, statistical nature limits reasoning reliability (high hallucination rates), making it difficult to meet the stringent reliability requirements of high-stakes decision-making and educational scenarios. My research focuses on the reasoning capabilities and reliability of large language models, aiming to enhance complex reasoning ability and reasoning reliability through fundamental theoretical exploration and key technical investigations, and thereby to build interpretable, trustworthy, and practically deployable LLM reasoning systems. Specific research interests include, but are not limited to:

  • Complex reasoning capabilities of LLMs: Investigating information compression mechanisms, reinforcement learning-driven reasoning optimization, long-context reasoning (Long CoT), and RAG-based tool invocation to enhance reasoning depth and flexibility in complex tasks.

  • Reasoning reliability of LLMs: Studying knowledge-driven reasoning, formal reasoning, and neuro-symbolic integration methods to reduce hallucination rates and improve interpretability and trustworthiness of model reasoning.

  • Educational applications of LLMs: Exploring functionalities such as intelligent question generation, intelligent literature retrieval and recommendation, and lesson-plan generation, to bring LLM reasoning into practical deployment and deliver concrete outcomes in educational settings.

Career

Institution | Title | Time
Beijing Normal University | Associate Professor | 2024.09 - present
Microsoft Research Asia (NLC Group) | Visiting Scholar | 2024.04 - 2024.06
Institute of Information Engineering, CAS | Associate Research Fellow | 2023.10 - 2024.09
Institute of Information Engineering, CAS | Postdoc Researcher | 2020.09 - 2023.10

Education

Institution | Major | Degree | Time
University of Chinese Academy of Sciences (UCAS) | Cyber Security | Ph.D. | 2015.09 - 2020.06
Northeastern University | Software Engineering (International Class) | Bachelor | 2011.09 - 2015.06

Selected Papers [Full List] [Google Scholar]

  • A Survey on Model Compression for Large Language Models. [pdf]
    Jian Li, Yong Liu, Weiping Wang.
    Transactions of the Association for Computational Linguistics (TACL), 2024, accepted. CCF-B Journal / JCR Q1.

  • Distilling mathematical reasoning capabilities into Small Language Models. [pdf]
    Xunyu Zhu, Jian Li*, Yong Liu, Can Ma, Weiping Wang.
    Neural Networks, 2024. CCF-B Journal / JCR Q1.

  • Optimal Rates for Agnostic Distributed Learning. [pdf] [code]
    Jian Li, Yong Liu, Weiping Wang.
    IEEE Transactions on Information Theory (TIT), 2023. CCF-A Journal.

  • Optimal Convergence Rates for Distributed Nyström Approximation. [pdf] [code]
    Jian Li, Yong Liu, Weiping Wang.
    Journal of Machine Learning Research (JMLR), 2023. CCF-A Journal.

  • Convolutional Spectral Kernel Learning with Generalization Guarantees. [pdf] [code]
    Jian Li, Yong Liu, Weiping Wang.
    Artificial Intelligence (AI), 2022. CCF-A Journal.

  • Optimal Convergence Rates for Agnostic Nyström Kernel Learning. [pdf]
    Jian Li, Yong Liu, Weiping Wang.
    International Conference on Machine Learning (ICML), 2023. CCF-A Conference.

  • Multi-Class Learning: From Theory to Algorithm. [pdf] [poster] [slides] [3-minute video] [code]
    Jian Li, Yong Liu, Rong Yin, Hua Zhang, Lizhong Ding, Weiping Wang.
    Advances in Neural Information Processing Systems 31 (NeurIPS), 2018. CCF-A Conference.

  • Federated Learning for Non-IID Data: From Theory to Algorithm. [pdf] [presentation] [🏆Best Student Paper Award] (1/92)
    Bojian Wei, Jian Li*, Yong Liu, Weiping Wang.
    Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2021. CCF-C Conference.

Projects

  • National Natural Science Foundation of China (No. 62106257), 2026.01 - 2029.12, ¥500,000.
    Information Compression of Large Language Models: Mathematical Mechanism Analysis and Algorithm Design.

  • National Key R&D Program of China (2022YFB3105302.2), 2022.12 - 2025.11, ¥1,200,000.
    Aggregation and Collaborative Techniques for Cross-platform Heterogeneous Data.

  • National Natural Science Foundation of China (No. 62106257), 2022.01 - 2024.12, ¥300,000.
    Large Scale Structured Prediction with Automated Spectral Kernel Learning.

  • China Postdoctoral Science Foundation (Special Support, No. 2023T160680), 2023.07 - 2024.03, ¥180,000.
    Research on Deep Differentiable Gaussian Processes for Structured Prediction.

  • Special Research Assistant Project of CAS, 2020.09 - 2022.09, ¥800,000.
    Large-scale Few-shot Automated Machine Learning.

  • Talent Program Class A of Institute of Information Engineering, CAS, Tenure-track Professor, 2023.10 - 2026.09.

  • Talent Program Class B of Institute of Information Engineering, CAS, Tenure-track Young Professor, 2020.09 - 2023.10.

Patents

Granted

  • Hailun Lin, Yong Liu, Jian Li, Weiping Wang. A Large-Scale Ontology Merging Method that Integrates Representation Learning and Divide-and-Conquer Strategy: China. Granted No. CN110059194A. Grant Date: April 8, 2022.

Pending

  • Jian Li, Yong Liu, Liubin Wang, Yiguo Yang, Juhong Wang. Neural Network Architecture Search Method, Device, Computer Equipment, and Storage Medium. CN: 202011567991.3. App. Date: December 25, 2020.
  • Jian Li, Jiaoyang Li, Bojian Wei, Yong Liu, Weiping Wang. A Federated Learning Method and System Based on Attention Mechanism. CN: 202311073645.3. App. Date: August 24, 2023.
  • Jian Li, Jiaoyang Li, Zheng Lin, Yong Liu, Weiping Wang. A Vertical Domain Large Model Method and System Based on Knowledge Distillation and Prompt Engineering. CN: 202311073641.5. App. Date: August 24, 2023.

Students

  • Ph.D. students
    • 🎓Yilin Kang (2020.09 - 2023.06), Differential Privacy.
      Publications: Computers & Security, CIKM, ICCS.
      Post-graduation: Researcher at Purple Mountain Laboratories.
    • Xunyu Zhu (2020.09 - present), Efficient LLM Inference.
      Publications: Neural Networks × 2, TACL, ICDM.
      In submission: TACL.
    • Boxuan Che (2022.09 - present), Efficient Graph Neural Networks.
  • Master students
    • 🎓Bojian Wei (2020.09 - 2022.06), Federated Learning on Heterogeneous Data.
      Publications: PRICAI 2021 (Best Student Paper Award), TNNLS, ECML-PKDD, IJCNN.
      Post-graduation: Management Trainee at Bank of China Head Office.
    • Xuning Zhang (2022.09 - present), Federated Learning.
      Excellent Bachelor's Thesis at Wuhan University, 2023.
      In submission: AAAI 2025.

Honors and Awards

  • Microsoft Research Asia StarTrack Scholars Program, 2024.
  • Talent Plan Class A of IIE, CAS, 2023.
  • PRICAI 2021 Best Student Paper Award, 2021.
  • Special Research Assistant of Chinese Academy of Sciences, 2020.
  • Talent Plan Class B of IIE, CAS, 2020.
  • AIDU Talents of Baidu Research, 2020.
  • Joint Ph.D. Program with Stanford University (Discontinued due to COVID-19), 2020.02 - 2021.02.
  • Outstanding Graduates of Beijing, 2020.
  • Outstanding Graduates of University of Chinese Academy of Sciences (UCAS), 2020.
  • Outstanding Graduates of Institute of Information Engineering, CAS, 2020.
  • National Scholarship for Doctoral students, 2019.
  • ZhuLiYueHua Scholarship for Excellent Doctoral Student, 2019.
  • CAS Presidential Scholarship, 2019.
  • National Scholarship for Doctoral students, 2018.

Academic Service

  • Guest Editor: Mathematics.
  • Conference Program Committee Member: ICML, NeurIPS, ICLR, AAAI, IJCAI, ECAI, etc.
  • Journal Reviewer: TPAMI, JMLR, Pattern Recognition, etc.