Zhihan Jiang

Ph.D. Candidate, The Chinese University of Hong Kong (CUHK)

I am a final-year Ph.D. candidate in Computer Science and Engineering in the ARISE Lab at The Chinese University of Hong Kong, supervised by Prof. Michael R. Lyu. Previously, I obtained my Bachelor's degree from Sun Yat-sen University in 2022.
My research is dedicated to enhancing the observability and reliability of large-scale cloud and LLM systems. I am particularly interested in AIOps for LLM systems (Ops4LLM) and LLM-enhanced AIOps (LLM4Ops). My recent work focuses on developing reliable agentic systems, with a specific emphasis on agent-driven DevOps automation.
I am always open to academic discussions and brainstorming. Feel free to drop me an email if you are interested in connecting.


Education
  • The Chinese University of Hong Kong

    Aug. 2022 - present

    Ph.D. in Computer Science and Engineering

    • (TA) ENGG1110 Problem Solving By Programming (Spring & Fall, 2023)
    • (TA) CSCI3160 Design and Analysis of Algorithms (Fall, 2022)
  • Sun Yat-sen University

    Sep. 2018 - Jul. 2022

    B.S. in Computer Science and Technology

    • GPA: 4.0/4.0, Rank: 1/40; IELTS band: 7.5
Honors & Awards
  • ACM SIGSOFT Distinguished Paper Award, ICSE'26 2026
  • ACM SIGSOFT Distinguished Paper Award, ASE'25 2025
  • IEEE Best Paper Award, CLOUD'24 2024
  • National Scholarship, Ministry of Education P.R.C. 2020 & 2021
  • The First Prize Scholarship, SYSU 2019 - 2021

Academic Service
  • Program Committee: AIWare'26, FORGE'26, APSEC'25, MSR'25, ASE'24-industry
  • Artifact Evaluation Committee: ICSE'26, OSDI'25, ATC'25, ICSE'25, OSDI'24, ATC'24, ISSTA'24
  • Journal Reviewer: TSE, TOSEM, CSUR, EAAI
  • Conference (sub-)Reviewer: FSE'26, ICDE'26-industry, ASE'25, ICSE'25, ISSTA'25, ISSRE'25, FSE'25-industry, FSE'24, ISSRE'24, ICPE'24, ESEC/FSE'23, DSN'23, ISSRE'23
News
2026
  • 02 Mar: 🏆 Our PreServe has won the ACM SIGSOFT Distinguished Paper Award at ICSE'26 (22/1469).
2025
  • 03 Nov: 🏆 Our iKnow has won the ACM SIGSOFT Distinguished Paper Award at ASE'25 (21/1190).
  • 16 Oct: 🎉 Our PreServe has been accepted by ICSE'26.
  • 12 Sep: 🎉 Our LogPilot, iKnow, LogImprover, and ErrorPrism have been accepted by ASE'25.
  • 30 Apr: 🎉 Our LLMPrism has been accepted by DSN'25.
  • 02 Apr: 🎉 Our L4 and LUNAR have been accepted by FSE'25.
2024
  • 08 Jul: 🏆 Our TraceMesh has won the IEEE Best Paper Award at CLOUD'24 (1/191).
  • 17 Jun: 🧐 I will serve on the Program Committee for the Industry Track of ASE'24.
  • 01 Mar: 🎉 Our Loghub-2.0 has been accepted without revision by ISSTA'24 (42/471).
  • 22 Jan: 🎉 Our LILAC and SCLogger have been accepted without revision by FSE'24 (56/483).
2023
  • 07 Aug: 🎉 Our Prism has been accepted by ASE'23 (134/629).
Publications
2026
PreServe: Intelligent Management for LMaaS Systems via Hierarchical Prediction

Zhihan Jiang, Yujie Huang, Guangba Yu, Junjie Huang, Jiazhen Gu, Michael R. Lyu

ICSE'26 The IEEE/ACM International Conference on Software Engineering. 🏆 ACM SIGSOFT Distinguished Paper Award

Language-Model-as-a-Service (LMaaS) platforms handle millions of daily requests and must meet latency, SLO, and efficiency goals, but conventional cloud managers falter under LMaaS's dynamic, bursty workloads. We introduce a hierarchical-prediction management framework that pairs a coarse-grained service-workload predictor with a fine-grained request-load predictor to build per-instance load anticipators. By fusing long- and short-term forecasts, it proactively auto-scales resources and routes requests based on current and anticipated load, preventing under-/over-provisioning and instance load imbalance.

ICSE'26 · Apr 2026 · Rio de Janeiro, Brazil
2025
LogPilot: Intent-aware and Scalable Alert Diagnosis for Large-scale Online Service Systems

Zhihan Jiang, Jinyang Liu, Yichen Li, Haiyu Huang, Xiao He, Tieying Zhang, Jianjun Chen, Yi Li, Rui Shi, Michael R. Lyu

ASE'25 The IEEE/ACM International Conference on Automated Software Engineering.

Effective alert diagnosis is essential for ensuring the reliability of large-scale online service systems. While various automated tools have been proposed, they struggle in practice due to alert-agnostic log scoping and the inability to organize complex data effectively for reasoning. To overcome these limitations, we introduce LogPilot, an intent-aware and scalable log-based framework powered by LLMs for automated alert diagnosis.

ASE'25 · Nov 2025 · Seoul, South Korea
iKnow: An Intent-Guided Chatbot for Cloud Operations with Retrieval-Augmented Generation

Junjie Huang, Yuedong Zhong, Guangba Yu, Zhihan Jiang, Minzhi Yan, Wenfei Luan, Tianyu Yang, Rui Ren, Michael R. Lyu

ASE'25 The IEEE/ACM International Conference on Automated Software Engineering. 🏆 ACM SIGSOFT Distinguished Paper Award

While the sheer volume of operational documentation required for managing complex cloud services hinders efficient knowledge acquisition, Retrieval-Augmented Generation (RAG) offers a streamlined solution by retrieving relevant knowledge to generate concise, referenced answers. However, deploying a reliable RAG-based chatbot for cloud operations remains a challenge. In this experience paper, we first analyze the development and deployment of RAG-based chatbots for operational question answering (OpsQA) at a large-scale cloud vendor. Based on these findings, we propose iKnow, an intent-guided RAG-based chatbot that integrates intent detection, query rewriting tailored to each intent, and missing knowledge detection to enhance answer quality.

ASE'25 · Nov 2025 · Seoul, South Korea
Automated Proactive Logging Quality Improvement for Large-Scale Codebases

Yichen Li, Jinyang Liu, Junsong Pu, Zhihan Jiang, Zhuangbin Chen, Xiao He, Tieying Zhang, Jianjun Chen, Yi Li, Rui Shi, Michael R. Lyu

ASE'25 The IEEE/ACM International Conference on Automated Software Engineering.

High-quality logging is critical for the reliability of cloud services, yet the industrial process for improving it is typically manual, reactive, and unscalable. Existing automated tools inherit this reactive nature, fail to answer the crucial whether-to-log question, and are constrained to simple logging statement insertion, thus addressing only a fraction of real-world logging improvement needs. To address these gaps and cope with logging debt in large-scale codebases, we propose LogImprover, an LLM-powered framework that automates proactive logging quality improvement and introduces two paradigm shifts: from reactive generation to proactive discovery, and from simple insertion to holistic logging patch generation.

ASE'25 · Nov 2025 · Seoul, South Korea
ErrorPrism: Reconstructing Error Propagation Paths in Cloud Service Systems

Junsong Pu, Yichen Li, Zhuangbin Chen, Jinyang Liu, Zhihan Jiang, Jianjun Chen, Rui Shi, Zibin Zheng, Tieying Zhang

ASE'25 The IEEE/ACM International Conference on Automated Software Engineering.

Reliability management in cloud service systems is challenging due to the cascading effect of failures. Error wrapping, a practice prevalent in modern microservice development, enriches errors with context at each layer of the function call stack, constructing an error chain that describes a failure from its technical origin to its business impact. However, this also presents a significant traceability problem when recovering the complete error propagation path from the final log message back to its source. Existing approaches are ineffective at addressing this problem. To fill this gap, we present ErrorPrism for automated reconstruction of error propagation paths in production microservice systems by integrating static analysis and an LLM agent.

ASE'25 · Nov 2025 · Seoul, South Korea
LLMPrism: Black-box Performance Diagnosis for Production LLM Training Platforms

Zhihan Jiang, Rui Ren, Guangba Yu, Yulun Wu, Wenwei Gu, Yichen Li, Yujie Huang, Cong Feng, Zengyin Yang, Yongqiang Yang, Michael R. Lyu

DSN'25 The IEEE/IFIP International Conference on Dependable Systems and Networks.

Multi-tenant large-scale LLM training platforms have been built to offer LLM training services, but performance issues occur frequently and can result in substantial resource wastage. The limited visibility from the perspective of platform providers impedes existing profiling methods and poses challenges to the performance monitoring and diagnosis of LLM training jobs. This paper proposes LLMPrism, the first black-box performance diagnosis solution for LLM training platforms, which utilizes underlying network flow data and the distinct characteristics of the LLM training procedure. By progressively recognizing LLM training jobs, identifying their parallelism strategies, and reconstructing the training timelines, LLMPrism achieves non-intrusive, lightweight, and continuous monitoring of LLM training systems.

DSN'25 · Apr 2025 · Naples, Italy
No More Labelled Examples? An Unsupervised Log Parser with LLMs

Junjie Huang, Zhihan Jiang, Zhuangbin Chen, Michael R. Lyu

FSE'25 The ACM International Conference on the Foundations of Software Engineering.

Log parsing is a critical prerequisite for many log analysis tasks. However, existing language model-based parsers often rely heavily on high-quality labeled examples to perform well, which limits their practicality in real-world scenarios. To overcome this limitation, we propose LUNAR, an unsupervised, LLM-based method for efficient and ready-to-use log parsing, which is based on the key insight that while LLMs struggle with direct log parsing, their performance can be significantly improved through comparative analysis of multiple logs that differ only in their parameter components.

FSE'25 · Apr 2025 · Trondheim, Norway
L4: Diagnosing Large-scale LLM Training Failures via Automated Log Analysis

Zhihan Jiang, Junjie Huang, Guangba Yu, Zhuangbin Chen, Yichen Li, Renyi Zhong, Cong Feng, Yongqiang Yang, Zengyin Yang, Michael R. Lyu

FSE'25 The ACM International Conference on the Foundations of Software Engineering.

The training process of Large Language Models (LLMs) requires substantial resources, as evidenced by scaling laws, which frequently leads to inevitable failures. In this paper, we present the first empirical study of LLM training failures on our production platform. Leveraging the obtained insights and the distinct cross-job, spatial, and temporal patterns present in LLM training logs, we propose L4, the first log-based large-scale LLM training failure diagnosis framework, which can automatically extract failure-indicating information (i.e., log events, nodes, stages, and iterations) from extensive training logs, thereby reducing manual effort and facilitating failure recovery.

FSE'25 · Mar 2025 · Trondheim, Norway
COCA: Generative Root Cause Analysis for Distributed Systems with Code Knowledge

Yichen Li, Yulun Wu, Jinyang Liu, Zhihan Jiang, Zhuangbin Chen, Guangba Yu, Michael R. Lyu

ICSE'25 The IEEE/ACM International Conference on Software Engineering.

Automatically identifying the root cause of runtime failures is critical for ensuring the reliability of distributed systems. However, prevailing automatic root cause analysis (RCA) approaches rely on comprehensive runtime monitoring data, which is often not fully available on issue platforms. To obtain more accurate and comprehensive RCA results, we propose COCA, a code-knowledge-enhanced RCA approach for issue reports.

ICSE'25 · Jan 2025 · Ottawa, Canada
2024
Demystifying and Extracting Fault-indicating Information from Logs for Failure Diagnosis

Junjie Huang, Zhihan Jiang, Jinyang Liu, Yintong Huo, Jiazhen Gu, Zhuangbin Chen, Cong Feng, Hui Dong, Zengyin Yang, Michael R. Lyu

ISSRE'24 The International Symposium on Software Reliability Engineering.

Logs are crucial for maintaining online service systems, but manual investigation of logs by engineers is labor-intensive and prone to errors. We find that engineers typically prioritize two categories of log information for diagnosis: fault-indicating descriptions (FID) that highlight abnormal events, and fault-indicating parameters (FIP) that identify associated entities. Motivated by these findings, we propose Log4d, a two-stage approach with novel prompt-based tuning to automatically extract fault-indicating information from logs for fault diagnosis.

ISSRE'24 · Oct 2024 · Tsukuba, Japan
Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study

Yichen Li, Yintong Huo, Zhihan Jiang, Renyi Zhong, Pinjia He, Yuxin Su, Lionel C. Briand, Michael R. Lyu

TSE'24 IEEE Transactions on Software Engineering.

Despite advancements in natural language generation and programming language comprehension, the potential of large language models (LLMs) for generating logging statements remains unexplored. To fill the gap, we conduct the first study on LLMs for logging statement generation. We create a logging statement generation dataset and evaluate the effectiveness and generalization capabilities of 13 top-performing LLMs. Our empirical analysis reveals the limitations of current logging methods, highlights the promise of LLM-based logging tools, and offers actionable guidance for developing more practical models.

TSE'24 · Sep 2024
A Large-scale Evaluation for Log Parsing Techniques: How Far are We?

Zhihan Jiang, Jinyang Liu, Junjie Huang, Yichen Li, Yintong Huo, Jiazhen Gu, Zhuangbin Chen, Jieming Zhu, Michael R. Lyu

ISSTA'24 The ACM SIGSOFT International Symposium on Software Testing and Analysis.

Log parsing is essential for converting unstructured logs into structured data for automated analysis. Evaluating the characteristics and performance of various log parsers is crucial; however, the existing Loghub dataset is limited in scale and representativeness. We introduce Loghub-2.0, comprising 14 datasets with an average of 3.6 million logs each. Based on these datasets, we thoroughly re-evaluate 15 state-of-the-art log parsers in a more rigorous and practical setting, offering valuable insights.

ISSTA'24 · Sep 2024 · Vienna, Austria
LILAC: Log Parsing using LLMs with Adaptive Parsing Cache

Zhihan Jiang, Jinyang Liu, Zhuangbin Chen, Yichen Li, Junjie Huang, Yintong Huo, Pinjia He, Jiazhen Gu, Michael R. Lyu

FSE'24 The ACM International Conference on the Foundations of Software Engineering.

Log parsing serves as a prerequisite for various log analysis tasks, but the performance of current syntax-based and semantic-based parsers remains unsatisfactory. Leveraging large language models (LLMs) to overcome the limitations of existing log parsers is promising; however, it presents challenges related to specialization, consistency and efficiency. To address these practical issues, we propose LILAC, the first practical Log parsIng framework using LLMs with Adaptive parsing Cache.

FSE'24 · Jul 2024 · Porto de Galinhas, Brazil
Go Static: Contextualized Logging Statement Generation

Yichen Li, Yintong Huo, Renyi Zhong, Zhihan Jiang, Jinyang Liu, Junjie Huang, Jiazhen Gu, Michael R. Lyu

FSE'24 The ACM International Conference on the Foundations of Software Engineering.

Logging practices have been extensively studied to assist developers in writing logging statements. However, existing automatic logging methods with single-method contexts face three key limitations: limited static scope, inconsistent logging styles, and missing variable type information. To tackle these limitations, we propose SCLogger, the first approach to generate contextualized logging statements using large language models with inter-method static contexts.

FSE'24 · Jul 2024 · Porto de Galinhas, Brazil
TraceMesh: Scalable and Streaming Sampling for Distributed Traces

Zhuangbin Chen, Zhihan Jiang, Yuxin Su, Michael R. Lyu, Zibin Zheng

CLOUD'24 The IEEE International Conference on Cloud Computing. 🏆 IEEE Best Paper Award

Distributed tracing is a fundamental monitoring tool for cloud systems; however, it typically captures overlapping and redundant information. Existing tail-based trace samplers fall short of considering the high-dimensional and dynamic nature of trace data. To address these practical challenges, we introduce TraceMesh, a scalable and streaming sampler for distributed traces, which adapts to evolving trace features and dynamically samples uncommon traces.

CLOUD'24 · Jul 2024 · Shenzhen, China
FaultProfIT: Hierarchical Fault Profiling of Incident Tickets in Large-scale Cloud Systems

Junjie Huang, Jinyang Liu, Zhuangbin Chen, Zhihan Jiang, Yichen Li, Jiazhen Gu, Cong Feng, Zengyin Yang, Yongqiang Yang, Michael R. Lyu

ICSE'24 The IEEE/ACM International Conference on Software Engineering, Software Engineering in Practice.

Postmortem analysis is essential for managing cloud system incidents, involving profiling incidents to classify them into unique fault patterns. Current manual approaches are labor-intensive and error-prone, resulting in only the most severe incidents being analyzed, which leads to a skewed fault pattern overview. To address these limitations, we propose FaultProfIT, an automated approach for Fault Pattern Profiling of Incident Tickets that utilizes hierarchy-guided contrastive learning.

ICSE'24 · Apr 2024 · Lisbon, Portugal
2023
Prism: Revealing Hidden Functional Clusters of Massive Instances in Cloud Systems

Jinyang Liu*, Zhihan Jiang*, Jiazhen Gu, Junjie Huang, Zhuangbin Chen, Cong Feng, Zengyin Yang, Yongqiang Yang, Michael R. Lyu (* equal contribution)

ASE'23 The IEEE/ACM International Conference on Automated Software Engineering.

To improve the observability of large-scale cloud systems, we propose to infer functional clusters, i.e., groups of instances with similar functionalities, to bridge the gap between the instance and service layers. Our pilot study demonstrates that instances with similar functionalities share similar communication and resource usage patterns. Motivated by these findings, we propose a non-intrusive solution, Prism, to reveal functional clusters in cloud systems based on communication traces and performance metrics.

ASE'23 · Sep 2023 · Kirchberg, Luxembourg