Publications
Note: * indicates co-first author.
- Chunlin Tian, Xinpeng Qin*, Li Li, “GreenLLM: Towards Efficient Large Language Model via Energy-Aware Pruning” (IWQoS Poster 2024, Accepted) (CCF-B)
- Chunlin Tian, Zhan Shi, Xinpeng Qin, Li Li, Cheng-zhong Xu, “Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning” (ICML 2024, Accepted) (CCF-A)
- Chunlin Tian, Xinpeng Qin*, Li Li, et al., “AutoPruner: Enable Efficient Generative Large Language Models Adaptive Pruning” (SIGKDD 2025, Under Review) (CCF-A)
- Chunlin Tian, Xinpeng Qin*, Li Li, Cheng-zhong Xu, et al., “EdgeLLM: Automating LLMs Porting for On-Device Inference at the Edge” (HPCA 2025, Under Review) (CCF-A)
- Chunlin Tian, Shuaihang Zhong, KaHou Tam, Xinpeng Qin, Li Li, Cheng-zhong Xu, et al., “FedProxy: Federated Fine-tuning of Inaccessible LLMs via Heterogeneous Proxy Models on the Edge” (NSDI 2025, Under Review) (CCF-A)