Yue Cheng

Associate Professor at the University of Virginia


mrz7dp@virginia.edu

SDS & CS @ UVA

Data Systems Researcher

I am an Associate Professor of Data Science and Computer Science at the University of Virginia. My research covers a range of topics including distributed systems, serverless and cloud computing, storage systems, operating systems, and high-performance computing. My current research focuses on designing scalable, high-performance, and easy-to-use computer systems that manage and process huge volumes of data.

Currently I am working on: (1) Serverless and FaaS: improving serverless computing using an end-to-end approach that cuts across the entire software-hardware stack: (stateful) applications, middleware, platforms, and lower-level OS/HW; (2) Data Reduction: rethinking data reduction techniques for large data-intensive applications; (3) Sys4ML: building better (computing and storage) systems for (distributed) ML applications; and (4) ML4Sys: improving systems software and infrastructure management using learned or data-driven approaches.

I am the recipient of an NSF CAREER Award (2021), an Amazon Research Award (2021), a Meta Research Award (2022), the IEEE CS TCHPC Early Career Researchers Award for Excellence in HPC (2022), and a Samsung GRO Award (2023). Prior to joining UVA, I was an Assistant Professor of Computer Science at George Mason University from 2017 to 2022. I received my Ph.D. in Computer Science from Virginia Tech, working with Dr. Ali R. Butt. During my Ph.D., I spent two summers at IBM Research Almaden in 2013 and 2014, and six months at the Dell EMC Princeton office in 2015.

selected projects

Most of my projects are open-source and available on our group’s GitHub page.

  • InfiniStore: Storing large and small objects on a dynamic fleet of serverless functions at only 3% of ElastiCache’s cost, without sacrificing performance or availability. A toy sketch of the memory-pooling idea follows the links below.
    [ASPLOS’23]: [GitHub] – [VLDB’23]: [GitHub] – [FAST’20]: [GitHub]
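
    A minimal sketch of the memory-pooling idea, under simplifying assumptions: plain Python dicts stand in for warm function instances, placement is round-robin, and erasure coding and fault tolerance are omitted. All names here are hypothetical, not the actual InfiniStore/InfiniCache code.

    ```python
    # Toy illustration: pool the memory of many serverless function instances
    # into one logical object store by chunking objects across the fleet.
    # A real deployment would invoke cloud functions (e.g., AWS Lambda) and
    # erasure-code chunks for availability.

    CHUNK_SIZE = 4  # bytes per chunk; tiny on purpose for the demo

    class FunctionInstance:
        """Stands in for one warm serverless function caching chunks in RAM."""
        def __init__(self, fn_id):
            self.fn_id = fn_id
            self.memory = {}

        def put_chunk(self, key, idx, chunk):
            self.memory[(key, idx)] = chunk

        def get_chunk(self, key, idx):
            return self.memory.get((key, idx))

    class ServerlessObjectStore:
        def __init__(self, num_functions=8):
            self.fleet = [FunctionInstance(i) for i in range(num_functions)]

        def put(self, key, data):
            chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
            for idx, chunk in enumerate(chunks):
                # Round-robin placement: chunk idx -> function (idx mod fleet size).
                self.fleet[idx % len(self.fleet)].put_chunk(key, idx, chunk)
            return len(chunks)

        def get(self, key, num_chunks):
            parts = [self.fleet[i % len(self.fleet)].get_chunk(key, i)
                     for i in range(num_chunks)]
            return b"".join(parts)

    store = ServerlessObjectStore()
    n = store.put("obj1", b"hello, serverless storage!")
    assert store.get("obj1", n) == b"hello, serverless storage!"
    ```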

  • Wukong: Scaling out Python parallel programs (e.g., Dask applications) on FaaS without worrying about tedious cluster management. Wukong uses a decentralized scheduling technique that delegates resource orchestration to each individual serverless function, enabling high elasticity and scalability (see the sketch after the links below).
    [SoCC’20] [PDSW’19]: [GitHub]
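
    The decentralized-scheduling idea in miniature, with threads standing in for serverless function invocations (the DAG and task names are made up; this is not the actual Wukong code): each finishing task, rather than a central scheduler, discovers which downstream tasks just became ready and invokes them directly.

    ```python
    import threading

    # Toy task graph: name -> (function, dependencies).
    DAG = {
        "a": (lambda inp: 1, []),
        "b": (lambda inp: 2, []),
        "c": (lambda inp: inp["a"] + inp["b"], ["a", "b"]),
        "d": (lambda inp: inp["c"] * 10, ["c"]),
    }

    results, claimed = {}, set()
    lock = threading.Lock()
    all_done = threading.Event()

    def run_task(name):
        fn, deps = DAG[name]
        out = fn({d: results[d] for d in deps})
        ready = []
        with lock:
            results[name] = out
            if len(results) == len(DAG):
                all_done.set()
            # Decentralized step: the finishing task itself finds the
            # dependents whose inputs are now all available.
            for t, (_, tdeps) in DAG.items():
                if t not in claimed and all(d in results for d in tdeps):
                    claimed.add(t)
                    ready.append(t)
        for t in ready:  # "invoke" each newly-ready task
            threading.Thread(target=run_task, args=(t,)).start()

    # Kick off the source tasks; everything downstream self-schedules.
    roots = [t for t, (_, deps) in DAG.items() if not deps]
    claimed.update(roots)
    for t in roots:
        threading.Thread(target=run_task, args=(t,)).start()
    all_done.wait()
    print(results["d"])  # 30
    ```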

  • FaaSNet: A highly scalable container provisioning framework that can provision thousands of 10+GB serverless function containers within just a few seconds. FaaSNet is currently deployed at Alibaba Function Compute. A sketch of the tree-distribution intuition follows the links below.
    [ATC’21]: [GitHub] [Alibaba Cloud Blog]
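
    A back-of-the-envelope sketch of why tree-based image distribution scales (illustrative only, not FaaSNet's actual algorithm, which streams image layers and balances function trees dynamically): once a VM holds the image, it can feed a few children per round, so coverage grows geometrically instead of serially through one registry.

    ```python
    def provisioning_rounds(num_vms, fanout=2):
        """Transfer rounds needed when each provisioned VM forwards the
        container image to `fanout` children per round (tree distribution)."""
        rounds, provisioned = 1, 1  # round 1: the root pulls from the registry
        while provisioned < num_vms:
            provisioned += provisioned * fanout  # every ready VM feeds `fanout` more
            rounds += 1
        return rounds

    for n in (10, 100, 1000):
        print(f"{n} VMs: {provisioning_rounds(n)} tree rounds "
              f"vs {n} sequential registry pulls")
    ```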

  • SFS: Linux CFS is not ideal for short-lived serverless function workloads; SFS instead optimizes turnaround time for these transient function jobs (a back-of-the-envelope comparison follows the links below).
    [SC’22]: [GitHub]
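
    A quick arithmetic check of why fair sharing hurts short jobs (this illustrates the motivation, not SFS's actual algorithm, which also protects long-running functions): given n equal-length jobs on one core, perfect processor sharing finishes them all at about n*t, while run-to-completion finishes them at t, 2t, ..., n*t, roughly halving the average turnaround time.

    ```python
    def avg_turnaround_fair(n, t):
        # Perfect processor sharing: n equal jobs all finish at ~n*t.
        return n * t

    def avg_turnaround_run_to_completion(n, t):
        # Job i finishes at i*t, so the average is (n + 1) / 2 * t.
        return sum(i * t for i in range(1, n + 1)) / n

    n, t = 100, 1.0  # 100 one-second function invocations, one core
    print(avg_turnaround_fair(n, t))               # 100.0
    print(avg_turnaround_run_to_completion(n, t))  # 50.5
    ```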

  • SHADE: A common practice in deep learning training is to randomly shuffle all training samples epoch by epoch. With SHADE, you can cache the most important training samples without losing training quality (see the sketch after the links below).
    [FAST’23]: [GitHub]
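
    A minimal sketch of importance-aware caching in the spirit of SHADE (hypothetical names, not the paper's implementation): track a per-sample importance score, such as its most recent training loss, and when the cache is full evict the least important sample rather than a random or least-recently-used one.

    ```python
    class ImportanceCache:
        """Toy cache that retains the samples with the highest importance
        scores (e.g., recent per-sample training loss)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = {}   # sample_id -> data
            self.scores = {}  # sample_id -> importance

        def put(self, sample_id, data, importance):
            if sample_id not in self.store and len(self.store) >= self.capacity:
                # Evict the currently least-important cached sample.
                victim = min(self.scores, key=self.scores.get)
                if self.scores[victim] >= importance:
                    return  # the newcomer is no more important; skip caching
                del self.store[victim], self.scores[victim]
            self.store[sample_id] = data
            self.scores[sample_id] = importance

        def get(self, sample_id):
            return self.store.get(sample_id)

    cache = ImportanceCache(capacity=2)
    cache.put("s1", b"...", importance=0.9)  # high loss -> keep hot
    cache.put("s2", b"...", importance=0.1)
    cache.put("s3", b"...", importance=0.5)  # evicts s2, the least important
    assert cache.get("s2") is None and cache.get("s3") is not None
    ```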

  • DIGEST: Scaling GNN training using disaggregated storage.
    [arXiv]: [code]

news

Apr 2024 Excited to receive an NSF OAC Core grant on building a distributed graph learning cyberinfrastructure for large spatiotemporal prediction (w/ Liang Zhao from Emory). Thanks, NSF!
Mar 2024 Congrats to Ruizhe on the IPFS analysis work accepted to SIGMETRICS 2024! This work answers questions about the accessibility, content, and performance of IPFS.
Mar 2024 Congrats to Zhaoyuan and Zirui on their work accepted to VLDB 2024! In this work, Zhaoyuan analyzed a large dataset of real-world pre-trained ML models collected from Hugging Face and, based on this analysis, designed a new storage compression method that reduces the storage footprint of pre-trained models at scale.
Feb 2024 Congrats to Rui on his work accepted to VLDB 2024! In this work, Rui systematically studied the algorithmic complexity vulnerabilities of dynamic learned indexes.
Jan 2024 Check out our latest survey on resource-efficient LLMs.
Oct 2023 🏆 Excited to receive a Samsung GRO 2023 Award on New Storage for Large ML Training (w/ Ali Anwar from UMN). Thanks, Samsung Advanced Institute of Technology and Samsung Memory Solutions Lab, for the generous support of our research!
Oct 2023 Serving as the general co-chair of ACM HotStorage’24. Consider submitting your exciting early ideas!
Jun 2023 🎓 My first Ph.D. student Jingyuan Zhang successfully defended his Ph.D. dissertation. Congratulations, Dr. Zhang! Jingyuan will be joining the cloud-native infrastructure team @ ByteDance (San Jose, CA).
Apr 2023 Congrats to Ben, Runzhou, and Jingyuan on the acceptance of λFS to ASPLOS 2023! λFS marks another significant milestone in our serverless storage project series. Don’t forget to check out Episode I - InfiniCache, Episode II - InfiniStore, and our latest work, Episode III - λFS.
Feb 2023 Congrats to Jingyuan, Ben, and the team on the acceptance of InfiniStore to VLDB 2023!
Dec 2022 Congrats to Redwan, Ahmad, and Yuqi on their paper on deep learning I/O caching accepted to FAST 2023!
Sep 2022 🏆 I am honored to be selected for the 2022 IEEE CS TCHPC Early Career Researchers Award for Excellence in High Performance Computing.
Sep 2022 Congrats to Zhaoyuan on his paper accepted to DRBSD-8 co-located with SC 2022!
Sep 2022 🏆 Excited to receive a Meta Research Award for AI System Hardware/Software Codesign. Thanks, Meta Research!
Aug 2022 In Fall ‘22, I am joining the School of Data Science and the Department of Computer Science at the University of Virginia.
Jul 2022 🏅 SFS is nominated as a Best Student Paper Award Finalist at SC 2022! Congrats to Yuqi!
Jun 2022 Congrats to Yuqi on his paper on serverless function scheduling accepted to SC 2022!
May 2022 This summer my students will intern at MSR (Ben Carver), ByteDance (Yuqi Fu, Jingyuan Zhang), and Argonne National Lab (Zhaoyuan Su)! Congrats!
May 2022 🏆 Thrilled to receive an Outstanding Teaching Award from CS @ Mason!
Aug 2021 Congrats to Li and Haoliang on rKube accepted to SoCC 2021!
Aug 2021 Our collaborative FMSG grant was funded by NSF (with Jia Liu @ Auburn). Thanks, NSF!
Jun 2021 Congrats to Zheng on FedAT accepted to SC 2021!
Apr 2021 Congrats to Ao on FaaSNet accepted to USENIX ATC 2021!
Mar 2021 Honored to receive a gift from Adobe Research for our work on serverless computing! Thanks, Adobe!
Feb 2021 Thrilled to receive an NSF CAREER Award for my work on building serverless cloud storage infrastructure. Thanks, NSF!
Oct 2020 Excited to receive an Amazon Research Award with Liang Zhao from Emory!
Aug 2020 Congrats to Junxiang and Zheng on their paper getting accepted to IEEE ICDM 2020!
Aug 2020 Congrats to Ben, Jingyuan, and Ao on Wukong getting accepted to ACM SoCC 2020! Wukong is a super-fast serverless parallel computing framework built atop AWS Lambda that achieves up to 68X speedup over state-of-the-art serverless parallel processing frameworks. The Wukong project is online; we are happy to accept contributions!
Jul 2020 Two projects were funded by NSF. With the new MRI grant, we will build a new HPC infrastructure to support the growing computing needs of Mason users. With an OAC grant, we will build a new model-parallel deep learning training infrastructure. Thanks, NSF!
Mar 2020 Congrats to Zheng, Ahsan, and Syed on TiFL getting accepted to ACM HPDC 2020!
Dec 2019 Congrats to Ao, Jingyuan, and Xiaolong on InfiniCache getting accepted to USENIX FAST 2020! InfiniCache is a first-of-its-kind, cost-effective object cache built atop ephemeral cloud functions. InfiniCache is 31-96x cheaper than existing cloud cache services (e.g., AWS ElastiCache) while offering the same or better performance. Fork InfiniCache on GitHub.

selected/recent publications

  1. SIGMETRICS’24
    A Closer Look into IPFS: Accessibility, Content, and Performance
    Ruizhe Shi, Ruizhi Cheng, Bo Han, Yue Cheng, and Songqing Chen
    In ACM SIGMETRICS / IFIP Performance 2024
  2. VLDB’24
    Everything You Always Wanted to Know About Storage Compressibility of Pre-Trained ML Models but Were Afraid to Ask
    Zhaoyuan Su, Ammar Ahmed, Zirui Wang, Ali Anwar, and Yue Cheng
    In 50th International Conference on Very Large Data Bases 2024
  3. VLDB’24
    Algorithmic Complexity Attacks on Dynamic Learned Indexes
    Rui Yang, Evgenios M. Kornaropoulos, and Yue Cheng
    In 50th International Conference on Very Large Data Bases 2024
  4. arXiv
    Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models
    Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, and Liang Zhao
    arXiv preprint, 2024
  5. ASPLOS’23
    λFS: A Scalable and Elastic Distributed File System Metadata Service using Serverless Functions
    Benjamin Carver, Runzhou Han, Jingyuan Zhang, Mai Zheng, and Yue Cheng
    In 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems 2023
  6. VLDB’23
    InfiniStore: Elastic Serverless Cloud Storage
    Jingyuan Zhang, Ao Wang, Xiaolong Ma, Benjamin Carver, Nicholas John Newman, Ali Anwar, Lukas Rupprecht, Dimitrios Skourtis, Vasily Tarasov, Feng Yan, and Yue Cheng
    In 49th International Conference on Very Large Data Bases 2023
  7. USENIX FAST’23
    SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training
    Redwan Ibne Seraj Khan, Ahmad Hossein Yazdani, Yuqi Fu, Arnab K. Paul, Bo Ji, Xun Jian, Yue Cheng, and Ali R. Butt
    In 21st USENIX Conference on File and Storage Technologies (FAST 23) 2023
  8. SC’22
    SFS: Smart OS Scheduling for Serverless Functions
    Yuqi Fu, Li Liu, Haoliang Wang, Yue Cheng, and Songqing Chen
    In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 2022
  9. USENIX ATC’21
    FaaSNet: Scalable and Fast Provisioning of Custom Serverless Container Runtimes at Alibaba Cloud Function Compute
    Ao Wang, Shuai Chang, Huangshi Tian, Hongqi Wang, Haoran Yang, Huiba Li, Rui Du, and Yue Cheng
    In 2021 USENIX Annual Technical Conference (USENIX ATC 21) 2021
  10. SoCC’20
    Wukong: A Scalable and Locality-Enhanced Framework for Serverless Parallel Computing
    Benjamin Carver, Jingyuan Zhang, Ao Wang, Ali Anwar, Panruo Wu, and Yue Cheng
    In Proceedings of the 11th ACM Symposium on Cloud Computing 2020
  11. HPDC’20
    TiFL: A Tier-Based Federated Learning System
    Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, and Yue Cheng
    In Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing 2020
  12. USENIX FAST’20
    InfiniCache: Exploiting Ephemeral Serverless Functions to Build a Cost-Effective Memory Cache
    Ao Wang, Jingyuan Zhang, Xiaolong Ma, Ali Anwar, Lukas Rupprecht, Dimitrios Skourtis, Vasily Tarasov, Feng Yan, and Yue Cheng
    In 18th USENIX Conference on File and Storage Technologies (FAST 20) 2020
  13. USENIX FAST’18
    Improving Docker Registry Design Based on Production Workload Analysis
    Ali Anwar, Mohamed Mohamed, Vasily Tarasov, Michael Littley, Lukas Rupprecht, Yue Cheng, Nannan Zhao, Dimitrios Skourtis, Amit S. Warke, Heiko Ludwig, Dean Hildebrand, and Ali R. Butt
    In 16th USENIX Conference on File and Storage Technologies (FAST 18) 2018
  14. USENIX ATC’16
    Erasing Belady’s Limitations: In Search of Flash Cache Offline Optimality
    Yue Cheng, Fred Douglis, Philip Shilane, Grant Wallace, Peter Desnoyers, and Kai Li
    In 2016 USENIX Annual Technical Conference (USENIX ATC 16) 2016