Author of the publication

Please choose the person to whom this publication should be assigned.

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to a name to display some publications already assigned to that person.


Other publications of authors with the same name

Scalable and Efficient MoE Training for Multitask Multilingual Models. CoRR, (2021)
Fast LSTM Inference by Dynamic Decomposition on Cloud Systems. ICDM, page 748-757. IEEE, (2019)
ZeRO-Offload: Democratizing Billion-Scale Model Training. USENIX Annual Technical Conference, page 551-564. USENIX Association, (2021)
SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement. NeurIPS, page 20531-20544. (2021)
Optimizing the Four-Index Integral Transform Using Data Movement Lower Bounds Analysis. PPoPP, page 327-340. ACM, (2017)
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. CoRR, (2022)
DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention. CoRR, (2023)
ZeRO: memory optimizations toward training trillion parameter models. SC, page 20. IEEE/ACM, (2020)
1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed. ICML, volume 139 of Proceedings of Machine Learning Research, page 10118-10129. PMLR, (2021)
Accelerating Large Scale Deep Learning Inference through DeepCPU at Microsoft. OpML, page 5-7. USENIX Association, (2019)