About the Team
The Applied Machine Learning (AML) - Enterprise team builds machine learning platform products on Volcano Engine. These include a cloud-native resource scheduling system that intelligently orchestrates tasks and jobs to minimize the cost of every experiment and maximize resource utilization, rich modeling tools such as customizable machine learning tasks and a web IDE, and multi-framework, high-performance model inference services. In 2021, we released this machine learning infrastructure to the public through Volcano Engine, offering more enterprises lower compute costs, a lower barrier to machine learning engineering, and deeper development of AI capabilities.

Responsibilities
- Develop and optimize the performance of Volcano Engine's large-model training and inference systems, including but not limited to model computation optimization, tuning thousand-GPU training clusters, distributed LLM inference systems, and large-scale inference traffic scheduling.
- Solve technical challenges related to high concurrency, high reliability, and high scalability, supporting Volcano Engine's daily training and inference traffic at the scale of hundreds of billions of tokens.
- Research and introduce forward-looking technical architectures for large-model training and inference, including but not limited to subgraph matching, compiler optimization, and model quantization.
- Integrate heterogeneous hardware with training and inference frameworks, including but not limited to GPUs, NPUs, and TPUs.
- Improve compute utilization across globally distributed, ultra-large-scale GPU clusters through elastic scheduling, GPU oversubscription, task orchestration, and related techniques.
- Collaborate closely with algorithm teams to jointly optimize algorithms and systems.
Qualifications
Minimum Qualifications
- Proficient in C/C++ and Python development in Linux environments, with experience in large-scale machine learning systems or in search, advertising, and recommendation systems.
- Familiar with at least one machine learning framework, such as TensorFlow, PyTorch, MXNet, or other in-house frameworks.
- Familiar with at least one large-model training or inference framework, including but not limited to vLLM, TensorRT-LLM, SGLang, and Megatron-LM.
- Strong problem-solving skills, the ability to work independently, excellent teamwork, and an outstanding ability to break down complex problems.
- Strong sense of responsibility, with good learning ability, communication skills, and self-motivation.

Preferred Qualifications
- Experience designing architectures for large-scale distributed systems.
- Strong understanding of GPU hardware architecture and the GPU software stack (such as CUDA and cuDNN), with experience in GPU performance analysis and optimization.
- Master's or PhD research background in computer systems-related fields, including distributed systems, parallel computing, programming languages and compilers, networking, or storage systems.