
Micron Semiconductor Asia Operations Pte. Ltd.

Member of Technical Staff (MTS), Machine Learning, SMAI

10-12 Years
SGD 13,000 - 20,000 per month
  • Posted 15 hours ago

Job Description

The Smart Manufacturing and AI team at Micron Technology is looking for an ambitious Machine Learning Engineer (Member of Technical Staff | MTS).

Our mission is to deliver industry-leading machine learning, custom GenAI, and Agentic AI solutions that power Micron's leadership in the highly competitive memory solutions market. Qualified applicants will have experience with a variety of data and cloud technologies, along with extensive practice in data modeling, querying, and deploying scalable data pipelines that execute machine learning models and AI agents. You will collaborate with Data Scientists, Data Engineers, and expert users to build and deploy scalable AI/ML solutions that drive value and insight from Micron's manufacturing processes and systems.

Responsibilities include, but are not limited to:

  • Architect and execute large-scale custom model training and fine-tuning jobs (SFT, RLHF) on multi-node, multi-GPU clusters.

  • Optimize training throughput and memory efficiency using distributed training strategies (FSDP, DeepSpeed, Megatron-LM) and mixed-precision techniques (FP16/BF16).

  • Design and develop autonomous AI Agents capable of multi-step reasoning, planning, and tool execution to automate complex manufacturing workflows.

  • Implement Agentic frameworks (e.g., LangChain, LangGraph, CrewAI) to orchestrate LLM interactions with internal APIs, databases, and software tools.

  • Profile and debug GPU performance bottlenecks using tools like Nsight Systems or PyTorch Profiler to maximize hardware utilization.

  • Build and maintain data/solution pipelines that feed machine learning models and GenAI applications.

  • Design and optimize data structures in data management systems (e.g., Snowflake and Google Cloud Platform) to enable AI/ML and Agentic solutions.

  • Create and maintain CI/CD pipelines for machine learning and AI Agent solutions in the cloud.
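The agent-oriented responsibilities above (multi-step reasoning, planning, and tool execution) can be sketched in plain Python. This is an illustration of the control-flow pattern only, not Micron's implementation: the planner is a stub standing in for an LLM's function-calling output, and `lookup_yield` is a hypothetical internal tool.

```python
# Minimal tool-calling agent loop, framework-free. In practice a framework
# such as LangChain or LangGraph plus a hosted LLM would fill these roles;
# here a stub planner mimics LLM function-calling so the loop is visible.

def lookup_yield(lot_id: str) -> str:
    """Hypothetical internal tool: return a canned yield figure for a lot."""
    return f"lot {lot_id}: yield 97.2%"

TOOLS = {"lookup_yield": lookup_yield}

def stub_planner(goal: str, history: list) -> dict:
    """Stand-in for an LLM planning step: decide the next action."""
    if not history:
        return {"action": "lookup_yield", "args": {"lot_id": "A123"}}
    return {"action": "finish", "answer": history[-1]}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Plan -> act -> observe loop with a step budget."""
    history = []
    for _ in range(max_steps):
        step = stub_planner(goal, history)
        if step["action"] == "finish":
            return step["answer"]
        result = TOOLS[step["action"]](**step["args"])  # tool execution
        history.append(result)                          # observation
    return "step budget exhausted"

print(run_agent("check yield for lot A123"))  # lot A123: yield 97.2%
```

Frameworks like LangGraph formalize exactly this loop as a graph of planner and tool nodes, adding state persistence and retries around it.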

Education Qualifications:

  • Technical Degree required. Computer Science or Statistics background highly desired.

Minimum Qualifications:

  • Deep understanding of GPU architecture (memory hierarchy, tensor cores, interconnects like NVLink) and experience managing GPU resources in both cloud and on-premises environments.

  • Hands-on experience with Distributed Data Parallel (DDP), Fully Sharded Data Parallel (FSDP), and model parallelism techniques.

  • Proficiency in fine-tuning Large Language Models using PEFT techniques (LoRA, QLoRA) and optimizing inference engines (vLLM, TensorRT-LLM).

  • Experience developing GenAI applications and AI Agents using frameworks like LangChain, LangGraph, LlamaIndex, or AutoGen.

  • Proficiency with Large Language Models (LLMs), including prompt engineering, function calling/tool use, and Chain-of-Thought (CoT) reasoning.

  • Experience building and executing end-to-end ML systems that automate training, testing, and deployment of machine learning models.

  • Familiarity with machine learning frameworks such as TensorFlow and scikit-learn; PyTorch is required.

  • Software development skills and the desire to work on cutting-edge development in a cloud environment.

  • Strong scripting and programming skills in Python or Java (Python preferred).

  • Experience with continuous integration/continuous delivery (CI/CD) tools (Jenkins, Git, Docker, Kubernetes).

  • 10+ years building scalable ETL pipelines.

  • 10+ years of experience with big data processing and/or developing applications and data sources.

  • Outstanding analytical thinking, interpersonal, oral and written communication skills.

  • Ability to prioritize and meet critical project timelines in a fast-paced environment.
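The PEFT requirement above rests on a simple parameter-count argument: LoRA replaces a full weight update of shape d_out x d_in with two low-rank factors B (d_out x r) and A (r x d_in). A back-of-envelope sketch, with illustrative dimensions for one attention projection:

```python
# Why LoRA shrinks the trainable-parameter budget: compare a full weight
# update against its rank-r factorization. Dimensions are illustrative.

def full_update_params(d_in: int, d_out: int) -> int:
    """Trainable parameters for a full fine-tuning update of one matrix."""
    return d_in * d_out

def lora_update_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for the LoRA factors B (d_out x r), A (r x d_in)."""
    return r * (d_in + d_out)

full = full_update_params(4096, 4096)       # one 4096x4096 projection
lora = lora_update_params(4096, 4096, r=8)  # same layer at rank 8
print(full, lora, round(full / lora))       # 16777216 65536 256
```

At rank 8 the update trains roughly 1/256 of the parameters per adapted matrix; QLoRA pushes further by also quantizing the frozen base weights to 4 bits.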

Preferred:

  • Experience with HPC job schedulers (e.g., Slurm) or orchestrating GPU workloads on Kubernetes (Ray, KubeFlow).

  • Knowledge of lower-level optimization (CUDA programming, Triton kernels, or custom C++ extensions for PyTorch).

  • Experience with Multi-Agent Systems and orchestrating collaboration between specialized agents.

  • Deep knowledge of math, probability, statistics and algorithms.

  • Demonstrated ability to study and transform data science prototypes into production solutions.

  • Knowledge of computer vision and/or signal processing including techniques for classification and feature extraction.
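For the Slurm preference above, a multi-node GPU training job is typically submitted as a batch script like the sketch below. All values are illustrative, and `train.py` with its flags is a hypothetical entry point; only the `#SBATCH` directives, `srun`, and `torchrun` options are standard.

```shell
#!/bin/bash
#SBATCH --job-name=llm-finetune     # illustrative values throughout
#SBATCH --nodes=2                   # 2 nodes x 8 GPUs = 16-GPU job
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=1         # one torchrun launcher per node
#SBATCH --time=12:00:00

# torchrun spawns 8 workers per node and rendezvous on the first node.
srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc_per_node=8 \
  --rdzv_backend=c10d \
  --rdzv_endpoint="$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1):29500" \
  train.py --strategy fsdp --precision bf16
```

On Kubernetes the same workload would instead be expressed as a Ray cluster or a Kubeflow training job with GPU resource requests.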

Job ID: 145615977
