NVIDIA is a leading company in AI computing. At NVIDIA, our employees are passionate about AI, HPC, visual computing, and gaming. Our Solution Architect team focuses on bringing NVIDIA's new technologies into different industries. We help design the architecture of AI computing platforms and analyze AI and HPC applications to deliver value to our customers. This role will be instrumental in leveraging NVIDIA's cutting-edge technologies to optimize open-source and proprietary large models, create AI workflows, and support our customers in implementing advanced AI solutions.

What you'll be doing:

- Drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions
- Use the NVIDIA NIM Factory Pipeline to package optimized models (including LLM, VLM, Retriever, CV, OCR, etc.) into containers providing standardized API access
- Refine NIM tools for the community and help the community build performant NIMs
- Design and implement agentic AI tailored to customer business scenarios using NIMs
- Deliver technical projects, demos, and customer support tasks
- Provide technical support and guidance to customers, facilitating the adoption and implementation of NVIDIA technologies and products
- Collaborate with cross-functional teams to enhance and expand our AI solutions

What we need to see:

- Pursuing a Bachelor's or Master's degree in Computer Science, AI, or a related field; or a PhD candidate in ML infrastructure or data systems for ML
- Proficiency in at least one inference framework (e.g., TensorRT, ONNX Runtime, PyTorch)
- Strong programming skills in Python or C++
- Excellent problem-solving skills and the ability to troubleshoot complex technical issues
- Demonstrated ability to collaborate effectively across diverse, global teams, adapting communication styles while maintaining clear, constructive professional interactions

Ways to stand out from the crowd:

- Expertise in model optimization techniques, particularly using TensorRT
- Familiarity with disaggregated LLM inference
- CUDA optimization experience, and extensive experience designing and deploying large-scale HPC and enterprise computing systems
- Familiarity with mainstream inference engines (e.g., vLLM, SGLang)
- Experience with DevOps/MLOps tools and practices such as Docker, Git, and CI/CD