
Member of Technical Staff, HPC Operations Engineering Manager - MAI SuperIntelligence Team

Microsoft
$139,900.00 - $274,800.00 / yr
United States, California, Mountain View
Feb 12, 2026
Overview

Microsoft AI is seeking an experienced HPC Operations Engineering Manager to join our Infrastructure Team. In this role, you'll lead a team of Site Reliability Engineers who blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You'll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.

Microsoft Superintelligence Team

Microsoft Superintelligence Team's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. This role is part of Microsoft AI's Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence - ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.

Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society - advancing science, education, and global well-being. We're also fortunate to partner with incredible product teams giving our models the chance to reach billions of users and create immense positive impact. If you're a brilliant, highly-ambitious and low ego individual, you'll fit right in - come and join us as we work on our next generation of models!

This position is based in Mountain View, CA; candidates must be local to the San Francisco Bay Area and work in the office 4 days a week.



Responsibilities
  • Team leadership: Lead a team of experienced SREs to ensure uptime, resiliency and fault tolerance of AI model training and inference systems.
  • Observability: Design and help maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.
  • Automation & Tooling: Lead building of automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.
  • Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.
  • Security & Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.
  • Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.


Qualifications

Required Qualifications:

  • Bachelor's Degree in Computer Science or related technical field AND 8+ years technical engineering experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering leadership roles, AND 8+ years experience with Kubernetes, Docker, and container orchestration, AND 8+ years experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code, AND 6+ years experience with programming/scripting languages such as Python, Go, or Bash
    • OR equivalent experience

Preferred Qualifications:

  • Master's Degree in Computer Science or related technical field AND 12+ years technical engineering experience AND 10+ years experience with Kubernetes, Docker, and container orchestration, AND 10+ years experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code
    • OR equivalent experience
  • 6+ years people management experience
  • Experience with monitoring & observability tools (Grafana, Datadog, OpenTelemetry, etc.)
  • Experience running large-scale GPU clusters for ML/AI workloads
  • Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators)
  • Knowledge of CI/CD pipelines for inference and ML model deployment
  • Solid knowledge of distributed systems, networking, and storage
  • Familiarity with ML training/inference pipelines
  • Background in capacity planning & cost optimization for GPU-heavy environments


Software Engineering M5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
