
Principal Software Engineer - Azure AI Inferencing

Microsoft
$139,900.00 - $274,800.00 / yr
United States, Washington, Redmond
Jan 18, 2026
Overview

The Microsoft Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI Inference platform to accelerate the research, development, and operations of AI-powered intelligent solutions at scale. This team owns the hosting, optimization, and scaling of the inference stack for all the Azure AI Foundry models, including the latest and greatest from OpenAI, Grok, DeepSeek, and other OSS models.

Do you want to join a team entrusted with serving all internal and external ML workloads, solving real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal GenAI models from OpenAI and other model providers? We are already serving billions of inferences per day across the most cutting-edge AI scenarios in the industry. You will be joining the CoreAI Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and tackling many exciting problems at the intersection of AI and Cloud.

We're looking for a Principal Software Engineer - Azure AI Inferencing to drive the design, optimization, and scaling of our inference systems. In this role, you'll lead engineering efforts to ensure our largest models run with exceptional efficiency in high-throughput, low-latency environments. You will get to work on and influence multiple levels of the AI Inference data plane stack.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.



Responsibilities
  • Lead the design and implementation of core inference infrastructure for serving frontier AI models in production.
  • Identify and drive improvements to end-to-end inference performance and efficiency of OpenAI and other state-of-the-art LLMs.
  • Lead the design and implementation of efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload.
  • Scale the platform to support growing inferencing demand and maintain high availability.
  • Deliver critical capabilities required to serve the latest and greatest GenAI models, such as GPT-5, Realtime audio, and Sora, and enable fast time to market for them.
  • Drive generic features to cater to the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies.
  • Collaborate with our partners, both internal and external.
  • Mentor engineers on distributed inference best practices.
  • Embody Microsoft's Culture and Values.


Qualifications

Required/Minimum Qualifications

  • Bachelor's degree in Computer Science or related technical field AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Golang
    • OR equivalent experience.

Other Requirements:

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred/Additional Qualifications

  • 4+ years' practical experience working on high scale, reliable online systems.
  • Technical background and foundation in software engineering principles, distributed computing and architecture.
  • Experience in real-time online services with low latency and high throughput.
  • Experience working with L7 network proxies and gateways.
  • Knowledge of network architecture and concepts (HTTP and TCP protocols, authentication, sessions, etc.).
  • Knowledge and experience in OSS, Docker, Kubernetes, C++, Golang, or equivalent programming languages.
  • Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers.
  • Ability to independently lead projects.


Software Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
