Research Intern - AI System Architecture Modeling and Performance
Microsoft
Hillsboro, Oregon, United States
Jan 19, 2026
Overview
Research Internships at Microsoft provide a dynamic environment for research careers, with a network of world-class research labs led by globally recognized scientists and engineers who pursue innovation across a range of scientific and technical disciplines to help solve complex challenges in diverse fields, including computing, healthcare, economics, and the environment.

The Azure Hardware and Systems Infrastructure organization is central to defining Microsoft's first-party Artificial Intelligence (AI) infrastructure architecture and strategy. In close partnership with sister organizations, this dynamic, fast-paced team helps define System on Chip (SoC) designs, interconnect topologies, memory hierarchies, and much more, all in the context of enabling and optimizing workload-optimized data flows for large-scale AI models. The organization plays a critical role in roadmap definition, all the way from concept to silicon to hyperscale integration.

Responsibilities

Research Interns put inquiry and theory into practice. Alongside fellow doctoral candidates and some of the world's best researchers, Research Interns learn, collaborate, and network for life. They not only advance their own careers, they also contribute to exciting research and development strides. During the 12-week internship, Research Interns are paired with mentors and are expected to collaborate with other Research Interns and researchers, present findings, and contribute to the vibrant life of the community. Research internships are available in all areas of research and are offered year-round, though they typically begin in the summer.

As a Research Intern, you will be at the forefront of hardware/software co-design and have a direct impact in answering critical questions around designing an optimized AI system and evaluating its real-world impact on Azure's supporting hyperscale infrastructure.
This role will evaluate opportunities to co-optimize central processing unit (CPU), graphics processing unit (GPU), and networking infrastructure for the Maia accelerator ecosystem. You will be expected to identify system stress points, propose novel architectural ideas, and create methodologies, using a combination of workload characterization, modeling, and benchmarking, to evaluate their effectiveness.

Qualifications

Required Qualifications
Other Requirements
Preferred Qualifications
The base pay range for this internship is USD $6,710 - $13,270 per month. A different range applies in specific work locations: within the San Francisco Bay Area and the New York City metropolitan area, the base pay range for this role is USD $8,760 - $14,360 per month. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-intern-pay

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.