What to Expect

The AI inference co-design team's goal is to take research models and make them run efficiently on our AI-ASIC to power real-time inference for the Autopilot and Optimus programs. This unique role lies at the intersection of AI research, compiler development, kernel optimization, math, and HW design. You will work extensively with AI engineers to come up with novel techniques to quantize models, improve precision, and explore non-standard, alternative architectures. You will develop optimized micro kernels using a cutting-edge MLIR compiler and solve the performance bottlenecks that stand between current models and the real-time latency required for self-driving and humanoid robots. You will work closely with the HW team and bring state-of-the-art HW architecture techniques to our next-generation HW SoCs.
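For candidates newer to this space, "quantizing models" here generally means mapping floating-point weights and activations to low-bit integers to cut memory and compute at some cost in precision. The snippet below is a minimal, illustrative sketch of symmetric per-tensor int8 quantization in plain NumPy; the function names and the NumPy-only setup are assumptions for illustration and do not describe Tesla's actual toolchain or MLIR flow.

  # Illustrative sketch only: symmetric per-tensor int8 quantization in NumPy.
  import numpy as np

  def quantize_int8(x):
      """Map a float tensor to int8 with a single symmetric scale (illustrative only)."""
      max_abs = float(np.max(np.abs(x)))
      scale = max_abs / 127.0 if max_abs > 0 else 1.0
      q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
      return q, scale

  def dequantize(q, scale):
      """Recover an approximate float tensor from the int8 values."""
      return q.astype(np.float32) * scale

  if __name__ == "__main__":
      w = np.random.randn(1024).astype(np.float32)             # stand-in for model weights
      q, scale = quantize_int8(w)
      err = float(np.mean(np.abs(w - dequantize(q, scale))))   # precision cost of ~4x smaller weights
      print(f"scale={scale:.6f}  mean abs error={err:.6f}")

The gap between the original and dequantized values is the kind of precision/performance tradeoff this role weighs when deciding how aggressively to compress a model.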
 
What You'll Do

- Research and implement state-of-the-art machine learning techniques to achieve high performance on our edge hardware
- Optimize bottlenecks in the inference flow, make precision/performance tradeoff decisions, and devise novel techniques to improve hardware utilization and throughput
- Implement and improve highly performant micro kernels for Tesla's AI ASIC
- Work with AI teams to design edge-friendly neural network architectures
- Collect extensive performance benchmarks (latency, throughput, power; a minimal latency-timing sketch follows this list) and work with HW teams to shape the next generation of inference hardware, balancing performance with versatility
- Experiment with numerical methods and alternative architectures
- Collaborate on the compiler infrastructure for programmability and performance
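As a concrete (and deliberately simplified) picture of the benchmarking item above, the sketch below times a stand-in workload and reports latency percentiles. The helper name, warm-up scheme, and placeholder model are assumptions for illustration only, not Tesla's measurement infrastructure, which also covers throughput and power.

  # Illustrative sketch only: measuring inference latency percentiles for a callable workload.
  import time
  import numpy as np

  def benchmark_latency(run_inference, n_warmup=10, n_iters=100):
      """Time repeated calls and report mean / p50 / p99 latency in milliseconds."""
      for _ in range(n_warmup):                 # warm caches/JITs before measuring
          run_inference()
      samples = []
      for _ in range(n_iters):
          start = time.perf_counter()
          run_inference()
          samples.append((time.perf_counter() - start) * 1e3)
      samples = np.asarray(samples)
      return {"mean_ms": float(samples.mean()),
              "p50_ms": float(np.percentile(samples, 50)),
              "p99_ms": float(np.percentile(samples, 99))}

  if __name__ == "__main__":
      x = np.random.randn(1, 3, 224, 224).astype(np.float32)
      placeholder_model = lambda: np.tanh(x).sum()   # stand-in workload, not a real network
      print(benchmark_latency(placeholder_model))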
 
What You'll Bring

- Degree in Engineering, Computer Science, or equivalent experience and evidence of exceptional ability
- Proficiency with Python and C++, including modern C++ (14/17/20)
- Experience with AI networks such as CNNs, transformers, and diffusion model architectures, and their performance characteristics
- Understanding of GPU, SIMD, multithreading, and/or other accelerators with vectorized instructions
- Exposure to computer architecture and chip architecture/micro-architecture
- Specialized experience in one or more of the following machine learning/deep learning domains: model compression, hardware-aware model optimization, hardware accelerator architecture, GPU/ASIC architecture, machine learning compilers, high performance computing, performance optimization, numerics, and SW/HW co-design
Compensation and Benefits

Benefits

Along with competitive pay, as a full-time Tesla employee, you are eligible for the following benefits from day 1 of hire:
- Aetna PPO and HSA plans: 2 medical plan options with $0 payroll deduction
- Family-building, fertility, adoption, and surrogacy benefits
- Dental (including orthodontic coverage) and vision plans, both with options that have a $0 paycheck contribution
- Company-paid Health Savings Account (HSA) contribution when enrolled in the High Deductible Aetna medical plan with HSA
- Healthcare and Dependent Care Flexible Spending Accounts (FSA)
- 401(k) with employer match, Employee Stock Purchase Plans, and other financial benefits
- Company-paid Basic Life, AD&D, short-term and long-term disability insurance
- Employee Assistance Program
- Sick and Vacation time (Flex time for salary positions), and Paid Holidays
- Back-up childcare and parenting support resources
- Voluntary benefits including critical illness, hospital indemnity, accident insurance, theft & legal services, and pet insurance
- Weight Loss and Tobacco Cessation Programs
- Tesla Babies program
- Commuter benefits
- Employee discounts and perks program

Expected Compensation

$132,000 - $330,000/annual salary + cash and stock awards + benefits
Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.