Staff Software Engineer (ML Acceleration)
Job Description
TL;DR
Staff Software Engineer (ML Acceleration): Accelerate ML training iterations by profiling, optimizing, and fine-tuning models for autonomous trucking systems, with a focus on GPU performance, hardware deployment, and scalability. Implement CUDA kernels, Triton optimizations, and TensorRT/ONNX integrations to balance accuracy and speed.
Pittsburgh, PA or Remote. This position is contingent on U.S. person status, citizenship verification, and compliance with U.S. national security/export control regulations.
Company
Stack develops revolutionary AI and autonomous systems to enhance safety, reliability, and efficiency in the trucking transportation industry.
What you will do
- Analyze ML models to identify and resolve performance bottlenecks.
- Integrate open-source (OSS) tools that enable ML engineers to profile and optimize their own models.
- Deliver solutions to streamline model deployment across hardware platforms.
- Collaborate with ML researchers to balance model accuracy and speed.
- Implement optimizations using CUDA, Triton, and custom kernels.
- Promote engineering excellence and best practices across the team.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- 5+ years of experience, including GPU programming and optimization.
- Strong programming skills in C++ and Python.
- Proven experience in GPU programming, including CUDA and Triton for writing GPU kernels.
- Familiarity with PyTorch, TensorRT, ONNX model conversion/deployment, and custom GPU kernels.
- Deep understanding of GPU architectures and performance optimization.
- Strong analytical/problem-solving skills and communication abilities.
Nice to have
- Experience with autonomous vehicles (AV).