TL;DR
Staff Backend Developer (AI): Designing and implementing scalable backend systems for an AI-powered language-learning platform, with an emphasis on high-performance services, real-time audio/text processing, and low latency. Focus on architecting fault-tolerant microservices, optimizing system performance, and integrating seamlessly with AI/ML components and client applications.
Location: Hybrid in Tel Aviv
Company
hirify.global is revolutionizing education by building the first personal AI English tutor to break down language barriers.
What you will do
- Design, develop, and maintain high-performance backend services and APIs (REST, gRPC) that power AI-driven conversational experiences.
- Build and optimize asynchronous Python applications capable of handling real-time audio/text processing at scale.
- Ensure seamless, low-latency integration between mobile and web clients and the AI backend platform.
- Drive technical decisions on system architecture, focusing on low latency and fault tolerance.
- Collaborate with AI/ML, mobile, DevOps, and product teams to deliver end-to-end solutions.
- Establish engineering best practices and mentor team members to build a world-class engineering culture.
Requirements
- Minimum of 6 years of diverse Python development experience.
- Deep expertise in Python concurrency and execution models (WSGI, ASGI, asyncio, multiprocessing, threading, GIL).
- Proven track record building production-ready asynchronous Python applications serving high-volume traffic.
- Strong experience with Pydantic and FastAPI in production environments.
- Expertise in designing and implementing RESTful APIs and gRPC services with a strong emphasis on versioning and backward/forward compatibility.
- Demonstrated ability to solve complex performance, scalability, and workload distribution challenges.
- Proficiency with relational and non-relational database solutions (PostgreSQL, Redis, DynamoDB, MongoDB), including query optimization and data modeling.
- Experience with event-driven architectures and message queuing systems (Kafka, RabbitMQ, AWS SQS).
- Hands-on experience with Docker, a solid understanding of CI/CD pipelines and methodologies, and experience with Kubernetes/AWS-based deployments.
- Strong proficiency with AWS cloud services and cloud-native architectures.
Nice to have
- Proficiency in additional backend programming languages (Go, Rust, Java, Scala, Kotlin).
- Experience with audio/video streaming protocols and real-time communication systems.
- Familiarity with LLM integration libraries (LiteLLM, LangChain, Guidance, Instructor) and AI model serving frameworks.
- Background in building data-driven applications, pipelines, or ETL processes using frameworks like Apache Spark / Flink and orchestration tools such as Airflow or Dagster.