TL;DR
Senior Big Data Engineer (Data Science/ML): Design, build, and maintain scalable data pipelines and ETL/ELT workflows using Python, PySpark, dbt, and AWS services, with an emphasis on data ingestion, transformation, orchestration, and infrastructure automation. The role centers on developing high-quality dbt models, implementing CI/CD, and ensuring data quality monitoring in a cross-functional agile environment.
Location: Remote (service region: Eastern Europe)
Company
hirify.global is a global digital product engineering company with 18,000+ experts across 39 countries, focused on building inspiring products, services, and experiences at scale.
What you will do
- Design, build, and maintain scalable data pipelines using Python/PySpark and AWS services.
- Develop, test, and deploy dbt models with unit tests and clear documentation.
- Build and integrate data connectors (FTP, API, JDBC, etc.).
- Implement data ingestion, transformation, and orchestration workflows using AWS Glue, AppFlow, Lake Formation, Transfer Family, MWAA, or Argo.
- Use AWS CDK for infrastructure-as-code and deployment automation.
- Apply best practices for CI/CD, version control, testing, and data quality monitoring.
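As a flavor of the pipeline work described above, here is a minimal sketch of an ETL-style transformation with a simple data-quality gate, using only the Python standard library. The field names and validation rules are hypothetical, purely for illustration; in practice this logic would live in PySpark jobs or dbt tests:

```python
from dataclasses import dataclass

# Hypothetical raw records, as they might arrive from an API or FTP connector.
RAW_ROWS = [
    {"user_id": "42", "amount": "19.99", "currency": "USD"},
    {"user_id": "", "amount": "5.00", "currency": "USD"},   # missing key -> rejected
    {"user_id": "7", "amount": "oops", "currency": "EUR"},  # bad amount -> rejected
]

@dataclass
class CleanRow:
    user_id: int
    amount_cents: int
    currency: str

def transform(rows):
    """Split raw rows into clean, typed records and rejects (a data-quality gate)."""
    clean, rejects = [], []
    for row in rows:
        try:
            if not row["user_id"]:
                raise ValueError("missing user_id")
            clean.append(CleanRow(
                user_id=int(row["user_id"]),
                amount_cents=round(float(row["amount"]) * 100),
                currency=row["currency"],
            ))
        except (KeyError, ValueError):
            rejects.append(row)
    return clean, rejects

clean, rejects = transform(RAW_ROWS)
print(f"clean={len(clean)} rejected={len(rejects)}")  # clean=1 rejected=2
```

Routing failed rows to a reject set rather than dropping them silently is what makes downstream quality monitoring possible: the reject count can be emitted as a metric and alerted on.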
Requirements
- Service region: Eastern Europe
- Strong hands-on experience with dbt, Python, and/or PySpark for ETL/ELT development.
- Expertise in building connectors via FTP, API, JDBC, or other integration patterns.
- Solid experience with AWS data services and workflow orchestration tools.
- Experience with Git, CI/CD tools, and automated testing practices.
- Good communication skills and ability to work in cross-functional agile teams.