TL;DR
Staff Data Engineer (Data Platform): Responsible for building and maintaining key parts of the foundational data services platform, including next-generation reporting and analytics, using cloud technologies across Amazon Web Services and Snowflake. The focus is on continually improving data pipelines for efficiency, throughput, and data quality, and on solving complex problems with technical leadership.
Location: This role requires attending our local office for part of the week. The specific in-office schedule is determined by the hiring manager.
Salary: zł278,000.00-zł418,000.00
Company
hirify.global's software was built to bring a sense of calm to the chaotic world of customer service.
What you will do
- Publish well-written and tested code to production.
- Participate in designing and developing key features and functionality of our data platform.
- Investigate production issues and fine-tune our data pipelines.
- Continually improve data pipelines for high efficiency, throughput, and quality of data.
- Build a platform that will be the foundation for our customer-facing reporting features, our machine learning initiatives, and internal product analytics.
- Collaborate with team members on researching and brainstorming different solutions for technical challenges we face.
Requirements
- 4+ years of software development/data engineering experience with 3+ years of hands-on experience building scalable data platforms and/or reliable data pipelines.
- Experience working with Snowflake and dbt.
- Proficient in at least one of Java, Python, or Scala.
- Demonstrated ability to design scalable, fault-tolerant software systems.
- Experience with AWS or related cloud technologies.
- Experience developing and operating high-volume, high-availability environments.
- Working understanding of Kubernetes infrastructure and security best practices.
- Ability to work effectively in a varied, occasionally interrupt-driven environment with geographically distributed teams and customers.
- Solid working understanding of data modeling.
- Proven track record with cloud and data-related technologies.
Nice to have
- Experience writing ETL jobs to help address various data engineering challenges.
- Strong understanding of build and deployment tools.
- Familiarity with Kafka, Flink, or Spark, plus working knowledge of at least one job-scheduling tool: Airflow, Celery, or AWS Step Functions.
Culture & Benefits
- Hybrid work is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration.
- Flexibility to work remotely for part of the week.
- Equal opportunity employer.