Senior Data Engineer (GCP)
Job description
TL;DR
Senior Data Engineer (GCP): designing, building, and supporting scalable data platforms in a cloud-first environment, with an emphasis on modern streaming and batch data pipelines. The role focuses on enabling advanced analytics, implementing lakehouse architectures, and managing large-scale datasets with distributed processing technologies.
Location: Must be based in Phoenix, AZ (On-site)
Company
The company provides workforce management and staffing services across various industries.
What you will do
- Design and maintain scalable data pipelines using Spark, Kafka, and cloud-native services.
- Build real-time data streaming solutions leveraging Kafka, Flink, and Spark Streaming (a minimal sketch follows this list).
- Implement and optimize data lakehouse architectures to support enterprise-level analytics.
- Develop robust data workflows using Python, PySpark, SQL, and Airflow/Cloud Composer (see the orchestration sketch after this list).
- Ensure data governance, security, and best practices across the data ecosystem.
- Collaborate with cross-functional teams to deliver high-quality data solutions in an Agile environment.
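To give a concrete flavor of the day-to-day work, here is a minimal sketch of the kind of streaming pipeline described above: reading events from Kafka with Spark Structured Streaming and landing them as Parquet in Cloud Storage. The broker address, topic name, event schema, and bucket paths are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch, not a reference implementation. Requires the
# spark-sql-kafka-0-10 package and GCS connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema; a real job would derive this from a schema registry.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream (placeholder broker and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Parse the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land the stream as Parquet in Cloud Storage (raw zone of a lakehouse);
# bucket paths are placeholders.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "gs://example-bucket/lake/events/")
    .option("checkpointLocation", "gs://example-bucket/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```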
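Likewise, a minimal sketch of what orchestration with Airflow/Cloud Composer might look like: a daily DAG submitting a PySpark batch job to Dataproc. The project ID, cluster name, and script path are hypothetical placeholders.

```python
# Minimal sketch assuming Airflow 2.4+ (use schedule_interval on older versions)
# and the Google provider package.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

# Placeholder job definition: a PySpark script stored in Cloud Storage.
PYSPARK_JOB = {
    "reference": {"project_id": "example-project"},
    "placement": {"cluster_name": "example-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://example-bucket/jobs/daily_agg.py"},
}

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit the batch job to an existing Dataproc cluster.
    run_daily_aggregation = DataprocSubmitJobOperator(
        task_id="run_daily_aggregation",
        job=PYSPARK_JOB,
        region="us-central1",
        project_id="example-project",
    )
```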
Requirements
- Must be authorized to work in the US (W2 only, no C2C).
- 5+ years of data engineering experience with Hadoop and Google Cloud.
- 3+ years of experience designing and implementing data lakehouse architectures.
- 2+ years of experience with Kafka, Flink, and Spark Streaming.
- Strong proficiency in Python, PySpark, and SQL.
- Hands-on experience with GCP (BigQuery, Dataproc, Cloud Storage).
Nice to have
- GCP Professional Data Engineer or other relevant cloud certifications.
- Experience supporting both batch and real-time analytics workloads.
- Familiarity with distributed systems design patterns.
Culture & Benefits
- Visa sponsorship available for qualified candidates.
- Long-term project engagement (12+ months).
- Collaborative, engineering-focused team environment.