Orca Security
1 month ago
Big Data Engineer
Job description
Hi!
I'm Darya, a sourcer at ОнЗэСпот. We are currently looking for specialists for one of our clients. Could you please publish this post?
#вакансия #vacancy #job #senior #hybrid #poland #warsaw #bigdataengineer
Vacancy: Big Data Engineer
Location: Warsaw
Format: Hybrid
Type of Contract: B2B
Orca Security is a leading cloud infrastructure security platform. If you like working with cutting-edge tech and solving real security challenges, this might be your perfect match!
Key Responsibilities:
- Design, develop, and maintain scalable and robust data pipelines for processing large datasets
- Optimize ETL/ELT workflows to ensure high performance, scalability, and efficiency
- Work with structured and unstructured data from multiple sources (e.g., logs, events, databases, APIs, and streams)
Requirements:
- 5+ years of experience in designing and developing data pipelines for big data processing
- Expertise in Python, Scala, or Java for data engineering tasks
- Proficiency with big data technologies like Apache Spark, Flink, Kafka, or Hadoop
- Experience with stream processing frameworks (Kafka Streams, Apache Flink, or Spark Streaming)
- Experience working with cloud platforms such as AWS, GCP, or Azure (e.g., S3, Athena, Redshift, BigQuery, Databricks)
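Purely as an illustration of the stack listed in the requirements above (not part of the original posting): a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them as Parquet. The topic name, event schema, broker address, and paths are hypothetical placeholders.

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming pipeline.
# Assumes the spark-sql-kafka connector package is on the classpath;
# topic, schema, and paths below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Hypothetical event schema, for illustration only.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

# Read a stream of raw JSON events from Kafka (topic name is assumed).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "security-events")
       .load())

# Parse the JSON payload into typed columns.
events = (raw
          .select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Write the parsed stream to Parquet with checkpointing for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/events")
         .option("checkpointLocation", "/tmp/events_chk")
         .start())
query.awaitTermination()
```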
📩 For inquiries, contact me on Telegram:
Source: Data jobs feed