
Senior Data Engineer

$156,000 - $187,000
Work format
Remote (United States/Canada/Europe only)
Employment type
Full-time
Seniority
Senior
English
B2
Country
US/Canada

Job description

Senior Data Engineer

Company

Consensys

Conditions

Seniority: Senior
Salary: $156K - $187K
Location: United States, Canada, LATAM, EMEA (remote)
Employment: Full time

Skills

Data Quality, Redshift, Metadata Management, Cube, Trusted Execution Environment, Superset, Preset, Airbyte, DataHub, Cube.dev, BigQuery, Segment, Airflow, Snowflake, dbt, Data Model, S3, EMR, GitHub Actions, Terraform, Data Pipeline, Dagster, Infrastructure-as-Code, CI/CD, SQL, Python, Apache Spark, Data Governance, Pulumi, ETL

About the Role

You will design, build, and maintain robust data pipelines that integrate sources across the business. You will collaborate with analysts, stakeholders, and engineering teams to gather requirements and deliver reliable data solutions. You will document pipelines and processes, develop and optimize data models, ensure data quality and governance, orchestrate and monitor pipeline execution, deploy and manage infrastructure as code, and automate CI/CD and reporting workflows to enable scalable analytics.

Requirements

  • Over 6 years of experience as a Data Engineer
  • Experience using Trusted Execution Environments (TEEs) to securely process sensitive user data
  • Strong SQL skills
  • Experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Hands-on experience with transformation and orchestration tools such as dbt, Airflow, or Dagster
  • Proficiency with Python or other scripting languages for ETL and automation
  • Familiarity with data governance and metadata management tools such as DataHub
  • Experience deploying and managing infrastructure as code with tools such as Terraform or Pulumi
  • Exposure to data integration and ingestion tools such as Airbyte or Segment
  • Experience with big data and distributed processing tools such as Apache Spark, AWS EMR, and S3
  • Experience maintaining and improving reporting solutions and dashboards such as Preset, Superset, or Cube.dev
  • Familiarity with CI/CD practices and automation tools such as GitHub Actions
  • Willingness to submit to background checks, including employment, education, and criminal record checks

Responsibilities

  • Design, build, and maintain robust data pipelines
  • Integrate data sources across the business
  • Collaborate with analysts and business stakeholders
  • Align timelines and discuss architecture with engineering teams
  • Document data pipelines and best practices
  • Develop and optimize data models
  • Ensure data quality, security, and governance
  • Orchestrate and monitor pipeline execution
  • Deploy and manage infrastructure as code
  • Build and tune big data pipelines using SQL, Python, and distributed processing frameworks
  • Work with cloud data warehouses to enable insights and analytics
  • Maintain and update reporting solutions and user dashboards
  • Automate workflows and improve CI/CD pipelines

Benefits

  • Competitive benefits
  • Equity
  • Unlimited vacation/holidays
  • Flexible working arrangements
  • Remote first


The vacancy text is reproduced unchanged

Source -