Senior Data Engineer, Data Platform (USA)
Job description
TL;DR
Senior Data Engineer, Data Platform (USA): Architect and build core services, automation tools, and integrations powering a healthcare data ecosystem, with an emphasis on scalable backend services, pipeline orchestration, and data quality monitoring. The role centers on improving platform reliability, observability, and developer experience through cross-functional collaboration.
Location: Fully remote within the contiguous United States. Candidates must live in the contiguous US and hold US citizenship or a Green Card, or operate as the sole proprietor of an LLC. No visa sponsorship (H1B, OPT, EAD, CPT, etc.). Work hours: US Eastern Time.
Company
Global technology services company with 4,000+ professionals across 130+ countries, specializing in cloud architecture, infrastructure, migration, and optimization for Fortune 500 companies and startups alike.
What you will do
- Build scalable backend services, APIs, and internal tools to automate data platform workflows like data onboarding, validation, pipeline orchestration, schema tracking, and quality monitoring.
- Integrate tools with core data infrastructure, building pipelines using Airflow, Spark, dbt, Kafka, Snowflake or similar to expose capabilities via APIs and UIs.
- Develop visualization and monitoring components for data lineage, job health, and quality metrics.
- Collaborate cross-functionally with data engineering, product, and DevOps teams to define requirements and deliver end-to-end solutions.
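The data validation and quality-monitoring responsibilities above can be sketched as a small standalone check that a pipeline task (e.g., an Airflow or Dagster step) might run over a batch of records. This is a minimal illustration only; the function, field names, and thresholds are hypothetical, not part of the actual platform:

```python
from dataclasses import dataclass

@dataclass
class QualityResult:
    check: str
    passed: bool
    detail: str

def run_quality_checks(rows, required_fields, max_null_rate=0.05):
    """Run simple batch-level quality checks over records (list of dicts).

    Returns one QualityResult per check so an orchestrator can log,
    alert, or fail the task based on the outcomes.
    """
    results = []
    # Check 1: the batch is non-empty
    results.append(QualityResult("non_empty", len(rows) > 0, f"{len(rows)} rows"))
    # Check 2: null rate per required field stays under the threshold
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows) if rows else 1.0
        results.append(
            QualityResult(f"null_rate:{field}", rate <= max_null_rate, f"{rate:.2%} null")
        )
    return results
```

In a real deployment, results like these would typically be written to a metrics store and surfaced in the monitoring and lineage components mentioned above.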
Requirements
- Reside in the contiguous United States with the documentation necessary for an independent contractor agreement. No visa sponsorship (H1B, OPT, EAD, CPT).
- 7+ years in data engineering or software development, with 5+ years building production-grade data or platform services.
- Strong Python & SQL skills and experience with at least one major data platform (Snowflake, BigQuery, Redshift or similar).
- Deep experience with streaming, distributed compute, or S3-based table formats (Spark, Kafka, Iceberg/Delta/Hudi).
- Experience with schema governance, metadata systems, data quality frameworks, orchestration tools (Airflow, Dagster, Prefect), CI/CD, Docker, and building data pipelines with dbt.
- At least 2 years in AWS.
Nice to have
- Experience with data observability, data catalog, or metadata management tools.
- Working with healthcare data (X12, FHIR).
- Data migration projects from legacy to modern technologies.
- Building internal developer platforms or data portals.
- Understanding of authentication/authorization (OAuth2, JWT, SSO).
Culture & Benefits
- Fully remote within the contiguous US; full-time, 40 hours/week; stable, long-term independent contractor agreement.
- Work during US Eastern time office hours.
Hiring process
- Recruiter interview (30 min).
- Technical interview (45 min).
- Screening with client's hiring manager (30 min).
- Client technical interview (45 min).