Company hidden
4 hours ago

Software Engineer I (Data Engineering)

Work format
onsite
Employment type
full-time
Grade
junior
English
B2
Country
US
Vacancy from Hirify.Global, a list of international tech companies

Job description

TL;DR

Software Engineer I (Data Engineering): Developing and maintaining hirify.global's data solutions, leveraging big data technologies such as Google Cloud, Hive, and AWS big data systems, with an emphasis on gaining insights into customer experiences. Focus on building data frameworks, ingestion pipelines, and tools to clean, transform, and validate data for analytics and reporting.

Location: San Diego, California; Mountain View, California

Company

hirify.global is the global financial technology platform that powers prosperity for the people and communities we serve.

What you will do

  • Participate in the entire product lifecycle for data, software products, and services.
  • Develop and maintain hirify.global’s data solutions, applying software engineering methodologies and industry best practices for data products.
  • Clean, transform, and validate data for use in analytics and reporting.
  • Monitor data quality and pipeline performance, troubleshoot, and resolve data issues.
  • Design and develop ETL jobs across multiple big data platforms and tools including S3, EMR, Hive, Spark SQL, and PySpark (see the sketch after this list).
  • Actively stay abreast of industry best practices, share learnings, and experiment with and apply cutting-edge technologies.
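
For a rough sense of the day-to-day work, here is a minimal PySpark sketch of an ETL job along these lines: read raw events, clean and transform them, run a basic validation check, and write the result for analytics. The bucket paths and column names are hypothetical placeholders.

# Minimal PySpark ETL sketch: extract raw events, clean/transform,
# validate, and load curated data for analytics and reporting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON events from a placeholder S3 location.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, normalize types, derive a date column.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Validate: fail fast if required fields could not be parsed.
bad_rows = clean.filter(F.col("event_ts").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed timestamp validation")

# Load: write partitioned Parquet for downstream analytics.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)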

Requirements

  • BS or MS in Computer Science, Data Engineering, or related field.
  • 1+ years of core development experience.
  • Proficiency in developing software in Java (Spring & Spring Boot), in Scala for Spark Streaming and Spark applications, or in other JVM-based languages.
  • Working knowledge of SQL, XML, JSON, and YAML; very strong Python and Linux skills.
  • Knowledge of tools and frameworks such as Docker, Spark, Scala, Jupyter notebooks, Databricks notebooks, Kubernetes, feature management platforms, and SageMaker.
  • Experience with cloud platforms such as AWS, Azure, or GCP; for example Amazon Web Services: EC2, S3, and EMR (Elastic MapReduce), or equivalent cloud computing approaches.

Nice to have

  • Experience with low-latency NoSQL datastores (such as DynamoDB, HBase, Cassandra, MongoDB).
  • Experience with building stream-processing applications using Spark Streaming, Flink, etc.
  • Exposure to unit testing frameworks.
  • Ability to research and integrate 3rd party solutions.
  • Experience evolving a mature code base toward new technologies.

Culture & Benefits

  • Competitive compensation package with a strong pay-for-performance rewards approach.
  • Eligible for a cash bonus, equity rewards, and benefits.
  • Regular pay comparisons across ethnicity and gender categories to drive ongoing fair pay for employees.
