Company hidden
Posted 1 day ago

Senior DevOps Engineer

Work format: hybrid
Employment type: full-time
Grade: senior
English: B2
Country: India
Vacancy from Hirify.Global, a list of international tech companies.

Job description

TL;DR

Senior DevOps Engineer: strengthen and scale the cloud foundation across Azure and AWS by designing, automating, and maintaining mission-critical infrastructure and owning Infrastructure as Code practices. The focus is on building secure, reliable CI/CD workflows, enabling modern data workloads through Databricks, and driving automation with Python.

Location: Hybrid role, based in Hyderabad, Telangana, India.

Company

hirify.global helps life sciences companies optimize commercialization through strategic insight, advanced analytics, and technology solutions.

What you will do

  • Develop and maintain Terraform modules for provisioning and managing cloud resources, applying best practices for state management and module design.
  • Architect and operate services in Azure and AWS, including provisioning and managing Databricks workspaces, clusters, and jobs.
  • Build, test, and maintain Azure DevOps Pipelines or GitHub Actions for automating infrastructure provisioning and application deployment.
  • Partner with client and data engineering teams to integrate infrastructure changes safely into development workflows, producing clear documentation.
  • Write and maintain automation scripts and CLI tools in Python to streamline operational tasks and contribute to internal SDKs (a minimal sketch of this kind of tooling follows this list).
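
For illustration only, here is a minimal sketch of the kind of Python operational CLI the last bullet describes: a small tool that reports running EC2 instances missing a required tag. The task, the default region, and the tag key are assumptions made for this example; they are not taken from the vacancy text.

```python
#!/usr/bin/env python3
"""Hypothetical operational CLI: report running EC2 instances missing a tag."""
import argparse

import boto3


def find_untagged_instances(region: str, required_tag: str) -> list[str]:
    """Return IDs of running instances that lack the required tag key."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing: list[str] = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {tag["Key"] for tag in instance.get("Tags", [])}
                if required_tag not in tag_keys:
                    missing.append(instance["InstanceId"])
    return missing


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Report running EC2 instances that are missing a required tag."
    )
    # Defaults below are placeholders chosen for the sketch.
    parser.add_argument("--region", default="ap-south-1", help="AWS region to scan")
    parser.add_argument("--tag", default="owner", help="Tag key that must be present")
    args = parser.parse_args()

    instances = find_untagged_instances(args.region, args.tag)
    if instances:
        print(f"Instances missing tag '{args.tag}':")
        for instance_id in instances:
            print(f"  {instance_id}")
    else:
        print(f"All running instances carry the '{args.tag}' tag.")


if __name__ == "__main__":
    main()
```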

Requirements

  • 3+ years of experience in a DevOps or SRE role with production workloads in Azure and AWS.
  • Deep hands-on Terraform experience, including Terraform Cloud, Terragrunt, module design, and testing.
  • Proven ability to build and maintain Azure DevOps Pipelines and to manage code in Azure DevOps or GitHub repos.
  • Strong Python scripting skills, capable of writing clean, testable, and reusable code.
  • Familiarity with provisioning and managing Databricks workspaces, clusters, and jobs.
  • Experience designing disaster recovery plans, automating backups, and conducting restore drills (see the backup sketch after this list).
  • Solid understanding of core Azure and AWS services (VMs, networking, storage, IAM, RDS/SQL).
  • Excellent troubleshooting abilities and clear written and verbal communication.
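
As a hedged illustration of the automated-backup item above, the sketch below takes a manual RDS snapshot with a timestamped identifier and waits until it becomes available. The instance identifier, region, and naming scheme are assumptions for the example, not details from the posting; a production setup would more likely rely on scheduled, policy-driven backups.

```python
#!/usr/bin/env python3
"""Hypothetical backup helper: take a manual RDS snapshot and wait for it."""
from datetime import datetime, timezone

import boto3


def snapshot_rds_instance(instance_id: str, region: str) -> str:
    """Create a timestamped manual snapshot of an RDS instance and block until ready."""
    rds = boto3.client("rds", region_name=region)
    snapshot_id = f"{instance_id}-manual-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    # Poll until the snapshot reaches the 'available' state.
    waiter = rds.get_waiter("db_snapshot_available")
    waiter.wait(DBSnapshotIdentifier=snapshot_id)
    return snapshot_id


if __name__ == "__main__":
    # 'reporting-db' and 'ap-south-1' are placeholder values for this sketch.
    print(snapshot_rds_instance("reporting-db", "ap-south-1"))
```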

Nice to have

  • Experience with CDK for Terraform (CDKTF) or Terragrunt for composing infrastructure patterns.
  • Familiarity with PowerShell and Bicep languages.
  • Knowledge of cloud security best practices (CIS benchmarks, Azure Policy, AWS IAM policies).
  • In-depth knowledge of VPCs, VNets, load balancers, VPNs, and cross-region connectivity.
  • Experience with serverless platforms (Azure Functions, AWS Lambda) and container registries.

Culture & Benefits

  • Join a highly collaborative, values-driven team focused on technical excellence, analytical rigor, and personal growth.
  • Opportunity to make an impact in AI innovation, building commercialization strategies, and shaping data-first solutions in life sciences.

The vacancy text is reproduced without changes.
