Data Engineer (AWS)
Job description
TL;DR
Data Engineer (AWS/Python): building and optimizing data pipelines and infrastructure for large-scale data processing, with an emphasis on scalability, automation, and cloud integration. Focus on designing robust ETL workflows, implementing data governance, and integrating AWS services to support business intelligence.
Location: Washington, DC (TS/SCI clearance required)
Company
is a professional services firm providing technical data solutions to government clients.
What you will do
- Design and implement scalable data pipelines and internal process improvements to optimize data delivery.
- Develop and maintain ETL artifacts including schemas, data dictionaries, and transforms.
- Integrate data pipelines with AWS cloud services to extract meaningful insights from raw data.
- Collaborate with stakeholders and product owners to align data infrastructure with business objectives.
- Provide Tier 3 technical support for deployed applications and dataflows.
- Coordinate and document dataflow capabilities and maintain product roadmaps.
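The pipeline and ETL work described above could be sketched as a minimal extract/transform step in Python with pandas and boto3. This is an illustrative sketch only; all names (`extract_from_s3`, `transform`, the bucket/key parameters, and the `id` column) are assumptions for the example, not details from the posting:

```python
import io

import pandas as pd


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names, then drop records missing an 'id' value."""
    df = df.copy()
    df.columns = [c.strip().lower() for c in df.columns]
    return df.dropna(subset=["id"])


def extract_from_s3(bucket: str, key: str) -> pd.DataFrame:
    """Read a CSV object from S3 (requires AWS credentials to run)."""
    import boto3  # imported lazily so transform() stays testable offline

    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj["Body"].read()))
```

In a production dataflow, a step like this would typically be wrapped in an Airflow task or NiFi processor rather than called directly.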
Requirements
- Active TS/SCI security clearance is required.
- Bachelor's degree or equivalent practical experience.
- Expertise in distributed computing frameworks for large-scale data processing.
- Proficiency in Python (pandas, PySpark), NiFi, Airflow, and AWS Lambda.
- Experience with datastores: PostgreSQL, S3, Redshift, MongoDB/DynamoDB, Redis, Elasticsearch/OpenSearch, and SQL.
- Knowledge of Docker, Kubernetes, JMS/SQS, SNS, and Kafka.