- Minimum 8 years of experience in building and maintaining robust data pipelines, enriching data, and delivering reliable data streams.
- Experience handling complex, high-volume, multi-dimensional data and architecting data products on streaming, serverless, and microservices-based architectures and platforms.
- Experience in data warehousing, data modelling, and data architecture.
- Expert-level proficiency with relational and NoSQL databases.
- Expert-level proficiency in Python and PySpark; familiarity with Scala and Java.
- Experience with Big Data technologies and utilities (Hadoop, Spark SQL, Hive, Impala, Pig, Kafka, Airflow).
- Experience leading large-scale analytics projects using AWS services (RDS, S3, EC2, Lambda, EMR, SageMaker).
Key Roles/Responsibilities:
- Act as a technical leader in resolving problems, communicating with both technical and non-technical audiences.
- Identify and solve data pipeline issues involving consistency, integrity, and completeness.
- Lead data initiatives, architecture design discussions, and the implementation of next-generation BI solutions.
- Partner with data scientists and technical architects to build advanced, scalable, and efficient self-service BI infrastructure.
- Provide thought leadership and mentor data engineers in information presentation and delivery.