OLA - Principal Engineer - Data Platform (13-15 yrs)
Job Location : Bangalore
- 6 days working
- Years of Exp : 13+ Years
- Product-based company/Startup exp : Mandatory
About us : www.olacabs.com
Principal Engineer : Data Platform
Responsibilities :
- Architect, design and build fault-tolerant & scalable big-data platforms and solutions, primarily based on open-source technologies.
- Build solid, auto-scaling architectures to address real-time workloads for analytical processing, lineage, data governance, and data discoverability use cases.
- Design solutions that involve complex, multi-system integration, across BUs or domains.
- Heavy hands-on coding in the Hadoop ecosystem (Java MapReduce, Spark, Scala, HBase, Hive) and build framework(s) to support data pipelines on streaming applications.
- Work on technologies related to NoSQL, SQL, and In-Memory databases.
- Conduct code-reviews to ensure code quality, consistency and best practices adherence.
- Drive alignment between enterprise architecture and business needs.
- Conduct quick proofs of concept (POCs) for feasibility studies and take them to production.
- Work with the architects to standardize data platform stacks across OLA.
- Lead fast-moving development teams using agile methodologies.
- Lead by example, demonstrating best practices for unit testing, test automation, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response.
- Work with cross-functional team members from Architecture, Product Management, Q/A and Production Operations to develop, test, and release features.
- Be a role model to software engineers pursuing a technical career path in engineering.
Must Haves :
- 13+ years of relevant experience with at least 6+ years hands-on coding in the big data domain.
- Should have experience in architecting data ecosystems for streaming data and analytical platforms.
- Expert level experience in building fault-tolerant & scalable big-data platforms and big-data solutions primarily based on the Hadoop ecosystem.
- Expert level experience with Java, Python or Scala programming.
- Expert level experience designing high throughput data services.
- Familiarity with machine learning and AI.
- Experience with big-data technologies (Hive, HBase, Spark, Kafka, Storm, MapReduce, HDFS, Zookeeper, Scylla, Cassandra, YARN) and a solid understanding of the concepts and technology ecosystem around both real-time and batch processing in Hadoop.
- Strong spoken and written communication skills.
- B.E/B.Tech/MS in Computer Science (or equivalent).
- Effective listening skills and strong collaboration.