About the Client:
Margo Networks is building an on-demand entertainment platform that enables consumers to access relevant content at super-fast speeds. The client is setting up well-known zones across Transport, Hospitality, Retail, Healthcare and even public places so that everyone can enjoy high-quality, buffer-free, on-demand entertainment - without depending on access to 4G or broadband.
EXPERIENCE:
- 10 to 15 years of demonstrable experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions.
- Expert-level proficiency in at least one of Java, C++ or Python (preferred). Scala knowledge is a strong advantage.
- Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop (YARN, MapReduce, HDFS) and associated technologies - one or more of Hive, HBase, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
- Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
- Operating knowledge of cloud computing platforms (AWS/Azure ML)
- Experience working within a Linux computing environment and with command-line tools, including knowledge of shell/Python scripting for automating common tasks
- Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.
SKILL SETS:
Must Have (hands-on):
- Scala or Python expertise
- Linux environment and shell scripting
- Distributed computing frameworks (Hadoop or Spark)
Desirable (would be a plus):
- A statistical or machine-learning DSL such as R
- Distributed and low latency (streaming) application architecture
- Row-store distributed DBMSs such as Cassandra
- Familiarity with API design
EDUCATION: B.E/B.Tech in Computer Science or related technical degree
Required Candidate Profile:
Looking for a candidate who is well versed in Kafka, Scala and various Hadoop ecosystem components such as HDFS, Hive, MapReduce, Oozie, etc.