TL - Big Data:
The role requires tuning components for high performance and scalability using big data technologies such as Pig, Hadoop, Hive, HBase, Vertica, and Redis, and scripting languages such as Python, Java, Scala, and R. This will involve working with both the product and tech teams to help achieve a state-of-the-art environment that meets current and future business objectives.
Technical Skills Required:
Primary: Fluency in at least one programming or scripting language such as Python, R, Ruby, Scala, or Java for fast prototyping. SQL fundamentals and hands-on experience with one RDBMS (MySQL, MSSQL, or PostgreSQL) are a must.
Principal Responsibilities:
Participate in all aspects of the development lifecycle including requirements analysis, estimating, design, implementation, testing, and release.
Essential Skills
Good analytical reasoning skills.
Good understanding of at least one of the following fields: machine learning, statistical modeling, data mining or information retrieval.
Develop and apply machine learning and statistical analysis methods such as clustering, classification, collaborative filtering, association rules, time-series analysis, advanced regression methods, and hypothesis testing; experience working with large datasets and problems.
Good understanding of database fundamentals.
Experience with command-line scripting, data structures and algorithms.
Understanding of MapReduce, Pig, and/or Hive is a plus.
Competencies Required
Communicate results and educate others through reports and presentations.
Ready to work in a fast-paced, challenging environment.
Develop, test & implement program logic.
Translate business needs into end user applications.
Good team player.
Confident, passionate and enthusiastic attitude.
Qualification
Bachelor's/Master's degree or PhD in Computer Science, Statistics, Mathematics, Physics, Operations Research, or a related quantitative field from premier institutes (IITs, ISI, IISc), or a strong academic track record.