
Responsibilities:
- Strengthen in-house Data & AI Engineering capabilities on Databricks, reducing reliance on external ML specialists.
- Design, build, and maintain scalable, reliable data pipelines using the Databricks Lakehouse (Delta Lake, Spark, Workflows).
- Translate business requirements into data science and AI problem statements.
- Build and optimize ETL/ELT pipelines within the Databricks Lakehouse architecture.
- Enable and deploy Agentic AI and LLM-based use cases using Databricks Model Serving and MLflow.
- Orchestrate agent workflows, multi-agent pipelines, and experimentation (A/B testing, performance tracking).
- Implement data governance, security, CI/CD, and monitoring using Unity Catalog and Databricks best practices.
- Collaborate with platform and product teams to accelerate scalable AI feature delivery.
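As a rough sketch of the agent-orchestration responsibility above, the routing step can be expressed as a plain Python dispatcher. The agent names, stubbed behaviors, and routing rule below are illustrative assumptions, not part of the role description; on Databricks each agent would typically be an MLflow-logged model or a Model Serving endpoint.

```python
# Minimal illustration of routing a task through a small set of "agents".
# Both agents are stubs: no real LLM or SQL engine is called.

def sql_agent(question: str) -> str:
    # Stub: a real agent would translate the question into SQL
    # against Lakehouse tables.
    return f"SELECT ... -- derived from: {question}"

def summary_agent(text: str) -> str:
    # Stub: a real agent would call an LLM serving endpoint.
    return f"summary({text[:20]}...)"

AGENTS = {"query": sql_agent, "summarize": summary_agent}

def orchestrate(task: str, payload: str) -> str:
    """Route a task to its registered agent. A production pipeline
    would add retries, tracing, and A/B experiment tagging here."""
    agent = AGENTS.get(task)
    if agent is None:
        raise ValueError(f"no agent registered for task {task!r}")
    return agent(payload)
```

The dictionary-based registry keeps adding a new agent to a one-line change, which is the property multi-agent pipelines generally need.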
Must Have Skills:
- Strong hands-on experience with Databricks (Lakehouse, Delta Lake, Workflows, MLflow).
- Expertise in Spark and distributed data processing.
- Experience building and maintaining production-grade data pipelines.
- Experience deploying and managing AI/LLM agents within Databricks.
- Understanding of agent orchestration and agent lifecycle management.
- Strong Python and SQL skills.
- Knowledge of Unity Catalog and data governance practices.
- Exposure to Databricks Model Serving and Feature Store.
- Experience with AWS integration (S3, Lambda, Glue).
- Understanding of LLM integration and prompt engineering.
- Familiarity with monitoring and cost optimization in Databricks.
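As a small illustration of the production-pipeline skills listed above, ETL transforms are easiest to test when written as pure functions. The column names and cleaning rules below are invented for the example; on Databricks the same shape would usually take and return a Spark DataFrame rather than a list of dicts.

```python
def clean_orders(rows):
    """Drop records missing an order id and normalize amounts to floats.
    Schema ("order_id", "amount") and rules are illustrative only."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):
            continue  # reject records missing the join key
        cleaned.append({
            "order_id": row["order_id"],
            "amount": float(row.get("amount", 0)),
        })
    return cleaned
```

Keeping the transform free of I/O means the same logic can be unit-tested locally and then wired into a Databricks Workflows task that reads from and writes to Delta tables.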