Senior Engineer - MLOps + GCP | Fractal

  • Company name: Fractal
  • Working location: Office Location
  • Job type: Full Time

Experience: 5 - 9 years required

Pay: Salary information not included

Type: Full Time

Location: Karnataka

Skills: Machine Learning, Model Development, SQL, Git, Cloud Computing, GCP, MLOps, Data Scientists, Data Engineers, Model Operations, Model Tracking, Model Experimentation, Model Automation, ML Pipelines, MLOps Components, Model Repository, MLflow, Kubeflow Model Registry, Machine Learning Services, Kubeflow, DataRobot, Hopsworks, Dataiku

About Fractal

Job Description

About the job

Company Overview: Fractal Analytics is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work Institute, and recognized as a Cool Vendor and a Vendor to Watch by Gartner.

Job Location: Bangalore, Pune, Gurgaon, Noida, Mumbai, Chennai, Hyderabad and Coimbatore

Job Description: Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.

Job Responsibilities: As an MLOps Engineer, you will work collaboratively with Data Scientists and Data Engineers to deploy and operate advanced analytics machine learning models. You'll help automate and streamline model development and model operations. You'll build and maintain tools for deployment, monitoring, and operations. You'll also troubleshoot and resolve issues in development, testing, and production environments.
  • Enable model tracking, model experimentation and model automation
  • Develop scalable ML pipelines
  • Develop MLOps components in the machine learning development life cycle using a Model Repository (either of: MLflow, Kubeflow Model Registry) and Machine Learning Services (either of: Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS)
  • Work across all phases of the model development life cycle to build MLOps components
  • Build the knowledge base required to deliver increasingly complex MLOps projects on the Cloud (GCP)
  • Be an integral part of client business development and delivery engagements across multiple domains

Job Qualifications

Required Qualifications:

  • 5-9 years' experience building production-quality software
  • Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
  • Basic knowledge of MLOps, machine learning and Docker
  • Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
  • Experience developing CI/CD components for production-ready ML pipelines
  • Database programming using any flavor of SQL
  • Knowledge of Git for source code management
  • Ability to collaborate effectively with highly technical resources in a fast-paced environment
  • Ability to solve complex challenges/problems and rapidly deliver innovative solutions
  • Team handling, problem solving, project management and communication skills, and creative thinking
  • Foundational knowledge of Cloud Computing on GCP
  • Hunger and passion for learning new skills

Education: B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent.