DevOps Engineer For Sensitive Data Detection HCLTech
HCLTech
Experience: 8 - 12 years required
Pay: Salary information not included
Type: Full Time
Location: Hyderabad
Skills: Ansible, Monitoring, Linux, Azure Kubernetes Service (AKS), CI/CD pipelines, Infrastructure as Code, automation solutions, Helm charts, GitLab pipelines, Terraform, Azure cloud platform
Job Description
The DevOps Engineer for Sensitive Data Detection is responsible for a range of key tasks: deploying cloud infrastructure and services, managing day-to-day operations, and troubleshooting cloud infrastructure issues; deploying and managing AKS clusters; collaborating with development teams to integrate infrastructure and deployment pipelines into the SDLC; building and setting up new CI/CD tooling and pipelines for application build and release; and implementing migrations, upgrades, and patches across all environments.

In this position, you will join the Sensitive Data Detection Services team based in the Pune EON-2 Office. The team analyzes, develops, and delivers global solutions to maintain or change IT systems in collaboration with business counterparts. The team culture emphasizes partnership with the business, transparency, accountability, empowerment, and a passion for the future.

As an experienced DevOps Engineer, you will play a significant role in building and maintaining Git and ADO CI/CD pipelines, establishing scalable AKS clusters, deploying applications across environments, and more. You will work alongside highly skilled engineers who excel at delivering scalable enterprise engineering solutions.

To excel in this role, you should ideally have 8 to 12 years of experience in DevOps, with a strong grasp of DevOps principles and best practices and hands-on experience implementing CI/CD pipelines, infrastructure as code, and automation solutions. Proficiency in Azure Kubernetes Service (AKS) and Linux is crucial, including monitoring, analyzing, configuring, deploying, enhancing, and managing containerized applications on AKS. Experience managing Helm charts and ADO and GitLab pipelines is essential.
Additionally, familiarity with cloud platforms such as Azure, along with experience in cloud services, infrastructure provisioning, and management, is required. You will also be expected to help implement infrastructure as code (IaC) solutions using tools such as Terraform or Ansible to automate the provisioning and configuration of cloud and on-premises environments, maintain stability in non-production environments, support GitLab pipelines across various Microsoft Azure resources and services, identify opportunities for automation, and interact effectively with business and development teams at all levels. Flexibility in working hours to accommodate international project setups, and collaboration with cross-functional teams (Database, UNIX, Cloud, etc.), are also key aspects of this role.
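To illustrate the kind of IaC work described above, here is a minimal Terraform sketch for provisioning an AKS cluster with the azurerm provider. All names, the region, and the node sizing are hypothetical placeholders, not values taken from this posting.

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

# Hypothetical resource group for a non-production environment.
resource "azurerm_resource_group" "sdd" {
  name     = "rg-sdd-nonprod"
  location = "East US"
}

# Minimal AKS cluster: one small default node pool and a
# system-assigned managed identity.
resource "azurerm_kubernetes_cluster" "sdd" {
  name                = "aks-sdd-nonprod"
  location            = azurerm_resource_group.sdd.location
  resource_group_name = azurerm_resource_group.sdd.name
  dns_prefix          = "sddnonprod"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

A configuration like this would typically be applied per environment via a GitLab or ADO pipeline stage (`terraform plan` on merge request, `terraform apply` on approval), which is one common way the pipeline and IaC responsibilities above fit together.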