SRE-Data Engineer - UNIMORPH CONSULTING LLP

Experience: 3 - 6 years required

Pay: INR 700,000 - 1,400,000 per year

Type: Full Time

Location: Pune

Skills: Jenkins, Spark, Kubernetes, Kafka, Airflow, Hadoop, Docker, Hive, Oracle, Prometheus

Job Description

As a hiring partner, we are hiring SRE-Data Engineers for the Pune location. These are direct, full-time positions on the payroll of the hiring organization.

Interested candidates can share a Word-format resume, along with CTC and notice period details, at info@unimorphtech.com.

Note: We are looking for immediate joiners who can join within 30 days.

 

Role: SRE-Data Engineer
Experience: 3 - 6 years
Location: Pune

# Must-Have Skills
Hands-on experience with Airflow, Spark, Hive, Ranger, Docker, Jenkins CI/CD pipelines, Hadoop, Kafka, Zookeeper, YARN, Apache NiFi, and Kubernetes; strong SQL skills (Oracle/Exadata preferred).

# Description

We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform.
You will work with cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

# Key Responsibilities

  • Ensure platform uptime and application health as per SLOs/KPIs
  • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
  • Debug and resolve complex production issues, performing root cause analysis
  • Automate routine tasks and implement self-healing systems
  • Design and maintain dashboards, alerts, and operational playbooks
  • Participate in incident management, problem resolution, and RCA documentation
  • Own and update SOPs for repeatable processes
  • Collaborate with L3 and Product teams for deeper issue resolution
  • Support and guide L1 operations team
  • Conduct periodic system maintenance and performance tuning
  • Respond to user data requests and ensure timely resolution
  • Address and mitigate security vulnerabilities and compliance issues

# Technical Skillset

  • Hands-on with Spark, Hive, Hadoop, Kafka, Ranger
  • Strong Linux fundamentals and scripting (Python, Shell)
  • Experience with Apache NiFi, Airflow, YARN, and Zookeeper
  • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
  • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines
  • Strong SQL skills (Oracle/Exadata preferred)
  • Familiarity with DataHub, DataMesh, and security best practices is a plus

Shift: 24/7