Big Data Engineer Hucon Solutions India Pvt.Ltd.
Full Time
Experience: 6 - 10 years required
Pay: INR 1800000 - INR 3000000 /year
Location: Bangalore
Skills: Apache Spark, Hadoop, big data, Scala, big data engineer
Job Description
Job Title: Big Data Engineer (Hadoop, Spark, Scala)
Experience: 6 - 10 years
Salary Range: 15 - 30 LPA
Locations: Chennai, Bangalore
Job Summary:
We are looking for a highly skilled Big Data Engineer with expertise in Hadoop, Apache Spark, and Scala to join our data engineering team. The ideal candidate will design and develop robust, scalable big data solutions for processing high-volume datasets and enabling data-driven insights across the organization.
Key Roles & Responsibilities (R&Rs):
1. Data Pipeline Development
- Design and implement large-scale data processing pipelines using Hadoop, Spark, and Scala.
- Handle both batch and real-time data processing with high efficiency and fault tolerance.
- Develop custom components and utilities for data ingestion and transformation.
2. Distributed Systems Engineering
- Work with HDFS, YARN, and related Hadoop components for large-scale data storage and access.
- Optimize Spark jobs for performance and reliability in a distributed environment.
- Tune resource allocation, data partitioning, and memory usage to maximize throughput.
3. Data Integration & Workflow Orchestration
- Integrate data from multiple internal and external sources (databases, APIs, files).
- Automate workflows using Apache Airflow, Oozie, or similar orchestration tools.
- Ensure high data quality, consistency, and availability across the pipeline.
4. Collaboration & Documentation
- Partner with analysts, data scientists, and platform teams to understand requirements and deliver data solutions.
- Document data processes, transformation logic, and pipeline architecture.
- Participate in design reviews and contribute to code and process improvements.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6-10 years of hands-on experience in Big Data Engineering.
- Proficiency in Scala and strong experience with Apache Spark (Core, SQL, Streaming).
- Deep understanding of the Hadoop ecosystem: HDFS, YARN, Hive, Sqoop, etc.
- Familiarity with data partitioning, performance tuning, and distributed processing concepts.
- Strong experience working with large-scale datasets in production environments.
Preferred Skills (Nice to Have):
- Experience with cloud-native big data platforms (e.g., AWS EMR, Azure HDInsight, GCP Dataproc).
- Knowledge of Kafka and NoSQL databases (e.g., Cassandra, HBase).
- Exposure to CI/CD, Docker/Kubernetes, and DevOps practices.
- Relevant certifications (Cloudera, Hortonworks, Databricks) are a plus.