CA0149 | Sr. Data Engineer | Cloudangles


Experience: 4 - 6 years required

Pay: Salary information not included

Type: Full Time

Location: Noida

Skills: SQL, Kafka, ETL, Data Analysis, BI Tools, DevOps, Hadoop, Spark, Snowflake, Java, Scala, Python, Oracle, DB2, Excel, Tableau, MicroStrategy, TensorFlow, PyTorch, MLPACK, Teradata, Power BI


Job Description

Job Overview

The Sr. Data Engineer II performs development activities within a data engineering team and helps guide, onboard, and train Data Engineer I staff. You will work closely with product management, engineering, account management, ETL, data warehouse, business intelligence, and reporting teams as you develop data pipelines and enhancements and investigate and troubleshoot issues. You possess an understanding of multiple data structures, including relational and non-relational data models.

Roles and Responsibilities

  • Extract, cleanse, and load data.
  • Build data pipelines using SQL, Kafka, and other technologies.
  • Investigate and document new data sets.
  • Triage incoming bugs and incidents.
  • Perform technical operations tasks.
  • Investigate and troubleshoot issues with data and data pipelines.
  • Participate in sprint refinement, planning, and kick-off to help estimate stories and raise awareness of additional implementation details.
  • Help monitor areas of the data pipeline and alert the team when issues arise.
  • Implement new quality assurance rules to maintain consistent and accurate data.

Knowledge

  • A solid understanding of data science concepts is required.
  • Data analysis expertise.
  • Working knowledge of ETL tools.
  • Knowledge of BI tools.
  • Experience handling DevOps tasks is preferable.
  • Experience with big data technologies such as Hadoop and Kafka.
  • Extensive experience with ML frameworks and libraries, including TensorFlow, Spark, PyTorch, and MLPACK.

Skills (Technical)

  • Experience designing and implementing a full-scale data warehouse solution based on Snowflake.
  • A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Java, Spark, Scala, and Python.
  • Experience with complex data warehouse solutions on Teradata, Oracle, or DB2 platforms, with at least two years of hands-on experience.
  • Excellent proficiency with Snowflake internals and with integrating Snowflake with other technologies for data processing and reporting.
  • Experience with or knowledge of Excel and analytical tools such as Tableau, MicroStrategy, and Power BI would be an added advantage.

Abilities (Competencies)

  • Works independently.
  • Collaborates with team members.
  • Self-motivated.

Typical Experience

4 - 6 years