Databricks Engineer - AWS/PySpark
Risk Resources
Experience: Minimum 3 years required
Pay: Salary information not included
Type: Full Time
Location: Maharashtra
Skills: Data Engineering, AWS Glue, IAM, Data Modeling, Data Warehousing, Data Governance, Data Security, Databricks on AWS, PySpark, AWS Services, S3, Data Processing Workflows, Data Pipeline Management
Job Description
We are seeking an experienced Databricks on AWS and PySpark Engineer to join our team. You will design, build, and maintain large-scale data pipelines and architectures on Databricks on AWS, and optimize data processing workflows with PySpark. You will collaborate with data scientists and analysts to develop data models and to ensure data quality, security, and compliance with industry standards. Day-to-day work includes troubleshooting data pipeline issues, optimizing performance, and staying current with industry trends and emerging data engineering technologies.

You should have at least 3 years of data engineering experience with a focus on Databricks on AWS and PySpark, strong expertise in PySpark and Databricks for data processing, modeling, and warehousing, and hands-on experience with AWS services such as S3, Glue, and IAM. Proficiency in data engineering principles, data governance, and data security is essential, along with experience managing data processing workflows and data pipelines. Strong problem-solving skills, attention to detail, and effective communication and collaboration are required, as is the ability to work in a fast-paced, dynamic environment and adapt to changing requirements and priorities.
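For illustration only, below is a minimal PySpark sketch of the kind of Databricks-on-AWS workflow this role involves: reading raw data from S3, applying a simple transformation, and writing the result to a Delta table. The bucket, paths, column names, and table name are hypothetical placeholders, not details from this posting, and S3 access is assumed to come from an IAM role attached to the cluster.

from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` already exists;
# getOrCreate() simply reuses it when run in a notebook or job.
spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Hypothetical S3 location; credentials are expected to come from an
# IAM instance profile or role, not from values embedded in code.
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Example transformation: drop null amounts, derive an event date,
# and aggregate per account per day.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "account_id")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Persist as a Delta table (the default table format on Databricks).
(daily.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.daily_account_totals"))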