Data Engineer - Bengaluru, India - CareerXperts Consulting
Description
Data Engineer - Become a Data Whisperer
Are you passionate about building the foundation for data-driven decision making?
We're looking for a talented Data Engineer to join our team and play a key role in unlocking the power of our data. You'll design, develop, and maintain data pipelines that transform raw data into valuable insights for our business.
What You'll Do:
- Design, develop, and implement ETL pipelines using PySpark
- Leverage your SQL expertise to wrangle and transform data, including advanced concepts like CTEs and window functions
- Use Python for data collection, validation, diagnostics, and pipeline building, with strong Pandas skills
- Collaborate with data scientists and analysts to understand their data needs and translate them into technical solutions
The Ideal Candidate:
- Proven experience as a Data Engineer with a strong understanding of data architectures and ETL processes
- Expertise in SQL, including advanced concepts like CTEs and window functions
- Proficiency in Python, with experience in libraries such as Pandas and NumPy
- Experience developing ETL pipelines, specifically with PySpark
- Experience with AWS Cloud services, particularly S3, Step Functions, and Glue
- Experience with data governance, security, and compliance best practices
- Strong communication and collaboration skills
- A passion for learning and staying up to date with the latest data technologies
Bonus Points:
- Familiarity with Databricks is a major plus
- Knowledge of Athena and QuickSight is a bonus
Ready to join a team that values innovation and collaboration? Apply today! Please write to to get connected.
#DataEngineering #Python #SQL #AWS #ETL