Responsibilities:
- Design, develop, and deploy scalable Big Data applications using Hadoop ecosystem technologies (HDFS, Hive, etc.)
- Collaborate with data scientists and business analysts to understand requirements and translate them into technical solutions
- Write efficient, optimized code to process and analyze large volumes of data
- Implement data ingestion processes from various data sources to the Big Data platform
- Create and maintain data pipelines and workflows for data processing and analytics
- Perform data quality checks and ensure data integrity throughout the system
- Troubleshoot and debug production issues to identify root causes and resolve them
- Stay updated with the latest Big Data technologies and tools to drive innovation and improve performance
- Collaborate with cross-functional teams to ensure seamless integration of Big Data applications with other systems
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4-6 years of experience in Big Data development using Hadoop ecosystem technologies (HDFS, Hive, etc.)
- Proficient in SQL for data querying and manipulation
- Experience with data warehousing and relational databases
- Hands-on experience with data integration and ETL processes
- Strong understanding of distributed computing principles and frameworks
- Experience with scripting languages such as Python or shell
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Ability to work independently and in a team environment
- Proactive attitude towards learning and professional development