Data Engineer - Bengaluru, India - Mopid

    Description

    Job Title: Data Engineer (Java Backend)

    Location: Bangalore

    Experience: 3 to 6 years

    Employment Type: Full Time

    Job Description:

    We are seeking a talented and experienced Data Engineer proficient in Java to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines, ETL processes, and data infrastructure to support our organisation's data needs. The ideal candidate should have a strong background in Java development with a focus on data engineering, along with excellent problem-solving skills and a passion for working with large datasets.

    Responsibilities:

    1. Design, develop, and maintain scalable and efficient data pipelines and ETL processes using Java technologies (an illustrative sketch follows this list).

    2. Collaborate with cross-functional teams including data scientists, analysts, and software engineers to understand data requirements and design appropriate solutions.

    3. Optimise and fine-tune existing data pipelines for improved performance and reliability.

    4. Build and maintain data infrastructure, including data warehouses, data lakes, and data processing systems.

    5. Ensure data quality and integrity throughout the data pipeline by implementing effective data validation and monitoring strategies.

    6. Troubleshoot data-related issues and implement timely resolutions to minimise impact on business operations.

    7. Stay up-to-date with emerging technologies and trends in data engineering and incorporate them into our data infrastructure and processes.

    8. Document design specifications, technical solutions, and best practices for knowledge sharing and future reference.
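
    To give a concrete flavour of the pipeline work described in responsibility 1, the sketch below shows a minimal batch ETL job written against the Apache Spark Java API: it reads raw CSV files, applies a basic validation and projection, and writes partitioned Parquet. All paths, class names, and column names are illustrative assumptions for this posting, not details of Mopid's actual pipelines.

    // Minimal sketch of a Java/Spark batch ETL job (hypothetical names and paths).
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    import static org.apache.spark.sql.functions.col;

    public class OrdersEtlJob {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("orders-etl")   // hypothetical job name
                    .getOrCreate();

            // Extract: read raw events from a (hypothetical) landing path.
            Dataset<Row> raw = spark.read()
                    .option("header", "true")
                    .csv("s3a://example-bucket/landing/orders/");

            // Transform: basic validation and projection -- drop rows with a
            // missing key and keep only the columns downstream consumers need.
            Dataset<Row> cleaned = raw
                    .filter(col("order_id").isNotNull())
                    .select("order_id", "customer_id", "amount", "order_date");

            // Load: write partitioned Parquet to the warehouse/lake zone.
            cleaned.write()
                    .mode("overwrite")
                    .partitionBy("order_date")
                    .parquet("s3a://example-bucket/warehouse/orders/");

            spark.stop();
        }
    }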

    Requirements:

    1. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

    2. 3 to 6 years of professional experience in software development with a focus on data engineering.

    3. Strong proficiency in Java programming language, with hands-on experience in building scalable applications and data pipelines.

    4. Experience with distributed computing frameworks such as Apache Hadoop, Spark, or Flink.

    5. Proficiency in SQL and experience with relational databases like MySQL, PostgreSQL, or Oracle (see the JDBC sketch after this list).

    6. Experience with NoSQL databases such as MongoDB, Cassandra, or Redis is a plus.

    7. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform, and experience with related services like AWS Glue, Azure Data Factory, or Google BigQuery.

    8. Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and implement effective solutions.

    9. Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.

    10. Proven ability to manage multiple priorities and deliver results in a fast-paced and dynamic environment.
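
    As a small illustration of requirement 5 and the data-validation work mentioned above, the snippet below runs a simple data-quality query over JDBC against a PostgreSQL database. The connection URL, credentials, table, and alert condition are assumptions made for the example only.

    // Minimal JDBC data-quality check (hypothetical database and table).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class RowCountCheck {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details for illustration only.
            String url = "jdbc:postgresql://localhost:5432/warehouse";
            String sql = "SELECT COUNT(*) FROM orders WHERE order_date >= current_date";

            try (Connection conn = DriverManager.getConnection(url, "etl_user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(sql);
                 ResultSet rs = stmt.executeQuery()) {
                rs.next();
                long todaysRows = rs.getLong(1);
                // A pipeline that loaded nothing today is usually a signal to alert on.
                if (todaysRows == 0) {
                    System.err.println("Data quality alert: no orders loaded today");
                } else {
                    System.out.println("Rows loaded today: " + todaysRows);
                }
            }
        }
    }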

    Preferred Qualifications:

    1. Experience with data visualisation tools such as Tableau, Power BI, or Looker.

    2. Familiarity with containerisation technologies such as Docker and orchestration tools like Kubernetes.

    3. Knowledge of stream processing frameworks such as Apache Kafka or Apache Storm (see the Kafka consumer sketch after this list).

    4. Experience with agile software development methodologies like Scrum or Kanban.
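
    As a brief illustration of the stream-processing item above, this minimal Kafka consumer in Java subscribes to a topic and iterates over records. The broker address, topic name, and consumer group are hypothetical placeholders, not details of any real environment.

    // Minimal Kafka consumer sketch in Java (hypothetical broker, topic, and group).
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ClickstreamConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "clickstream-etl");          // assumed group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("clickstream-events")); // hypothetical topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // In a real pipeline this is where validation, enrichment,
                        // or writing to the lake would happen; here we just print.
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }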

    Join us in building innovative data solutions that drive business growth and unlock insights from large-scale datasets. If you have a passion for data engineering and thrive in a collaborative environment, we'd love to hear from you.