Data Engineer - Hyderabad, India - PRUDENT GLOBALTECH SOLUTIONS PRIVATE LIMITED

    PRUDENT GLOBALTECH SOLUTIONS PRIVATE LIMITED - Hyderabad, India

    Full time
    Description

    Company Overview

    For more than 25 years, Prudent Technologies & Consulting has helped customers secure the technical and functional resources needed to deliver mission-critical IT & business initiatives. Prudent's specialty practices include Data Sciences, Cybersecurity, App Dev, and Enterprise CRM, and the firm partners with leading technology companies such as Salesforce, Splunk, Microsoft, and Databricks.

    Job Overview

    This is a full-time, mid-level Data Engineer position based in Hyderabad, Telangana, India, requiring 4 to 6 years of experience.

    Qualifications and Skills

    • Bachelor's or master's degree in Computer Science, Engineering, Information Technology, or a related field.
    • 3+ years of proven experience in data engineering with a strong emphasis on ETL pipeline design and API development.
    • Expertise in Python, with extensive experience using data processing libraries (e.g., Pandas, NumPy) and ETL tools.
    • Proficient in developing RESTful APIs with Flask or Django and familiar with API authentication and authorization mechanisms.
    • Solid understanding of SQL and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
    • Experience with cloud services (e.g., AWS, Google Cloud, Azure) and their data services is highly desirable.
    • Strong knowledge of data warehousing concepts, data modeling, and schema design.
    • Excellent problem-solving skills and the ability to manage complex projects with tight deadlines.
    • Strong communication skills, capable of working collaboratively with both technical and non-technical team members.

    Roles and Responsibilities

    • Design, build, and maintain efficient, reliable, and scalable ETL pipelines to extract, transform, and load data from various sources into data stores using Python (an illustrative pipeline sketch follows this list).
    • Develop RESTful APIs using Flask or Django to enable seamless data access and manipulation by internal and external systems (see the API sketch after this list).
    • Ensure optimal data processing architecture by implementing effective database solutions and data storage practices.
    • Collaborate with the engineering team to integrate ETL processes and APIs into application ecosystems.
    • Monitor, troubleshoot, and optimize data systems to ensure their reliability and performance.
    • Manage data security, backup, and recovery specifications to ensure data integrity and availability.
    • Document ETL processes, API endpoints, and data models, maintaining clear and concise documentation for all developed systems.
    • Keep abreast of new technologies and advocate for their adoption where they can enhance the functionality and efficiency of data systems.
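
    As a purely illustrative sketch of the ETL responsibility above, the following Python pipeline uses Pandas and SQLAlchemy to extract records from a CSV file, transform them, and load them into a relational target. The file name, connection string, table, and column names (order_date, quantity, unit_price) are hypothetical placeholders, not details of this role's actual systems.

        import pandas as pd
        from sqlalchemy import create_engine

        # Hypothetical source file and target connection string, for illustration only.
        SOURCE_CSV = "orders.csv"
        TARGET_DB_URL = "postgresql://user:password@localhost:5432/warehouse"

        def extract(path: str) -> pd.DataFrame:
            """Read raw records from a CSV source."""
            return pd.read_csv(path)

        def transform(df: pd.DataFrame) -> pd.DataFrame:
            """Clean the raw data: drop duplicates, parse dates, derive an order total."""
            df = df.drop_duplicates()
            df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
            df = df.dropna(subset=["order_date"])
            df["total"] = df["quantity"] * df["unit_price"]
            return df

        def load(df: pd.DataFrame, table: str, db_url: str) -> None:
            """Append the transformed frame to a relational target table."""
            engine = create_engine(db_url)
            df.to_sql(table, engine, if_exists="append", index=False)

        if __name__ == "__main__":
            load(transform(extract(SOURCE_CSV)), "orders_clean", TARGET_DB_URL)

    In practice a pipeline like this would also add logging, retries, and incremental loads rather than plain appends, but the extract/transform/load structure stays the same.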
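
    Likewise, here is a minimal sketch of the REST API responsibility, assuming Flask with a simple bearer-token check. The token, endpoint paths, and in-memory store are assumptions made for the example; a production service would back them with a real database and an authentication provider.

        from flask import Flask, abort, jsonify, request

        app = Flask(__name__)

        # Hypothetical static token and in-memory store, standing in for real auth and storage.
        API_TOKEN = "example-token"
        RECORDS = {1: {"id": 1, "name": "sample record"}}

        @app.before_request
        def check_token():
            """Reject any request that does not carry the expected bearer token."""
            auth = request.headers.get("Authorization", "")
            if auth != f"Bearer {API_TOKEN}":
                abort(401)

        @app.route("/records/<int:record_id>", methods=["GET"])
        def get_record(record_id):
            """Return a single record, or 404 if it does not exist."""
            record = RECORDS.get(record_id)
            if record is None:
                abort(404)
            return jsonify(record)

        @app.route("/records", methods=["POST"])
        def create_record():
            """Insert a new record from the JSON request body."""
            payload = request.get_json(force=True)
            new_id = max(RECORDS) + 1
            RECORDS[new_id] = {"id": new_id, **payload}
            return jsonify(RECORDS[new_id]), 201

        if __name__ == "__main__":
            app.run(debug=True)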