Data Engineer - Hyderabad, India - WaferWire Cloud Technologies

    Description

    Hi,

This is Sundeep from WaferWire Cloud Technologies. We are hiring a Data Engineer with experience in Azure Synapse, PySpark, and Azure Data Factory (ADF).

    Role: Data Engineer

    Experience: 3-8 Years

    Location: Hyderabad

    Job Description:

We are seeking a seasoned Data Engineer to join our team in Hyderabad. The ideal candidate will have a strong background in data engineering, with extensive experience in SQL, data pipelines, Azure Synapse data warehouses, and data modeling.

    Responsibilities:

    • Develop, construct, test, and maintain architectures such as databases and large-scale processing systems.
    • Create and maintain optimal data pipeline architecture.
    • Assemble large, complex data sets that meet functional / non-functional business requirements.
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
    • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Azure Data Factory, Airflow, and other 'big data' technologies.
    • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

    Must Have:

    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
    • Strong analytical skills related to working with unstructured datasets.
    • Advanced SQL knowledge, including query authoring, and hands-on experience with a variety of relational databases.
    • Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
    • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
    • Strong experience with Azure Synapse data warehouses.
    • Strong understanding of schemas, data models, facts, dimensions, and semantic models.
    • Experience with data pipeline and workflow management tools: Azure Data Factory, Airflow, etc.
    • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

    Good to Have:

    • Exposure to Microsoft Fabric and Azure Databricks.

    Regards,

    Sundeep