Sr. Data Engineer - New Delhi, India - MDMS Recruiting LLC

    MDMS Recruiting LLC, New Delhi, India

    2 weeks ago

    Job Description
    W2 or Self Inc. only | this position is not open to subcontracting (no C2C).

    Senior Data Engineer to help design and build the enterprise data platform (EDP).

    Design, Develop, and Implement:

    End-to-end data solutions (ingest, storage, integration, processing, access) on AWS.

    Data intake/request/onboarding services and service documentation.

    Data ingestion services for batch/real-time data ingestion and service documentation.

    Data processing services for batch/real-time workloads (Glue/Kinesis/EMR) and service documentation.
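
    For illustration only, a minimal sketch of a Glue (PySpark) batch job of the kind this service would run; the database, table, and bucket names are hypothetical placeholders, not from this posting:

        import sys
        from awsglue.context import GlueContext
        from awsglue.job import Job
        from awsglue.utils import getResolvedOptions
        from pyspark.context import SparkContext

        # Standard Glue job boilerplate: resolve the job name and set up contexts.
        args = getResolvedOptions(sys.argv, ["JOB_NAME"])
        glue_context = GlueContext(SparkContext())
        job = Job(glue_context)
        job.init(args["JOB_NAME"], args)

        # Read a raw table registered in the Glue Data Catalog (hypothetical names).
        source = glue_context.create_dynamic_frame.from_catalog(
            database="raw_db", table_name="events"
        )

        # Land the curated output in the data lake as Parquet (hypothetical bucket).
        glue_context.write_dynamic_frame.from_options(
            frame=source,
            connection_type="s3",
            connection_options={"path": "s3://example-curated-bucket/events/"},
            format="parquet",
        )
        job.commit()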

    Data storage services for the data lake (S3), data warehouses (RDS/Redshift), and data marts, and service documentation.

    Data services layer including Athena, Redshift, RDS, microservices and APIs.

    Pipeline orchestration services including Lambda, Step Functions, and MWAA (optional).
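
    As a hedged sketch of this orchestration layer (every name and ARN below is a hypothetical placeholder), a two-step Lambda pipeline registered as a Step Functions state machine via boto3:

        import json
        import boto3

        # Amazon States Language definition: run an ingest Lambda, then a validate Lambda.
        definition = {
            "Comment": "Illustrative two-step ingestion pipeline",
            "StartAt": "Ingest",
            "States": {
                "Ingest": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ingest",
                    "Next": "Validate",
                },
                "Validate": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
                    "End": True,
                },
            },
        }

        sfn = boto3.client("stepfunctions")
        sfn.create_state_machine(
            name="edp-ingest-pipeline",  # hypothetical name
            definition=json.dumps(definition),
            roleArn="arn:aws:iam::123456789012:role/edp-sfn-role",  # hypothetical role
        )

    Keeping the state-machine definition in code rather than the console makes the pipeline reviewable and repeatable, which also feeds the CI/CD responsibilities below.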

    Data security services (IAM/KMS/Secrets Manager, encryption, anonymization, RBAC) and service documentation.

    Data access provisioning services (accounts, IAM roles, RBAC), processes, documentation, and education.

    Data provisioning services for data consumption patterns including microservices, APIs and extracts.

    Metadata capture and catalog services for the data lake (S3/Athena), data warehouses (RDS/Redshift), and microservices/APIs.

    Metadata capture and catalog services for pipeline/log data for monitoring/support.

    Architect and implement a CI/CD strategy for the EDP.

    Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and SNS.
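
    A minimal producer-side sketch, assuming a hypothetical stream named edp-events and JSON events keyed by device_id:

        import json
        import time
        import boto3

        kinesis = boto3.client("kinesis")

        def publish(event: dict) -> None:
            # Partition by device_id so records spread evenly across shards.
            kinesis.put_record(
                StreamName="edp-events",  # hypothetical stream name
                Data=json.dumps(event).encode("utf-8"),
                PartitionKey=str(event["device_id"]),
            )

        publish({"device_id": 42, "reading": 3.14, "ts": time.time()})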


    Migrate data from traditional relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift.

    Migrate data from APIs to the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift.
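
    One hedged sketch of the API-to-lake half of this work; the endpoint, bucket, and key layout are hypothetical. It pulls a page of records over HTTPS and lands it unmodified in the raw zone of S3, partitioned by load date:

        import datetime
        import json
        import boto3
        import requests

        s3 = boto3.client("s3")

        # Hypothetical REST endpoint; a real source would add auth and pagination.
        resp = requests.get("https://api.example.com/v1/orders", timeout=30)
        resp.raise_for_status()

        # Partition the raw zone by load date for incremental downstream processing.
        key = f"raw/orders/load_date={datetime.date.today():%Y-%m-%d}/page-1.json"
        s3.put_object(
            Bucket="example-data-lake",  # hypothetical bucket
            Key=key,
            Body=json.dumps(resp.json()).encode("utf-8"),
        )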

    Implement cost/spend monitoring for the AWS-based data platform.
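
    One way this could start (a sketch, assuming boto3 and Cost Explorer are enabled on the account; the date range is just an example): pull one month's unblended cost grouped by service:

        import boto3

        ce = boto3.client("ce")

        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example range
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
        )
        for group in resp["ResultsByTime"][0]["Groups"]:
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{group['Keys'][0]}: ${amount:,.2f}")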

    Implement audit reporting for access to the AWS-based data platform.
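
    A hedged sketch of one audit-reporting building block: listing recent S3 management events from CloudTrail with boto3 (auditing data-plane access would typically query CloudTrail logs in S3 via Athena instead):

        import boto3

        cloudtrail = boto3.client("cloudtrail")

        # Page through recent management events recorded against the S3 service.
        paginator = cloudtrail.get_paginator("lookup_events")
        pages = paginator.paginate(
            LookupAttributes=[
                {"AttributeKey": "EventSource", "AttributeValue": "s3.amazonaws.com"}
            ]
        )
        for page in pages:
            for event in page["Events"]:
                print(event["EventTime"], event.get("Username", "-"), event["EventName"])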


    Contribute to the implementation of a data strategy that enables robust, holistic, data-driven decision making.

    Partner with the immediate engineering team, product owner, IT, and partners on the EDP agenda.

    Leverage and continuously develop best practices, standards, and frameworks.

    Provide technology thought leadership, consulting, and coaching/mentoring.

    Work with the scrum master to develop and own the backlog, stories, epics, and sprints.

    System design and architecture for EDP products/applications.


    Work closely with various stakeholders (BI, AI/ML, MDM, and other teams) to understand their use cases and design optimal solutions for them.

    Execute the technical design and infrastructure strategy for the EDP.

    Implement POCs for any new technology or tools to be adopted on the EDP and onboard them for real use cases.

    Qualifications

    Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience.

    Experience implementing and supporting data platforms on AWS for large enterprises.

    Full-stack development experience building secure internal-facing applications.

    Programming experience with Java, Python/Scala, and shell scripting.


    Solid experience with AWS services such as CloudFormation, S3, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, Secrets Manager, etc.

    Solid experience implementing solutions on AWS-based data lakes preferred.

    Experience implementing metadata solutions leveraging AWS non-relational data services such as ElastiCache and DynamoDB.

    AWS Solutions Architect or AWS Big Data Certification preferred.

    Experience with AWS data lakes, data warehouses, and business analytics.


    Experience with and understanding of core AWS services such as IAM, CloudFormation, EC2, S3, EMR/Spark, Glue, DataSync, CloudHealth, CloudWatch, Lambda, Athena, and Redshift.

    Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS.

    Experience with DevOps and Continuous Integration/Delivery (CI/CD) concepts and tools.

    Experience with business intelligence tools such as Tableau, Power BI, or equivalent.

    Knowledge of ETL/ELT.

    Awareness of Data Management & Governance tools.

    Working experience with Hadoop, HDFS, Sqoop, Hive, Python, and Spark is desired.

    Experience working on Agile projects.

    Requirements

    Data engineering, AWS, data lakes/data warehouses