Data Engineer - Chennai, India - Talent500

    Description

    Title of the role: Data Engineer

    Short Description for Internal Candidates:

    Ford Pro Tech – Data Services Product Group is a global team responsible for the strategy and delivery of 360 data services that power Ford Pro's modern business intelligence and advanced analytics solutions. This enables Ford Pro to operate as an analytics-driven business in a competitive marketplace. We are seeking experienced Data Engineering professionals to join our product teams and drive the next generation of the Commercial Business. DS Cloud Data Engineers work in a product management delivery model, partnering end-to-end with business partners, analytics managers, product owners, product designers, data scientists, business intelligence engineers, and software developers to deliver analytics innovation.

    Description for Internal Candidates:

    This is an exciting, fast-paced role that requires exceptional data and Data Engineering skills, with the opportunity to make a significant impact on the Ford Pro Commercial business.

    Pro 360 Data Platforms & Engineering Product Line: Leads design and product teams that create the core operational data capability for Ford Pro Tech. This includes ETL processes, multi-tier data model design and implementation, data quality strategy and monitoring, domain expertise in the data sets in the environment, third-party data evaluations, and intelligence layers that make the data easy to use.
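    As a toy illustration of the data quality monitoring this product line owns, the stdlib-only sketch below runs two basic checks over a batch of records: duplicate primary keys and null-heavy columns. The field names, sample rows, and threshold are hypothetical, not part of the role description.

```python
# Minimal data-quality check sketch (hypothetical fields and thresholds):
# flags duplicate primary keys and null-heavy columns in a record batch.
from collections import Counter

def quality_report(rows, key_field, max_null_rate=0.1):
    """Return a dict of simple quality findings for a batch of dict rows."""
    total = len(rows)
    findings = {"duplicate_keys": [], "null_heavy_fields": []}
    if total == 0:
        return findings

    # Duplicate primary keys: any key value seen more than once.
    key_counts = Counter(row.get(key_field) for row in rows)
    findings["duplicate_keys"] = [k for k, n in key_counts.items() if n > 1]

    # Null-heavy fields: share of None values above the threshold.
    fields = {f for row in rows for f in row}
    for field in sorted(fields):
        nulls = sum(1 for row in rows if row.get(field) is None)
        if nulls / total > max_null_rate:
            findings["null_heavy_fields"].append(field)
    return findings

batch = [
    {"vehicle_id": "V1", "odometer_km": 1200},
    {"vehicle_id": "V1", "odometer_km": None},   # duplicate key, null reading
    {"vehicle_id": "V2", "odometer_km": 80},
]
print(quality_report(batch, key_field="vehicle_id"))
```

    In practice, checks like these would run inside the pipeline tooling named later in this posting and feed a monitoring dashboard rather than a print statement.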

    Responsibilities for Internal Candidates:

    As a Data Engineer, you will leverage your technical expertise in data, analytics, cloud technologies, and analytic software tools to identify the best designs, improve business processes, and generate measurable business outcomes. You will work with Data Engineering teams within D&A, across the Pro Tech portfolio, and in additional Ford organisations such as GDI&A (Global Data Insight & Analytics), Enterprise Connectivity, Ford Customer Service Division, Ford Credit, etc.

    Job Description:

    • Develop EL / ELT / ETL pipelines to make data available in the BigQuery analytical data store from disparate batch and streaming data sources for the Business Intelligence and Analytics teams.
    • Work with on-prem data sources (Hadoop, SQL Server), understand the data model and the business rules behind the data, and build data pipelines (with GCP and Informatica) for one or more Ford Pro verticals. This data will be landed in GCP BigQuery.
    • Build cloud-native services and APIs to support and expose data-driven solutions.
    • Partner closely with our data scientists to ensure the right data is made available in a timely manner to deliver compelling and insightful solutions.
    • Design, build and launch shared data services to be leveraged by the internal and external partner developer community.
    • Build out scalable data pipelines, choosing the right tool for each job; manage, optimise, and monitor data pipelines.
    • Provide extensive technical and strategic advice and guidance to key stakeholders on data transformation efforts. Understand how data is useful to the enterprise.
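    The pipeline responsibilities above follow a standard extract-load-transform shape. The stdlib-only sketch below stands in for such a pipeline: a dict plays the role of the BigQuery analytical store, and the source rows, table names, and cleansing rule are all hypothetical illustrations, not details from this posting.

```python
# Toy ELT pipeline sketch: extract from a batch source, load raw rows,
# then transform inside the "warehouse" (here just a dict of tables).
# All names are hypothetical; a real pipeline would target BigQuery.

def extract(source_rows):
    """Extract: pull raw rows from an upstream batch source as-is."""
    return list(source_rows)

def load(warehouse, table, rows):
    """Load: land raw rows in the analytical store before transforming."""
    warehouse.setdefault(table, []).extend(rows)

def transform(warehouse, raw_table, clean_table):
    """Transform: drop rows missing a vehicle_id and normalise its casing."""
    cleaned = [
        {**row, "vehicle_id": row["vehicle_id"].upper()}
        for row in warehouse.get(raw_table, [])
        if row.get("vehicle_id")
    ]
    warehouse[clean_table] = cleaned

warehouse = {}
raw = [{"vehicle_id": "v1", "fuel_l": 42.0}, {"vehicle_id": None, "fuel_l": 7.5}]
load(warehouse, "telemetry_raw", extract(raw))
transform(warehouse, "telemetry_raw", "telemetry_clean")
print(warehouse["telemetry_clean"])  # one cleaned row survives
```

    Landing raw data first and transforming inside the warehouse is what distinguishes ELT from classic ETL, and is the usual pattern when BigQuery is the analytical store.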

    Qualifications for Internal Candidates:

    Minimum Qualifications:

    • Bachelor's Degree
    • 3+ years of experience with SQL and Python
    • 2+ years of experience with GCP or AWS cloud services; strong candidates with 5+ years in a traditional data warehouse environment (ETL pipelines with Informatica) will also be considered.
    • 3+ years of experience building out data pipelines from scratch in a highly distributed and fault-tolerant manner.
    • Comfortable with a broad array of relational and non-relational databases.
    • Proven track record of building applications in a data-focused role (Cloud and Traditional Data Warehouse)

    Preferred Skills:

    • Experience with GCP cloud services including BigQuery, Cloud Composer, Dataflow, Cloud SQL, GCS, Cloud Functions and Pub/Sub.
    • Inquisitive, proactive, and interested in learning new tools and techniques.
    • Familiarity with big data and machine learning tools and platforms. Comfortable with open source technologies including Apache Spark, Hadoop, Kafka.
    • 1+ year of experience with Hive, Spark, Scala, or JavaScript.
    • Strong oral, written, and interpersonal communication skills.
    • Comfortable working in a dynamic environment where problems are not always well-defined.
    • M.S. in a science-based program and/or quantitative discipline with a technical emphasis.