Senior Data Engineer - Gurugram, India - Publicis Sapient

Description

Job Title: Senior Associate L2 - Data Engineering

Publicis Sapient Overview:

We at Publicis Sapient enable our clients to thrive in Next and to create business value through expert strategies, customer-centric experience design, and world-class product engineering. Our purpose: reimagine the way the world works to help businesses improve the daily lives of people, and the world.

In our 20+ years in IT, never before have we seen such a dire need for transformation in every major industry - from financial services to automotive, consumer products, retail, energy, and travel.

Our people - deeply skilled, bold, collaborative, flexible - thrive because of the belief that it is both our privilege and responsibility to usher our clients and the world into Next. We brave it out to go do the next, creating "what will be" from "what is," through:
- challenging boundaries,
- multidisciplinary collaboration,
- highly agile teams, and
- the power of the newest technologies and platforms.

This is the world-class engineering team where you should build your career. If that's you, come talk to us.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling as well as computation and analytics pipelines; and exposure to Hadoop ecosystem components.

Role & Responsibilities:

Your role is focused on the design, development, and delivery of solutions involving:
- Data Integration, Processing & Governance
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms

You will:
- Implement scalable architectural models for data processing and storage
- Build functionality for data ingestion from multiple heterogeneous sources in batch and real-time modes
- Build functionality for data analytics, search, and aggregation

Mandatory Experience and Competencies:
- Overall 5+ years of IT experience, with 3+ years in data-related technologies
- Minimum 2.5 years of experience in Big Data technologies, with working exposure to the related data services of at least one cloud platform (AWS / Azure / GCP)
- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines
- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed, working knowledge of data platform services on at least one cloud platform, plus IAM and data security
- Good knowledge of and hands-on experience with traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
- Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures
- Performance tuning and optimization of data pipelines
- CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality
- Cloud data specialty and other related Big Data technology certifications
- Strong written and verbal communication skills
- Articulation skills
- Good team player
- Self-starter who requires minimal oversight
- Ability to prioritize and manage multiple tasks
- Process orientation and the ability to define and set up processes