Big Data Engineer, Hadoop, PySpark, Hive - Chennai, Tamil Nadu
1 month ago

Job summary
Working at Citi is far more than just a job. A career with us means joining a team of more than 230,000 dedicated people from around the globe. At Citi, you'll have the opportunity to grow your career and make a real impact.
Job description
Similar jobs
Big Data Engineer, Hadoop, PySpark, Hive
1 month ago
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. · The overall objective of this rol ...
Big Data Engineer, Hadoop, PySpark, Hive
1 month ago
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements · Analyze information and make evaluative judgements to recommend solutions and improvements · Conduct testing ...
Big Data Engineer, Hadoop, PySpark, Hive
1 month ago
Job summary · This role involves contributing to applications systems analysis and programming activities by identifying system enhancements, implementing solutions, solving complex issues using business processes and industry standards, conducting testing and debugging, and writing basi ...
Data Engineer
3 days ago
Data engineer with experience in Hadoop and PySpark required. · ...
Data Engineer
1 month ago
Data engineer with experience in PySpark and Cloudera Data Platform required for a position based in Chennai. · ...
Java + Bigdata
3 days ago
Job summary · A Big Data Engineer role. · Responsibilities: Experience with Java + Bigdata as the minimum required skill. · Qualifications: B.Tech/B.E. in Any Specialization. · Skills: Microservices, Spring Boot, API, Bigdata-Hive, Spark, PySpark. · ...
AWS Data Architect
1 month ago
Job summary · TCS is hiring for an AWS Data Architect role. · Design & implement ETL/data pipelines and a Data Lake/Data Warehouse in AWS. · Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS data and an ...
Big Data Developer
3 weeks ago
We are looking for a Big Data Engineer with strong experience in SQL, Hive, ETL pipelines, PySpark, and GCP to design and build scalable data solutions for large, complex datasets. · Develop and optimize Big Data pipelines using SQL, Hive, PySpark, and ETL frameworks. · Build, maintain ...
Urgent Opening For Data Engineering
1 month ago
A Data Engineer (DE) Consultant is responsible for designing, developing and maintaining data assets and related products by liaising with multiple stakeholders. · ...
Big Data Developer
3 weeks ago
We are looking for a Big Data Developer with experience in Pyspark/Spark and Scala. · ...
Data Engineer – PySpark
1 month ago
Data Engineer – PySpark: Design and implement data architecture, data strategy, and data roadmaps for large-scale systems. · Migrate traditional data warehousing to Big Data platforms. · Work on data modeling (Star/Snowflake schema). · Ensure data quality, metadata management, sec ...
Data Engineer
1 week ago
Job summary · The Data Engineer role at the company involves designing, developing and maintaining scalable data pipelines that ensure high data quality and availability across the organization. · Responsibilities – Data Pipeline Development: Design, develop, and maintain highly scala ...
Google Cloud
1 month ago
We are looking for energetic, high-performing and highly skilled data engineers to help shape our technology and product roadmap. · Develop and maintain large-scale data processing pipelines using PySpark, Dataproc, BigQuery and SQL. · Use BigQuery and Dataproc to migrate existin ...
Urgent requirement For GCP Big Data engineer
2 weeks ago
We are urgently looking for high-performing GCP Data Engineers to join a critical global data engineering program focused on large-scale marketing and analytics platforms. · Design, develop, and maintain large-scale data pipelines using PySpark, BigQuery SQL & DataProc · Migrate ...
Data Engineer
1 month ago
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. · Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform. · Im ...
Data Engineering
1 month ago
Work with stakeholders to understand the data requirements to design, develop, and maintain complex ETL processes. · Create the data integration and data diagram documentation. · Lead the data validation, UAT and regression test for new data asset creation. · Familiarity with ...
GCP Data Engineer
3 weeks ago
We are looking for energetic, high-performing and highly skilled data engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Global Campaign Tracking (GCT) team under the Enterprise Personalization Portfolio focused on deliverin ...
TCS offers a space to explore varied technologies in Chennai. · Spark, Pyspark, Hive, HBase ...