Digi Upaay - Bengaluru, India - Digi Upaay Solutions Pvt. Ltd
Description
Responsibilities:
- Build end-to-end data pipelines using HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other required components.
Preferred Skills:
- Hive, Spark, PySpark, Python, CI/CD, Kafka, MySQL, NoSQL, GCP, BigQuery
- Minimum 3 years of experience in Big Data technologies
- Hands-on experience with the Hadoop stack
- Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala, or Python.
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.
- Well-versed in, and working knowledge of, data platform-related services on AWS
- Bachelor's degree with 6 to 8 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.