Hadoop Developer||Chennai OR Bangalore|| 4 to 9 Years - Capgemini
Description
Job Description
Big Data Developer (3 – 10 Years)

Primary Skills
- Good understanding of concurrent software systems and of building them in a way that is scalable, maintainable, and robust
- Experience in designing application solutions in the Hadoop ecosystem
- Deep understanding of the concepts in Hive, HDFS, YARN, Spark, Spark SQL, Scala, and PySpark
- HDFS file formats and their use cases (e.g. Parquet, ORC, Sequence)
- Good knowledge of data warehousing systems
- Experience in any scripting language (Shell, Python)
- HortonWorks distribution and understanding of SQL engines (Tez, MR)
- Java/REST services/Maven experience is a value addition
- Control-M development
Mandatory:
Big Data – Hive, HDFS, Spark, Scala, PySpark.
Secondary Skills
Good to have:
- Schedulers (Control-M)
- ETL tool – Dataiku
- Unix/Shell scripting
- Knowledge of integration services such as FileIT/MQ and others
- CI/CD tools – Jenkins, Jira, ADO (Azure DevOps) tools suite
- Oracle
Trade domain knowledge and experience with testing tools are also desirable.