Hadoop Developer || Chennai or Bangalore || 4 to 9 Years - Capgemini

    Capgemini

    Job Description

    Big Data Developer (3 – 10 Years)

    Primary Skills


    Good understanding of concurrent software systems and of building them in a way that is scalable, maintainable, and robust
    Experience in designing application solutions in the Hadoop ecosystem
    Deep understanding of the concepts in Hive, HDFS, YARN, Spark, Spark SQL, Scala, and PySpark
    HDFS file formats and their use cases (e.g. Parquet, ORC, Sequence)
    Good knowledge of data warehousing systems
    Experience in a scripting language (Shell, Python)
    HortonWorks distribution and understanding of SQL engines (Tez, MR)
    Java/REST services/Maven experience is a value addition
    Control-M development

    Monitoring resource utilization using the Grafana tool
    Good skillset for creating automation scripts in Jenkins, and knowledge of automating builds, test frameworks, app configuration, etc.
    Experience in implementing scalable applications with fully automated deployment and control using Bitbucket, Jenkins, ADO, etc.

    Skillset:

    Mandatory:
    Big Data – Hive, HDFS, Spark, Scala, PySpark
    Secondary Skills

    Good to have:

    Schedulers (Control-M)
    ETL tool – Dataiku
    Unix/Shell scripting
    Knowledge of integration services of FileIT/MQ and others
    CI/CD tools – Jenkins, Jira, ADO DevOps tools suite
    Oracle

    Trade domain knowledge and experience with testing tools are also a plus.