- Experience in database development, data integration, and data migration.
- Good experience (3+ years) in development using databases such as Oracle, MS SQL Server, or Teradata.
- At least 2 years of hands-on experience with Amazon Redshift.
- Strong knowledge of the AWS environment and its services, including an understanding of S3 storage.
- Experience integrating data from multiple sources.
- Knowledge of various ETL techniques and frameworks.
- Proficient in working with ETL tools such as Pentaho, ODI, or similar.
- Hands-on experience with data warehouse (DW) and BI tools is mandatory.
- Ability to write complex stored procedures and functions on multiple databases.
- Extensive SQL and NoSQL programming skills, including database performance tuning.
- Good knowledge of scripting languages such as Python and shell scripting.
- Strong knowledge of multiple cloud technologies, including VPC, EC2, S3, Amazon API Gateway, DynamoDB, SimpleDB, and AWS Route 53.
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
- Experience with Hadoop v2 and Spark.
- Experience in real-time streaming data processing with Spark/Kafka/Kinesis.
- Manage the Hadoop cluster and all its included services, and resolve any ongoing issues with operating the cluster.
- Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB.
- Interface with onsite/offshore teams to gather, understand, and translate requirements related to ETL and Big Data processing into the delivery process.
- Work closely with other ETL Engineers to provide technical requirements and implementation details.
- Develop ETL mappings and scripts.
- Deploy, run, and debug the ETL system using the config scripts built by the ETL Engineering team.
- Ensure continuous improvement in data engineering excellence by implementing best practices for coding, testing, and deploying.
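For illustration, the ETL mapping and scripting work described above can be sketched in Python. The source fields, target schema, and mapping below are hypothetical examples, not part of any specific employer's pipeline.

```python
from datetime import datetime

# Hypothetical mapping from raw source fields to warehouse column names.
FIELD_MAP = {"cust_id": "customer_id", "amt": "amount_usd", "ts": "loaded_at"}

def transform(record: dict) -> dict:
    """Apply an ETL mapping: rename fields, cast the amount to float,
    and parse the date string into a date object."""
    out = {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}
    out["amount_usd"] = float(out["amount_usd"])
    out["loaded_at"] = datetime.strptime(out["loaded_at"], "%Y-%m-%d").date()
    return out

# Example: one raw row extracted from a source system.
rows = [{"cust_id": "C1", "amt": "19.99", "ts": "2024-01-15"}]
print([transform(r) for r in rows])
```

In a production pipeline the same mapping-plus-cast pattern would typically be expressed in an ETL tool (Pentaho, ODI) or a Spark job rather than plain Python, but the field-level logic is the same.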
ETL Developer - Jaipur/Udaipur/Rajasthan/Karnataka/Bangalore/Mysore, India - ATech
Found in: Talent IN 2A C2 - 2 days ago