Azure Databricks Engineer - Bengaluru, India - codersbrain

No more applications are being accepted for this job.

Description
We have an opening for an Azure Databricks Engineer.

Experience : 6 to 12 yrs
Location : Bangalore
Interview : Face-to-face interview on 2nd and 3rd March

Mandatory Skills :
- Azure Data Factory
- Python
- PySpark
- SQL
- Azure Data Lake

Job Description :
- Around 7 to 9 years of IT experience, with at least 5 years in ETL/pipeline development using tools such as Azure Databricks/Apache Spark and Azure Data Factory, with development expertise in batch and real-time data integration
- Experience programming in Python
- RDBMS knowledge and experience writing stored procedures
- Experience writing Bash and PowerShell scripts
- Experience with data ingestion, preparation, integration, and operationalization techniques to optimally address data requirements
- Experience with cloud data warehouses such as Azure Synapse and Snowflake
- Experience with orchestration tools, Azure DevOps, and GitHub
- Experience building end-to-end architectures for data lakes, data warehouses, and data marts
- Experience with relational data processing technologies such as MS SQL, Delta Lake, Spark SQL, and SQL Server
- Ability to own end-to-end development, including coding, testing, debugging, and deployment
- Extensive knowledge of ETL and data warehousing concepts, strategies, and methodologies
- Experience working with structured and unstructured data
- Familiarity with Azure services such as Azure Functions, Azure Data Lake Store, and Azure Cosmos DB
- Ability to provide forward-thinking solutions in data and analytics
- Must be team-oriented with strong collaboration, prioritization, and adaptability skills
- Excellent written and verbal communication skills, including presentation skills