Title: ETL Data Warehousing With GCP
Location: Bangalore/Chennai/Mumbai/Hyderabad/Pune (Hybrid Role)
Contract Type: 12 Months + Extension
Notice Period: Immediate to 15 Days

Summary JD:
- Develop EL/ELT/ETL pipelines to make data available in the BigQuery analytical data store from disparate batch and streaming data sources for the Business Intelligence and Analytics teams.
- Hands-on experience in PySpark is required for this Data Engineer role.
- Work with on-prem data sources (Hadoop, SQL Server), understand the data model and the business rules behind the data, and build data pipelines with GCP and Informatica for one or more business verticals. This data will be landed in GCP BigQuery (see the sketch after this list).
- Build cloud-native services and APIs to support and expose data-driven solutions.
- Partner closely with our data scientists to ensure the right data is made available in a timely manner to deliver compelling and insightful solutions.
- Design, build, and launch shared data services to be leveraged by the internal and external partner developer community.
- Build out scalable data pipelines and choose the right tool for the job.
- Manage, optimize, and monitor data pipelines.
- Provide extensive technical and strategic advice and guidance to key stakeholders around data transformation efforts.
- Understand how data is useful to the enterprise.
- Ability to work in a team environment, sharing ideas and working collaboratively.
- Strong organizational and analytical skills.
- Excellent interpersonal and written/verbal communication skills.
- Self-starter with the ability to take on complex projects and perform independent analysis.
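For orientation only, below is a minimal sketch of the kind of batch pipeline the role describes: reading a table from an on-prem SQL Server source over JDBC with PySpark and landing it in BigQuery via the spark-bigquery connector. All connection details, table names, and the GCS staging bucket are hypothetical placeholders, not values from this posting.

```python
# Minimal sketch: on-prem SQL Server -> PySpark -> BigQuery (batch load).
# Assumes the SQL Server JDBC driver and the spark-bigquery connector are
# already available on the cluster (e.g. supplied via --jars / --packages).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("sqlserver-to-bigquery-batch-load")
    .getOrCreate()
)

# Extract: pull the source table from SQL Server (hypothetical connection details).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "****")  # in practice, fetch from a secret manager
    .load()
)

# Transform: apply light business rules before landing (illustrative only).
cleaned_df = (
    source_df
    .dropDuplicates(["order_id"])
    .filter("order_status IS NOT NULL")
)

# Load: write to BigQuery, staging through a temporary GCS bucket.
(
    cleaned_df.write.format("bigquery")
    .option("table", "my_project.analytics.orders")     # hypothetical dataset/table
    .option("temporaryGcsBucket", "my-staging-bucket")  # hypothetical bucket
    .mode("overwrite")
    .save()
)

spark.stop()
```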
Job Type: Contractual / Temporary
Contract length: 12 months
Work Location: In person