
Senior Data Integration Engineer

Syndigo enables clients to deliver better eCommerce experiences through accurate data management and integration.
  • Category: Data
  • Level: Senior Software Engineer
  • Work mode: Hybrid
  • Experience: 5+ years

Description For Senior Data Integration Engineer

Syndigo is seeking a Senior Data Integration Engineer to join its team in Bangalore, India. This role is central to architecting and implementing data ingestion, validation, and transformation pipelines. The ideal candidate will have 5+ years of experience developing large-scale data pipelines in cloud environments, with expertise in Scala, Python, and Spark SQL. Experience with Databricks, Azure cloud services, and ETL/ELT patterns is required. The position offers a hybrid work environment and the opportunity to work on innovative analytics solutions. Syndigo values diversity and is committed to creating an inclusive workplace.


Responsibilities For Senior Data Integration Engineer

  • Architect and implement data ingestion, validation, and transformation pipelines
  • Design and maintain batch and streaming integrations across various data domains and platforms
  • Take ownership in building solutions and proposing architectural designs
  • Manage code deployment to various environments
  • Work with stakeholders to define and develop data ingestion, validation, and transformation pipelines
  • Troubleshoot data pipelines and resolve issues in alignment with SDLC
  • Estimate, track, and communicate status of assigned items to stakeholders
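The ingest → validate → transform flow described above can be sketched in plain Scala. This is a simplified, hypothetical stand-in (no Spark or Databricks dependency, and the record fields are invented for illustration): each record is validated, transformed, and routed to either a clean output or an error channel, mirroring how a production pipeline would separate good rows from quarantined ones.

```scala
// Hypothetical raw and cleaned record shapes for illustration only.
case class RawEvent(id: String, amount: String, currency: String)
case class CleanEvent(id: String, amountCents: Long, currency: String)

// Validation stage: reject records that fail basic checks.
def validate(e: RawEvent): Either[String, RawEvent] =
  if (e.id.isEmpty) Left("missing id")
  else if (e.currency.length != 3) Left(s"bad currency for ${e.id}")
  else Right(e)

// Transformation stage: parse and normalize fields.
def transform(e: RawEvent): Either[String, CleanEvent] =
  e.amount.toDoubleOption match {
    case Some(a) => Right(CleanEvent(e.id, math.round(a * 100), e.currency.toUpperCase))
    case None    => Left(s"unparseable amount for ${e.id}")
  }

// Pipeline: chain the stages, splitting clean output from errors.
def runPipeline(batch: Seq[RawEvent]): (Seq[CleanEvent], Seq[String]) = {
  val results = batch.map(e => validate(e).flatMap(transform))
  (results.collect { case Right(c) => c },
   results.collect { case Left(err) => err })
}
```

In a Spark job the same stages would typically become DataFrame transformations, with rejected rows written to a quarantine table rather than collected in memory.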

Requirements For Senior Data Integration Engineer

  • 5+ years of experience in developing and architecting large scale data pipelines in a cloud environment
  • Demonstrated expertise in Scala or Python (Scala preferred), including object-oriented programming, and Spark SQL
  • Experience with Databricks, including Delta Lake
  • Experience with Azure and cloud environments, including Azure Data Lake Storage (Gen2), Azure Blob Storage, Azure Tables, Azure SQL Database, Azure Data Factory
  • Experience with ETL/ELT patterns, preferably using Azure Data Factory and Databricks jobs
  • Fundamental knowledge of distributed data processing and storage
  • Fundamental knowledge of working with structured, unstructured, and semi-structured data
  • Excellent analytical and problem-solving skills
  • Ability to effectively manage time and adjust to changing priorities
  • Bachelor's degree preferred, but not required
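One recurring task behind the "semi-structured data" requirement is turning nested records (e.g. parsed JSON) into flat, column-like keys. A minimal plain-Scala sketch, with no Spark dependency and an invented input shape, might look like this; in Databricks the equivalent would usually be done on nested structs via Spark SQL.

```scala
// Recursively flatten a nested Map (as produced by a JSON parser)
// into dot-separated key paths, e.g. "user.geo.city".
def flatten(rec: Map[String, Any], prefix: String = ""): Map[String, Any] =
  rec.flatMap {
    case (k, v: Map[String, Any] @unchecked) => flatten(v, s"$prefix$k.")
    case (k, v)                              => Map(s"$prefix$k" -> v)
  }
```

The dotted paths make the output directly usable as column names when loading into a tabular store.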
