Data Engineer with Spark

MentorMate creates durable technical solutions that deliver digital transformation at scale, providing consulting, design, and engineering services globally.
Asunción, Paraguay
Data
Mid-Level Software Engineer
Hybrid
1,000 - 5,000 Employees
3+ years of experience
Enterprise SaaS

Description For Data Engineer with Spark

MentorMate, a global digital transformation consultancy, is seeking a skilled Data Engineer to join their team in Asunción, Paraguay. As part of Tietoevry Create, a team of over 10,000 experts, you'll work on impactful projects for recognizable brands. The role focuses on building and managing data pipelines using Spark and Databricks, working with both internal and third-party data sources.

The ideal candidate will bring 3+ years of data engineering experience, with expertise in Apache Spark and AWS services. You'll be responsible for developing scalable data pipelines, implementing medallion architecture, and ensuring data quality across systems. The position offers the opportunity to work with enterprise software like Salesforce and NetSuite, while utilizing modern AWS tools including S3, Redshift, and Aurora PostgreSQL.

MentorMate provides a dynamic, people-oriented environment where technology experts and leaders bring passion and knowledge to every project. The company values work-life balance, fostering a culture that embraces diversity in interests - from foodies and music buffs to sports enthusiasts. As part of MentorMate's Latin American headquarters, established in 2023, you'll contribute to the company's global technical solutions while enjoying the flexibility of a hybrid work model.

The role demands strong problem-solving abilities and excellent communication skills, as you'll collaborate with both technical and non-technical stakeholders. This position is perfect for a data engineer who wants to grow their career in a global company while working on challenging and meaningful projects that drive digital transformation at scale.


Responsibilities For Data Engineer with Spark

  • Develop and maintain scalable data pipelines using Apache Spark
  • Work within Databricks notebooks to run data processing tasks
  • Ingest data from various sources, including enterprise software such as Salesforce and NetSuite, as well as internal software systems
  • Utilize AWS tools, including S3, Redshift, and Aurora PostgreSQL, to manage and store data
  • Implement a medallion architecture for data organization and processing
  • Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions
  • Maintain and enhance existing data pipelines with a focus on performance and scalability
  • Ensure data quality, integrity, and security across all managed systems
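The medallion architecture named in the responsibilities organizes data into bronze (raw), silver (cleaned), and gold (business-level) layers. Below is a minimal conceptual sketch in pure Python, with dicts standing in for Spark DataFrames; all field names, function names, and sample values are hypothetical and not taken from the posting:

```python
# Conceptual medallion (bronze/silver/gold) pipeline sketch.
# Plain Python stands in for Spark DataFrames; names are illustrative only.

def bronze_ingest(raw_records):
    """Bronze layer: land records as-is, tagging the originating source."""
    return [{**r, "_source": r.get("_source", "unknown")} for r in raw_records]

def silver_clean(bronze):
    """Silver layer: drop malformed rows and normalize field types."""
    cleaned = []
    for r in bronze:
        if r.get("account_id") is None:
            continue  # discard rows missing the join key
        cleaned.append({**r, "amount": float(r.get("amount", 0))})
    return cleaned

def gold_aggregate(silver):
    """Gold layer: business-level aggregate (total amount per account)."""
    totals = {}
    for r in silver:
        totals[r["account_id"]] = totals.get(r["account_id"], 0.0) + r["amount"]
    return totals

raw = [
    {"account_id": "A1", "amount": "10.5", "_source": "salesforce"},
    {"account_id": "A1", "amount": "4.5", "_source": "netsuite"},
    {"account_id": None, "amount": "99"},  # malformed: filtered in silver
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'A1': 15.0}
```

In a Spark/Databricks setting, each layer would typically be a DataFrame transformation persisted as a table, but the layering idea is the same: raw landing, cleansing, then aggregation for consumers.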

Requirements For Data Engineer with Spark

Key skills: Python, PostgreSQL
  • 3+ years of experience in Data Engineering
  • Proven experience as a Data Engineer with a strong focus on Apache Spark
  • Experience with AWS services such as S3, Redshift, and Aurora PostgreSQL
  • Strong knowledge of ingesting and processing data from diverse sources
  • Good understanding and practical experience with medallion architecture
  • Experience with modern SDLC practices, CI/CD, and source control management (SCM)
  • Excellent problem-solving skills and ability to work independently or as part of a team
  • Strong communication skills to collaborate effectively with technical and non-technical stakeholders
  • Experience working with Databricks for data processing (advantage)
