Data Engineer

Leading visual platform for designing, building, and automating tasks, workflows, apps, and systems without coding skills.
Data
Mid-Level Software Engineer
Hybrid
3+ years of experience
Enterprise SaaS

Description For Data Engineer

Make is revolutionizing the automation landscape as the leading visual platform that enables anyone to design, build, and automate without coding skills. Based in Prague with a global presence, we're seeking a Data Engineer to join our three-person Data Engineering team within the Analytics department. This role combines technical expertise with business acumen, focusing on building robust data pipelines, implementing data governance practices, and driving data-driven decisions.

The position offers the opportunity to work with modern technologies such as AWS, Airflow, and Snowflake while contributing to a rapidly growing company with a diverse, multinational team. We provide comprehensive benefits, including equity, professional development, and work-life balance perks.

The ideal candidate will have 3+ years of experience, strong Python and SQL skills, and excellent communication abilities to bridge technical and business perspectives. Join us in shaping the future of automation while working in a collaborative, inclusive environment that values diversity and innovation.

Last updated 3 months ago

Responsibilities For Data Engineer

  • Design, build, and manage robust data pipelines using Airflow
  • Contribute to data platform development with custom plugins and operators
  • Manage and optimize AWS cloud-based infrastructure
  • Implement and manage infrastructure using Terraform
  • Translate business needs into data architecture and workflows
  • Design and maintain scalable data models
  • Build and optimize SQL data models
  • Implement data governance practices
  • Develop data quality checks and validation frameworks
  • Collaborate with cross-functional teams
  • Communicate technical concepts to non-technical stakeholders
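To give a concrete flavor of the data-quality work mentioned above, here is a minimal, illustrative sketch of the kind of validation check a pipeline might run before loading data. All names (the check functions, columns, and sample rows) are hypothetical and not part of Make's actual stack:

```python
# Hypothetical sketch of a tiny data-quality validation pass.
# Column names and sample data are invented for illustration.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(d for d in dupes if d is not None)

def run_checks(rows):
    """Run all checks and collect failures into a report dict."""
    return {
        "null_user_ids": check_not_null(rows, "user_id"),
        "duplicate_user_ids": check_unique(rows, "user_id"),
    }

if __name__ == "__main__":
    sample = [
        {"user_id": 1, "plan": "pro"},
        {"user_id": 1, "plan": "free"},   # duplicate id
        {"user_id": None, "plan": "pro"}, # missing id
    ]
    print(run_checks(sample))
```

In practice a team like this would likely wire such checks into Airflow tasks that fail the run (or alert) when the report is non-empty, rather than running them ad hoc.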

Requirements For Data Engineer

  • 3+ years of experience in data engineering roles
  • Proficiency in Python with production-grade coding ability
  • Experience with AWS cloud services and Airflow
  • Expertise in SQL and analytical data warehouses (Snowflake)
  • Strong data modeling skills
  • Deep understanding of data governance practices
  • Experience with data quality frameworks
  • Excellent communication skills
  • Project management abilities
  • Experience working with business teams
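The "SQL and data modeling" requirements above refer to designing analytical schemas and the queries that run on them. As a hedged illustration only, here is a toy star-schema-style model built in SQLite (a real warehouse here would be Snowflake; all table and column names are invented):

```python
import sqlite3

# Illustrative sketch: a tiny star-schema-style data model.
# Table/column names are hypothetical; this is not Make's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id INTEGER PRIMARY KEY,
        country     TEXT NOT NULL
    );
    CREATE TABLE fact_run (
        run_id      INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES dim_customer(customer_id),
        operations  INTEGER NOT NULL
    );
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                 [(1, "CZ"), (2, "US")])
conn.executemany("INSERT INTO fact_run VALUES (?, ?, ?)",
                 [(10, 1, 5), (11, 1, 7), (12, 2, 3)])

# A typical analytical query over the model: operations per country.
rows = conn.execute("""
    SELECT c.country, SUM(f.operations) AS total_ops
    FROM fact_run f
    JOIN dim_customer c USING (customer_id)
    GROUP BY c.country
    ORDER BY c.country
""").fetchall()
print(rows)  # → [('CZ', 12), ('US', 3)]
```

The fact/dimension split shown here is the standard pattern behind "scalable data models": facts grow with usage, dimensions stay small and describe the entities being measured.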

Benefits For Data Engineer

Equity
Medical Insurance
Vision Insurance
Dental Insurance
Parental Leave
Education Budget
  • RSU grant
  • Annual bonus
  • Learning & Development plan
  • 2 Company Learning Days per year
  • 3 Company Impact Days per year
  • Notebook/MacBook and 34" curved monitor
  • 25 days of vacation
  • 4 sick days
  • 10 care days
  • Extended parental leave (3-6 months)
  • RSU grant for newborn child
  • Life insurance
  • Benefit Plus Cafeteria (including MultiSport Card)
  • Remote working allowance
  • Office snacks and meals
  • Flexible working hours
  • Team buildings and company events
