Data Engineer

Healthcare organization providing comprehensive medical services and healthcare solutions.

Category: Data · Level: Mid-Level Software Engineer · Location type: In-Person · Industry: Healthcare

Description For Data Engineer

Flagler Health is seeking a skilled Data Engineer to join its engineering team in New York City. This role focuses on building and maintaining data infrastructure using the Databricks platform and MongoDB. The ideal candidate will be responsible for developing and optimizing data pipelines, writing efficient Python code, and managing MongoDB databases. They will work with modern big data processing technologies and collaborate with cross-functional teams to deliver data solutions.

The position requires expertise in Databricks, Spark, Python programming, and MongoDB, with a strong foundation in data architecture and distributed systems. The role offers opportunities to work on large-scale data processing challenges and implement modern data engineering solutions. The successful candidate will be part of a dynamic team, contributing to the development of robust data infrastructure and automated workflows.

This is an excellent opportunity for a data engineer who is passionate about building scalable data solutions with modern technologies. The role combines hands-on data engineering with collaborative problem-solving, making it well suited to someone who enjoys both technical challenges and cross-team work, and it offers exposure to the full range of the discipline, from pipeline development to system optimization.


Responsibilities For Data Engineer

  • Develop, manage, and optimize data pipelines on the Databricks platform
  • Debug and troubleshoot Spark applications to ensure reliability and performance
  • Implement best practices for Spark compute and optimize workloads
  • Write clean, efficient, and reusable Python code using object-oriented programming principles
  • Design and build APIs to support data integration and application needs
  • Develop scripts and tools to automate data processing and workflows
  • Integrate, query, and manage data within MongoDB
  • Ensure efficient storage and retrieval processes tailored to application requirements
  • Optimize MongoDB performance for large-scale data handling
  • Work closely with data scientists, analysts, and other stakeholders
  • Proactively identify and address technical challenges related to data processing and system design
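
As a rough illustration of the Python responsibilities above (reusable, object-oriented code that composes automated data-processing steps), here is a minimal, self-contained sketch. The class and field names are invented for illustration only and are not taken from the posting; a real Databricks pipeline would operate on Spark DataFrames rather than Python lists.

```python
# Hypothetical sketch of a small, composable pipeline abstraction.
# Each Step transforms a batch of records; a Pipeline chains Steps.
from abc import ABC, abstractmethod


class Step(ABC):
    """One stage of a data pipeline: takes records, returns records."""

    @abstractmethod
    def run(self, records: list[dict]) -> list[dict]:
        ...


class Normalize(Step):
    """Lower-case a text field so downstream joins stay consistent."""

    def __init__(self, field: str):
        self.field = field

    def run(self, records: list[dict]) -> list[dict]:
        return [{**r, self.field: r[self.field].lower()} for r in records]


class Pipeline:
    """Compose steps; each step's output feeds the next step's input."""

    def __init__(self, steps: list[Step]):
        self.steps = steps

    def run(self, records: list[dict]) -> list[dict]:
        for step in self.steps:
            records = step.run(records)
        return records


pipeline = Pipeline([Normalize("name")])
print(pipeline.run([{"name": "ALICE"}]))  # [{'name': 'alice'}]
```

The same shape scales up: swap the list-of-dicts for a Spark DataFrame and each `Step.run` for a DataFrame transformation, and the composition logic is unchanged.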

Requirements For Data Engineer

Key skills: Python, MongoDB
  • Proven experience working with Databricks and Spark compute
  • Proficient in Python, including object-oriented programming and API development
  • Familiarity with MongoDB, including querying, data modeling, and optimization
  • Strong problem-solving skills and ability to debug and optimize data processing tasks
  • Experience with large-scale data processing and distributed systems
  • Knowledge of other big data technologies like Delta Lake, Hadoop, or Kafka (preferred)
  • Experience with cloud platforms (AWS, Azure, or GCP) (preferred)
  • Familiarity with CI/CD pipelines and version control systems like Git (preferred)
  • Strong understanding of data architecture, ETL processes, and data warehousing concepts (preferred)
