Data Engineer

Healthcare organization providing comprehensive medical services and healthcare solutions.

Team: Data
Level: Mid-Level Software Engineer
Work arrangement: In-Person
Industry: Healthcare

Description For Data Engineer

Flagler Health is seeking a skilled Data Engineer to join its engineering team in New York City. The role focuses on building and maintaining data infrastructure on the Databricks platform and MongoDB. The ideal candidate will develop and optimize data pipelines, write efficient Python code, and manage MongoDB databases, working with modern big-data processing technologies and collaborating with cross-functional teams to deliver data solutions.

The position requires expertise in Databricks, Spark, Python programming, and MongoDB, with a strong foundation in data architecture and distributed systems. The role offers opportunities to work on large-scale data processing challenges and implement modern data engineering solutions. The successful candidate will be part of a dynamic team, contributing to the development of robust data infrastructure and automated workflows.

This is an excellent opportunity for a data engineer who is passionate about building scalable data solutions with modern technologies. The role pairs deep technical work in data engineering with collaborative problem-solving, and it spans the field from pipeline development to system optimization, offering comprehensive experience for someone who enjoys both technical challenges and cross-team collaboration.

Last updated 3 months ago

Responsibilities For Data Engineer

  • Develop, manage, and optimize data pipelines on the Databricks platform
  • Debug and troubleshoot Spark applications to ensure reliability and performance
  • Implement best practices for Spark compute and optimize workloads
  • Write clean, efficient, and reusable Python code using object-oriented programming principles
  • Design and build APIs to support data integration and application needs
  • Develop scripts and tools to automate data processing and workflows
  • Integrate, query, and manage data within MongoDB
  • Ensure efficient storage and retrieval processes tailored to application requirements
  • Optimize MongoDB performance for large-scale data handling
  • Work closely with data scientists, analysts, and other stakeholders
  • Proactively identify and address technical challenges related to data processing and system design
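To make the Python responsibilities above concrete, here is a minimal, hypothetical sketch (class and field names are invented, not from the posting) of the kind of reusable, object-oriented transformation step a pipeline on Databricks would chain together:

```python
from abc import ABC, abstractmethod
from typing import Iterable, List


class PipelineStep(ABC):
    """One reusable stage of a data pipeline (illustrative example)."""

    @abstractmethod
    def apply(self, records: Iterable[dict]) -> List[dict]:
        ...


class DropMissing(PipelineStep):
    """Filter out records missing a required field."""

    def __init__(self, field: str):
        self.field = field

    def apply(self, records):
        return [r for r in records if r.get(self.field) is not None]


class Rename(PipelineStep):
    """Rename a field in every record."""

    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def apply(self, records):
        return [
            {**{k: v for k, v in r.items() if k != self.old}, self.new: r[self.old]}
            for r in records
        ]


def run_pipeline(records, steps):
    """Apply each step in order, analogous to chained Spark transformations."""
    for step in steps:
        records = step.apply(records)
    return records


raw = [{"patient_id": 1, "dx": "A10"}, {"patient_id": None, "dx": "B20"}]
clean = run_pipeline(raw, [DropMissing("patient_id"), Rename("dx", "diagnosis")])
```

In a real Databricks job these steps would operate on Spark DataFrames rather than lists of dicts, but the same pattern of small, composable, testable stages applies.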

Requirements For Data Engineer

  • Proven experience working with Databricks and Spark compute
  • Proficient in Python, including object-oriented programming and API development
  • Familiarity with MongoDB, including querying, data modeling, and optimization
  • Strong problem-solving skills and ability to debug and optimize data processing tasks
  • Experience with large-scale data processing and distributed systems
  • Knowledge of other big data technologies like Delta Lake, Hadoop, or Kafka (preferred)
  • Experience with cloud platforms (AWS, Azure, or GCP) (preferred)
  • Familiarity with CI/CD pipelines and version control systems like Git (preferred)
  • Strong understanding of data architecture, ETL processes, and data warehousing concepts (preferred)
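As an illustration of the MongoDB querying skills listed above, here is a hypothetical aggregation pipeline (collection and field names are invented) expressed as the plain Python structure a driver such as PyMongo accepts:

```python
# A MongoDB aggregation pipeline is an ordered list of stage documents;
# with PyMongo it would be executed as db.encounters.aggregate(pipeline).
# The collection and field names below are illustrative only.
pipeline = [
    # Keep only completed encounters.
    {"$match": {"status": "closed"}},
    # Group by patient and count encounters per patient.
    {"$group": {"_id": "$patient_id", "visits": {"$sum": 1}}},
    # Surface the most frequently seen patients first.
    {"$sort": {"visits": -1}},
    {"$limit": 10},
]
```

Designing queries this way, with an early `$match` before `$group`, lets MongoDB use indexes and reduce the working set — the kind of optimization the role calls for when handling large-scale data.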
