Data Engineer

Wrisk is reinventing insurance for today's digital consumer, creating a simple, transparent, and personal insurance experience through a mobile-first, frictionless platform.
New Delhi, Delhi, India
Data
Mid-Level Software Engineer
Hybrid
4+ years of experience
Finance

Description For Data Engineer

Wrisk is reinventing insurance for today's digital consumer and helping an outdated industry become relevant again. Our mobile-first, frictionless platform lets people interact with their insurance provider with ease, speed, and transparency. We're seeking a skilled Data Engineer to join our dynamic team of analytics professionals. In this role, you'll work closely with our Senior Data Engineer to expand and optimise our AWS-based data architecture, focusing on enhancing data pipelines and ensuring seamless data flow across cross-functional teams.

Key Responsibilities:

  • Develop and maintain efficient, scalable data pipeline architectures within AWS
  • Build and manage large, complex data sets
  • Identify and implement process enhancements, including automation and optimisation
  • Develop infrastructure for optimal ETL processes using AWS 'big data' technologies (see the sketch after this list)
  • Build tools to deliver actionable insights into key business metrics
  • Collaborate with data scientists and analytics experts to create and optimise tools
  • Work closely with the Senior Data Engineer to enhance system functionality and efficiency
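To make the pipeline work above concrete, here is a minimal sketch of a daily extract-transform-load job, assuming Airflow 2.x as the orchestrator (Airflow is named under Requirements). Every DAG, task, and field name below is a hypothetical illustration, not part of Wrisk's actual stack.

    # Minimal ETL sketch, assuming Airflow 2.x; all names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**context):
        # In a real pipeline this might pull raw events from S3 or an API.
        return [{"policy_id": 1, "premium": 120.0}]

    def transform(**context):
        # Reshape the upstream task's output, fetched via XCom.
        rows = context["ti"].xcom_pull(task_ids="extract")
        return [{**r, "premium_rounded": round(r["premium"], 2)} for r in rows]

    def load(**context):
        # In a real pipeline this might COPY into Redshift or upsert into Postgres.
        rows = context["ti"].xcom_pull(task_ids="transform")
        print(f"loading {len(rows)} rows")

    with DAG(
        dag_id="example_policy_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task

Keeping each step as a small, idempotent task like this is what makes a pipeline safe to re-run and easy to extend, which is the day-to-day substance of the responsibilities above.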

Requirements:

  • Minimum 4 years of experience as a Data Engineer or similar role
  • Advanced knowledge of SQL and relational databases
  • Proven experience with big data pipelines and architectures, particularly in AWS
  • Strong analytical capabilities for working with unstructured datasets
  • Expertise in data transformation, structures, and metadata management
  • Experience with message queuing, stream processing, and scalable big data stores
  • Proficiency in AWS cloud services such as EC2, EMR, RDS, and Redshift (see the sketch after this list)
  • Experience with relational SQL and NoSQL databases (e.g., Postgres, DynamoDB)
  • Skilled in data pipeline tools like Airflow, Azkaban, and Luigi
  • Strong expertise in programming languages like Python and TypeScript
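
For the AWS and SQL items above, one typical (and here purely illustrative) pattern is bulk-loading transformed files from S3 into Redshift with a COPY statement. The sketch below assumes psycopg2 for connectivity; the cluster endpoint, bucket, table, and IAM role are invented placeholders.

    # Illustrative only: bulk-load a transformed S3 dataset into Redshift.
    # Assumes psycopg2 is installed; every identifier below is a placeholder.
    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.abc123.eu-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="***",  # in practice, fetch this from a secrets manager
    )
    try:
        # `with conn` commits on success and rolls back on error.
        with conn, conn.cursor() as cur:
            # Redshift's COPY ingests S3 files in parallel across the cluster,
            # which is far faster than row-by-row INSERTs.
            cur.execute(
                """
                COPY staging.policies
                FROM 's3://example-bucket/policies/2024-01-01/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
                FORMAT AS PARQUET;
                """
            )
    finally:
        conn.close()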

Join Wrisk to be part of an innovative team changing how insurance is bought, sold, and managed, working with big brand partners to bring our unique customer experience to market.
