Raft is the intelligent logistics platform that's rewriting the technology playbook for freight forwarders and customs brokers in the automation era. A dynamic UK-based technology company with a global impact across logistics, we're searching for a Data Engineer who is excited by the prospect of working in a rapidly growing international scale-up. We have significant runway thanks to our most recent Series B funding, which we raised from some of the best investors in the space: Eight Roads (Alibaba, Spendesk, Toast), Bessemer Venture Partners (LinkedIn, Twilio, Shopify), Episode 1 (Zoopla, Betfair, Shazam) and Dynamo Ventures (Sennder, Stord, Gatik).
As a Data Engineer, you will have a significant impact on both our Engineering and Machine Learning teams, drawing on your experience and subject-matter expertise to resolve the data-centric challenges we face. You will focus on building data pipelines, storing and processing data in lakes, warehouses, and databases, and making key decisions about our data infrastructure, architecture, and related solutions.
Day-to-day you will:
- Build, maintain and expand data pipelines to efficiently automate data processing and flexibly collect data from different sources
- Design, set up, and maintain the databases, data warehouses, and data lakes that power our user-facing applications and our internal ML platform
- Build dashboards that surface analytics on raw data
- Understand the team's main challenges and apply your expertise to resolve them, implementing solutions from scratch in a collaborative setting
Requirements:
- Strong proficiency in Python and solid general programming skills, writing clean, maintainable, and scalable code
- Hands-on experience and up-to-date knowledge of current approaches to data and model versioning, data warehousing, and data processing (e.g., BigQuery, dbt, Spark)
- Experience with common data engineering tools such as Airflow and Airbyte
- Solid experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Redis)
- Experience with containerization and deployment tooling, including Docker, Kubernetes, Helm, Terraform, and cloud providers (GCP or others)
- Ability to work with a variety of data types, such as JSON, CSV, Parquet and more
- Creativity and a willingness to share ideas on pipeline architecture and the wider infrastructure
Apply because you want to:
- Work in a global market and compete with best-in-class companies at the forefront of Machine Learning and Engineering
- Contribute to a modern Product-led company where your work has real-world impact
- Gain exposure to stakeholders around the world and across a range of industries
- Work in a fast-paced, challenging tech environment that offers opportunities for professional and personal growth
- Be part of a diverse and multicultural environment