Senior SQL Developer / Data Engineer
Engineering · Remote

At SegmentStream, we are building cutting-edge marketing technology that is changing how leading businesses analyse and optimise the performance of their online marketing campaigns.

Our SaaS platform helps advanced digital marketing teams use our sophisticated machine learning algorithms to reveal the true incremental value of each marketing channel and campaign, and automatically applies these AI insights to close the loop of data-driven marketing.

We’ve proven that our technology works and delivers the best ROAS possible for our enterprise clients across the globe. Check our latest success stories and you’ll understand why our solution is the next big thing in the digital marketing world. 

We are fortunate to be VC-backed by one of the world’s leading startup accelerators, TechStars, as well as some of the biggest names in the B2B SaaS world, including the founders of Pipedrive, Dynamic Yield, and other great companies. We are also proud to say that SegmentStream is already trusted by over 50 enterprise customers around the world, including in the UK, US, Canada, Australia and many European countries.

To keep developing our product, we are now looking for an experienced Data Engineer.

Data Warehouses
Machine Learning
Data Mining
Data Transformations
Google BigQuery
Data Pipelines

To evolve our product and scale our business, we are looking for an experienced Senior SQL Developer / Data Engineer. You will join our Engineering team and help maintain and improve our BI and data transformation architecture, machine learning algorithms, and data mining processes, delivering scalable and elegant data solutions within our core product.

What we do

  • We develop high-load data pipelines that automate data collection across hundreds of different data sources.
  • We automate our data transformation flow using our advanced Workflow Management System based on Kubernetes and Argo (the same stack used by Tesla, Nvidia, Google, GitHub, and other great companies).
  • We love Clouds. We use Google BigQuery as our primary data warehouse, and our whole infrastructure is based on the Google Cloud Platform.
  • We develop breakthrough solutions using Machine Learning to solve complex business problems and invent elegant applications for this technology.
  • We believe in using the right tool for the right task.
  • We are proud of the code we write. We prefer clean, structured, customisable and scalable solutions that fit all clients over ad-hoc, quick-and-dirty solutions delivered for just one client.
  • We love it when our products drive revenue for our customers: complex under the hood, but very simple for the end user.
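As a loose illustration of the kind of transformation work this involves (all table and column names below are invented for the sketch, and in-memory SQLite stands in for BigQuery), a single pipeline step might aggregate raw event rows into a session-level summary table:

```python
import sqlite3

# Hypothetical example: one "transform" step of a data pipeline that
# materialises a per-session summary from raw event-level rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (session_id TEXT, event_type TEXT, revenue REAL);
    INSERT INTO raw_events VALUES
        ('s1', 'pageview', 0.0),
        ('s1', 'purchase', 49.0),
        ('s2', 'pageview', 0.0);
""")

# Aggregate event-level rows into session-level facts.
conn.execute("""
    CREATE TABLE session_summary AS
    SELECT session_id,
           COUNT(*) AS events,
           SUM(CASE WHEN event_type = 'purchase' THEN 1 ELSE 0 END) AS purchases,
           SUM(revenue) AS revenue
    FROM raw_events
    GROUP BY session_id
""")

rows = conn.execute(
    "SELECT * FROM session_summary ORDER BY session_id"
).fetchall()
print(rows)  # [('s1', 2, 1, 49.0), ('s2', 1, 0, 0.0)]
```

In a production pipeline, a step like this would run as a scheduled, idempotent job against the warehouse rather than an in-memory database.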

You fit us if

  • You can turn complex business requirements into a working product that our customers will love.
  • You are proud of the SQL code that you write, but at the same time remain pragmatic and self-critical.
  • You know when to refactor and when to release.
  • You are inspired by the search for elegant solutions for complex technical problems.
  • You are passionate about Data, AI and ML.
  • You love elegant and scalable solutions and hate dirty ad-hoc work.
  • You are focused, motivated, independent and able to complete the job, no matter how difficult the task.
  • You’re empathetic, patient and happy to help your teammates grow.

Examples of future challenges

  • Design the metadata for our various ETL processes.
  • Troubleshoot ETL processes and resolve issues effectively.
  • Analyse our data warehouse architecture goals and identify the skill requirements needed to meet them.
  • Improve the data schema of our Data Warehouse to make it more scalable and cheaper to process the data.
  • Prepare designs for database systems and recommend improvements for performance.
  • Prepare a data transformation that allows you to migrate from a legacy schema to a new one.
  • Develop various ETL processes and prepare OLAP cubes.
  • Provide support to all data warehouse initiatives.
  • Prepare reusable and customisable data transformations for visualising user-level metrics like LTV, CAC, and Retention Rate.
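To give a flavour of that last point, here is a hedged sketch of a user-level LTV and retention transformation. The `orders` table, its columns, and the 7-day retention window are all assumptions made up for the example, with in-memory SQLite standing in for BigQuery:

```python
import sqlite3

# Hypothetical data: per-user orders with a day offset and an amount.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (user_id TEXT, order_day INTEGER, amount REAL);
    INSERT INTO orders VALUES
        ('u1', 0, 10.0), ('u1', 5, 20.0),  -- returns within 7 days
        ('u2', 0, 15.0),                   -- never returns
        ('u3', 0,  5.0), ('u3', 9, 30.0);  -- returns after the window
""")

# Per-user LTV plus a week-1 retention flag; a window function finds
# each user's first order day so the flag can compare against it.
metrics = conn.execute("""
    SELECT user_id,
           SUM(amount) AS ltv,
           MAX(CASE WHEN order_day > first_day
                     AND order_day <= first_day + 7
                    THEN 1 ELSE 0 END) AS retained_w1
    FROM (SELECT o.*,
                 MIN(order_day) OVER (PARTITION BY user_id) AS first_day
          FROM orders AS o)
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()

print(metrics)  # [('u1', 30.0, 1), ('u2', 15.0, 0), ('u3', 35.0, 0)]
```

A reusable version of this query would parameterise the retention window and feed a BI dashboard rather than printing rows.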


Requirements

  • You have a degree in Computer Science, Math, Statistics or similar.
  • Expert SQL knowledge.
  • 5+ years in a Data Warehouse environment with varied forms of data infrastructure, including relational databases, Hadoop, and Column Stores.
  • 5+ years of experience in Python, R, Scala, Java, or similar data processing programming language.
  • Experience with BI reporting tools (Tableau, QlikView, PowerBI, Looker, Google Data Studio…).
  • Experience in Digital Marketing Analytics and Attribution is a huge bonus.
  • Experience in data pipelining technologies and ETL frameworks.
  • Experience with data analysis, processing, and validation.
  • Experience with Machine Learning is a huge bonus.
  • Proven ability to write code that solves real problems.
  • Experience implementing data solutions in public cloud technologies including configuration and deployment.
  • You value teamwork and agree with the statement that “a team is a group of people who are responsible for each other’s decisions”.
  • You write fluent, error-free English.

Interested in this position?

Leave your contact details and we'll get in touch within 8 business hours.

Apply now
