DataStack Jobs

Senior Data Engineer


In our mission to take care of every member as if they were our own mom or dad, we believe data is a superpower for delivering great service and clinical care. Our Data Engineering team designs and builds the infrastructure that supports this capability.

 

In this role, you will write tools that make it easier for our Data Scientists and Machine Learning Engineers to build and deploy scalable, reliable data pipelines and machine learning models. You will help develop and improve a Data Platform built with modern tools such as Snowflake, Apache Airflow, dbt, Amundsen, Kubernetes, and Terraform, and you will work on a wide range of impactful projects. Examples of our work include a change data capture (CDC) solution that streams data to Snowflake, deploying Airflow on Kubernetes, migrating our SQL pipelines to dbt, and writing a Snowflake SQL formatter and linter. We write most of our tools in Python or Go.

 

You will be joining a team that values using the right tool for the job, with an emphasis on performance, reliability, and maintainability. We believe in continuously evaluating our decisions and striving to improve as a team, and we value knowledge sharing, collaboration, and eliminating silos.


Responsibilities will include:

  • Design, develop, and maintain a robust, scalable data platform
  • Collaborate with Data Science on tooling to develop and deploy reliable data pipelines and products
  • Ensure data availability, quality, and reliability in the data platform
  • Mentor junior engineers on the team and help them level up
  • Identify areas for improvement in the platform
  • Propose new/improved solutions and lead them from design to implementation
  • Develop best practices for data modelling, pipelines, and testing


Attributes for success:

  • Experience coupled with a willingness to learn new things
  • Comfort with ambiguity and a desire to be close to business problems
  • Desire to work in a collaborative environment and improve as a team 
  • Desire to contribute individually with the ability to self-direct work


Desired skills and experience:

  • Ability to work in a fast-paced startup environment
  • 7+ years working as an individual technical contributor
  • 5+ years' experience with databases and data engineering
  • Experience working with Cloud Data Warehouses (Snowflake, BigQuery, Redshift, Firebolt, etc.)
  • Experience developing in Python or Go
  • Experience with a Workflow Orchestrator (Airflow, Luigi, Dagster, Prefect, etc.)
  • A strong understanding of data warehousing and data modelling principles

Company

Devoted Health

Location

United States (Remote)

Job type

Full-Time

Category

Data Engineering

Tags

Airflow, Snowflake, dbt, Kubernetes

Copyright © 2021
