SymphonyRM helps health systems thrive in the rapidly evolving US healthcare industry by keeping patients healthy and physicians happy. By analyzing large amounts of data from many sources, we empower clients to make smarter decisions at every turn in their business. Our clients love SymphonyRM’s ability to guide them toward the next best action for both patients and physicians.
As a Senior Data Engineer, you’ll play a critical role in developing tools to automate ETL processing, assess data quality across multiple sources, and build a powerful data pipeline, while working closely with our other engineers.
We care deeply about building long-term careers and offer opportunities for our employees to grow towards project leadership, engineering management, or other roles to make a difference in the lives of patients.
You’d be a great addition if…
You have a Master’s degree in Computer Science, Statistics, or a similar field; OR a Bachelor’s degree with at least 3 years of related Python programming experience; OR 5+ years of related Python programming experience (e.g. completion of a Python-focused bootcamp).
You seek to fully understand “big data” problems, and strategize to produce efficient, workable solutions.
You are curious about emerging technologies, and like to evaluate and adapt where you see fit.
You’re motivated by high-impact projects that drive business value through automation and scaled data operations.
You’re excited to work with cross-functional teams in an agile environment.
You appreciate working with people from diverse backgrounds.
You’re excited to build tools that automate data anomaly detection and alerting
You’re ready to scale our data infrastructure and develop software that improves data processing and automation
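To give a flavor of the anomaly-detection work mentioned above, here is a minimal sketch of a z-score check over a pipeline metric; all names, values, and thresholds are illustrative, not SymphonyRM’s actual tooling:

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A deliberately simple z-score check; a production pipeline would add
    per-source baselines, seasonality handling, and alert routing.
    """
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Example: a daily row-count series with one obvious outlier.
# A looser threshold is used because the series is short.
daily_row_counts = [1000, 1020, 980, 1010, 5000, 990]
print(detect_anomalies(daily_row_counts, threshold=2.0))
```

An alerting tool would wrap a check like this around each ingested source and page the team when the returned list is non-empty.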
Bonus qualifications if…
You’ve worked on large-scale databases using cloud computing platforms like Amazon Web Services (AWS)
You have strong knowledge of SQL and relational databases
You have experience in using data visualization tools (Looker, Matplotlib, Excel, etc.)
You have experience with Apache Airflow and/or Pandas
You have academic or hands-on experience designing data pipelines and loading large datasets into databases
What You’ll Do:
You will work closely alongside a small team of engineers to drive continuous improvements to our Python-based data platform.
Write and deploy Python code to automate data ingestion using Apache Airflow.
Support internal and external stakeholders in troubleshooting and resolving issues.
Contribute new ideas to the design and development of our data infrastructure - we are always looking to improve.
Due to Covid-19, you will be working remotely for the time being. We are actively interviewing and hiring for this position based out of our Palo Alto, CA office.