At MedScout, our mission is to empower MedTech commercial teams with the data, insights, and tools they need to deliver life-changing medical innovations to the patients who need them most. We’re creating a best-in-class revenue acceleration platform that unites the latest medical claims intelligence with an intuitive user experience built specifically for sales professionals at medical device and diagnostic companies.
We just closed a $15M Series A and we’re ready to grow our Engineering team. As a Data Engineer, you'll help us build and optimize the data infrastructure that processes billions of healthcare claims, turning complex data into actionable insights that drive business decisions. You'll work alongside our talented engineering team to evolve and scale our data architecture, using modern technologies like Databricks and Elasticsearch, while having significant opportunities to grow technically and drive business impact.
You will design, implement, and maintain scalable data pipelines that process large volumes of healthcare claims data using Databricks, Python, and PySpark, ensuring high data quality and optimizing performance for downstream analytics (see the sketch after this list for a flavor of this work).
You will develop processes to integrate multiple data sources, including healthcare claims databases, into a unified data model that powers MedScout's sales enablement platform.
You will work with Product, Customer Success, and Sales leaders to understand what our customers are looking to achieve with our platform and use those insights to inform and validate your thinking as you make design and implementation decisions.
You will collaborate with data scientists and analysts to implement data transformations that efficiently deliver advanced analytics, market insights, and predictive modeling capabilities for the platform.
You will troubleshoot and resolve complex data pipeline issues, optimize system performance, and contribute to the continuous improvement of MedScout's data infrastructure and engineering practices.
You will optimize workloads and cluster configurations to reduce compute costs while maintaining performance, including implementing auto-scaling policies, right-sizing clusters, and monitoring resource utilization patterns.
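To give a concrete flavor of the pipeline work above, here is a minimal PySpark sketch of one step: deduplicating raw claims and writing a curated Delta table. Every name here (raw_claims, claims_curated, claim_id, service_date) is a hypothetical placeholder for illustration, not MedScout's actual schema.

# Minimal sketch only; table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("claims-curation").getOrCreate()

# Keep the most recent record per claim_id so downstream analytics
# see exactly one row per claim.
latest_per_claim = Window.partitionBy("claim_id").orderBy(F.col("service_date").desc())

curated = (
    spark.read.table("raw_claims")
    .dropDuplicates()
    .withColumn("rn", F.row_number().over(latest_per_claim))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("service_year", F.year("service_date"))
)

# Partition by service year so typical date-bounded queries prune files.
(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("service_year")
        .saveAsTable("claims_curated"))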
You have 3+ years of experience building, maintaining, and operating data pipelines in a modern data warehouse like Databricks, Snowflake, or Amazon Redshift.
You feel confident using Python and PySpark.
You have a good understanding of data modeling and schema design, particularly in contexts involving complex relationships and high-volume data processing.
You’re an expert in data quality frameworks, including automated testing, validation, and monitoring of data pipelines (a minimal example of such a check follows this list).
You have familiarity with modern software development practices including version control (Git), CI/CD, and infrastructure as code.
You are able to work effectively with cross-functional teams, translating business requirements into technical specifications and communicating complex technical concepts to non-technical stakeholders.
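As a hedged illustration of the automated data-quality checks mentioned above, here is a minimal gate of the kind a pipeline run might enforce; the claims_curated table, claim_id column, and thresholds are assumptions for the example, not a prescribed framework.

# Illustrative data-quality gate; names and thresholds are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-dq-gate").getOrCreate()
df = spark.read.table("claims_curated")

total = df.count()
null_ids = df.filter(F.col("claim_id").isNull()).count()

# Fail the run loudly rather than letting bad data reach downstream analytics.
assert total > 0, "claims_curated is empty"
assert null_ids / total < 0.001, f"claim_id null rate too high: {null_ids}/{total}"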
At our stage, we believe how you operate is more important than what you’ll do day-to-day. For these early roles, we’re looking for individuals who strongly align with the following core values.
Effort on our inputs: We prepare diligently, leave it all on the "field", and move on quickly. Focusing on good habits and work ethic, not individual outcomes, ultimately creates a winning culture and a successful company.
Earn Trust: We keep our commitments to our customers, partners, and each other. We listen attentively, speak candidly, and treat others respectfully. We strive to demonstrate empathy, inclusion, and intellectual honesty.
Intelligence Drives Operations: We learn continuously and have the humility to quickly recognize when our assumptions are wrong so we can readjust accordingly.
Hire And Develop The Best: Good players like playing on good teams. We look to raise the bar with every hire and promotion. We work hard to identify and develop high potential.
Take Decisive Action: The only sure path to continuous improvement is a hypothesis-driven approach with a bias for speed of experimentation.
Introductory call with the VP of Engineering.
Technical Review with members of the data team.
A walkthrough of a product scenario with our Head of Product and Data Lead.
Culture interviews with both the engineering team and other cross-functional team members.
Fully covered healthcare and a great vision, dental, and 401(k) package.
You will feel heard. You will hear, "Yes, let's do that!" and then have the opportunity to execute your ideas successfully.
Remote first culture and quarterly on-sites with the rest of the MedScout Team.
We stay in nice hotels and eat well when we travel for work. No one feels like a badass walking into a Quality Inn.
Generous budget for learning and development + any tools you feel would make you more effective.
MedScout embraces diversity and equal opportunity in a serious way. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our work will be.
We will ensure that individuals with disabilities who need it are provided reasonable accommodation. We want you to be able to participate in the job application or interview process, perform essential job functions, and receive the other benefits and privileges of employment. If you require an accommodation, please let us know!