Software Engineer - data orchestration

Overview

We are opening the search for a critical role at Parable, and hiring a Software Engineer focused on Data Orchestration.

This person will play an essential role in building the data infrastructure that transforms how companies understand and optimize their most precious resource - time.

As a key member of our data platform team, you'll design and implement the scalable data orchestration systems that power our AI-driven insights, working directly with our ML and AI Engineering teams to ensure data flows seamlessly throughout our platform.

If you're excited about building sophisticated data systems while working with seasoned entrepreneurs on a mission to make time matter in a world that hijacks our attention, we'd love to talk.

This role is for someone who:

  • Is passionate about building robust, scalable data systems. You're not just a developer - you're an architect who thinks deeply about data flows, pipeline efficiency, and system reliability. You've spent years building data infrastructure, and you're constantly exploring new approaches and technologies.

  • Combines technical excellence with business impact. You can architect complex data orchestration systems and write efficient code, but you never lose sight of what truly matters - enabling research teams to deliver insights to customers. You're as comfortable diving deep into technical specifications as you are collaborating with ML engineers to understand their data processing needs.

  • Has deep expertise in data engineering. You understand the intricacies of building reliable data pipelines at scale, with experience in modern data processing frameworks like PySpark and Polars. You have a knack for solving complex data integration challenges and a passion for data quality and integrity.

  • Is a lean experimenter at heart. You believe in shipping to learn, but you also know how to build for scale. You have a track record of delivering results in one-third the time that most competent engineers think possible, not by cutting corners, but through smart architectural decisions and iterative development.

  • Exercises extreme ownership. You take full responsibility for your work, cast no blame, and make no excuses. When issues arise, you're the first to identify solutions rather than point fingers. You see it as your obligation to challenge decisions when you disagree, and to invite scrutiny of your own ideas.

You will be responsible for:

  • Working closely with ML and AI Engineering teams to design, build, and maintain orchestration solutions and pipelines that enable ML/AI teams to self-serve the development and deployment of data flows at scale.

  • Ensuring data integrity, quality, privacy, security, and accessibility to internal and external clients

  • Participating in the development of robust systems for data ingestion, transformation, and delivery across our platform

  • Creating efficient data workflows that optimize for performance, resource utilization, and ease of use by AI/ML teams.

  • Implementing monitoring and observability solutions for data pipelines to ensure reliability

  • Researching and experimenting with new data platform technologies and solutions

  • Establishing best practices for data orchestration and pipeline development

  • Collaborating with cross-functional teams to understand data requirements and deliver solutions

  • Contributing to our infrastructure-as-code practices on Google Cloud Platform
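The orchestration work described above can be sketched, in miniature, with Python's standard-library `graphlib`: declare tasks and their dependencies, then run them in dependency order. The task names and bodies here are hypothetical placeholders, not Parable's actual pipeline.

```python
from graphlib import TopologicalSorter

# Toy DAG: each task maps to the tasks it depends on.
# Names are illustrative (ingest -> validate -> transform -> publish).
tasks = {
    "ingest": [],              # pull raw data from a source
    "validate": ["ingest"],    # check schema and row counts
    "transform": ["validate"], # clean and reshape
    "publish": ["transform"],  # write to the data lake
}

def run(name: str) -> None:
    # Stand-in for real task logic (e.g. a Spark job or SQL load).
    pass

def execute(dag: dict) -> list[str]:
    """Run tasks in an order that respects the declared dependencies."""
    order = list(TopologicalSorter(dag).static_order())
    for name in order:
        run(name)
    return order

print(execute(tasks))  # ['ingest', 'validate', 'transform', 'publish']
```

Real orchestrators add retries, scheduling, and parallelism on top, but the dependency-graph core is the same idea.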

In your first 3 months, you'll:

  • Work with our Data Platform Team + ML Team to build highly-scalable data pipelines, data lakes, and orchestration services

  • Enable the ML and AI Engineering teams to deploy their solutions with reliable and efficient data processing workflows

  • Help lay the groundwork for a scalable and secure data practice

  • Write production-grade code in Python, Rust, and SQL

  • Contribute to our Google Cloud Platform infrastructure using Infrastructure as Code

  • Implement monitoring and alerting for critical data pipelines

  • Experiment rapidly to deliver learnings and results in the first month

  • Help foster a community of technical and professional development
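The monitoring and alerting item above can, in its simplest form, be a timing-and-logging wrapper around each pipeline step. The step name and latency threshold below are illustrative assumptions, not a description of Parable's stack.

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

@contextmanager
def monitored(step: str, alert_threshold_s: float = 5.0):
    """Log a step's duration; escalate to warning/error on slowness or failure."""
    start = time.monotonic()
    try:
        yield
    except Exception:
        # Failed steps are logged with a traceback, then re-raised so the
        # orchestrator can retry or alert.
        log.exception("step %s failed after %.2fs", step, time.monotonic() - start)
        raise
    else:
        elapsed = time.monotonic() - start
        if elapsed > alert_threshold_s:
            log.warning("step %s slow: %.2fs", step, elapsed)
        else:
            log.info("step %s ok: %.2fs", step, elapsed)

with monitored("transform"):
    rows = [i * 2 for i in range(1000)]  # stand-in for real work
```

In production the log lines would feed a metrics/alerting backend rather than stdout, but the shape of the instrumentation is the same.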

Requirements:

  • 5+ years of experience building enterprise-grade data products and systems

  • Strong expertise in data orchestration frameworks and technologies

  • Demonstrated experience with PySpark, Polars, data lakes, and distributed data processing concepts

  • Proficiency in Python and/or Rust for production pipeline code

  • Experience connecting and integrating external data sources, specifically SaaS APIs

  • Familiarity with cloud platforms, particularly Google Cloud Platform

  • Knowledge of data modeling, schema design, and data governance principles

  • Experience with containerization and infrastructure-as-code

  • Bachelor's degree in Computer Science, Machine Learning, Information Science, or related field preferred

Average salary estimate

$135,000 / year (est.)
min: $120,000
max: $150,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

What You Should Know About Software Engineer - data orchestration, Parable

Welcome to Parable, where we're on a mission to redefine how organizations harness their most vital resource: time! As a Software Engineer specializing in Data Orchestration, you'll play a crucial role in building robust data infrastructure that will empower our AI-driven insights. Your expertise will directly impact how companies manage and optimize data flows across our innovative platform. You'll work hand-in-hand with our talented ML and AI Engineering teams to ensure seamless data orchestration. If you're passionate about constructing cutting-edge data systems and collaborating with a dynamic team of entrepreneurs focused on making a real difference, we're eager to hear from you! This role suits someone who loves building scalable data systems, is deeply knowledgeable about data engineering, and thrives in a collaborative environment while taking full ownership of their work. In your first few months, you'll have the opportunity to create efficient data workflows, establish best practices, and contribute to our infrastructure-as-code efforts on Google Cloud Platform. If you're excited about diving into the intricacies of data processing and making a meaningful impact, this Software Engineer position at Parable could be the perfect fit for you!

Frequently Asked Questions (FAQs) for Software Engineer - data orchestration Role at Parable
What responsibilities can a Software Engineer - Data Orchestration expect at Parable?

As a Software Engineer focused on Data Orchestration at Parable, your responsibilities will include collaborating closely with ML and AI engineering teams to design and maintain orchestration solutions, ensuring data integrity, quality, and accessibility, and participating in developing robust data ingestion and transformation systems. You'll also implement monitoring solutions for data pipelines and research new technologies to enhance our data infrastructure.

What qualifications are required for the Software Engineer - Data Orchestration role at Parable?

To be considered for the Software Engineer - Data Orchestration role at Parable, candidates should have at least 5 years of experience building enterprise-grade data products, strong expertise in data orchestration frameworks, and proficiency in programming languages like Python and/or Rust. Familiarity with cloud platforms, particularly Google Cloud Platform, and experience with data processing frameworks like PySpark is also essential.

How does the Software Engineer - Data Orchestration support ML and AI teams at Parable?

The Software Engineer in Data Orchestration at Parable plays a vital role in enabling ML and AI teams by designing scalable data pipelines and orchestration services. This role ensures that teams can easily deploy their solutions with reliable data processing workflows, thus empowering them to focus on delivering AI-driven insights effectively.

What key skills are important for a Software Engineer - Data Orchestration at Parable?

Key skills for a Software Engineer - Data Orchestration at Parable include deep knowledge of data engineering principles, proficiency in developing data pipelines, strong coding skills in Python and/or Rust, and experience with cloud technologies, particularly Google Cloud Platform. Additionally, strong analytical and problem-solving skills are essential for navigating complex data integration challenges.

What is the work culture like for a Software Engineer - Data Orchestration at Parable?

At Parable, the work culture is collaborative and innovative. As a Software Engineer in Data Orchestration, you will be part of a team that encourages experimentation and values technical and professional development. We believe in extreme ownership, which means you’ll have the autonomy to make impactful decisions and contribute to our mission of making time matter.

Common Interview Questions for Software Engineer - data orchestration
Can you explain your experience with data orchestration frameworks?

When answering this question, detail the specific frameworks you've worked with, highlighting any notable projects. Emphasize your understanding of how these frameworks facilitate data pipeline management and mention any challenges you've faced and overcome.

How do you ensure data quality and integrity in your projects?

Discuss the practices you implement to verify data accuracy, such as validation checks, automated testing, and monitoring strategies to detect issues early. Provide examples of how these practices have improved data reliability in your previous roles.
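The validation checks this answer mentions can be as simple as a table of per-field rules applied to each row before it enters a pipeline. The field names and rules below are hypothetical examples, not a specific schema.

```python
# Per-field validation rules; each maps a field to a predicate.
RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "signup_ts": lambda v: isinstance(v, str) and len(v) == 10,  # "YYYY-MM-DD"
}

def validate(row: dict) -> list[str]:
    """Return the fields that are missing or fail their rule (empty == valid)."""
    return [field for field, ok in RULES.items()
            if field not in row or not ok(row[field])]

good = {"user_id": 7, "email": "a@b.com", "signup_ts": "2025-03-28"}
bad = {"user_id": -1, "email": "not-an-email", "signup_ts": "2025-03-28"}
print(validate(good))  # []
print(validate(bad))   # ['user_id', 'email']
```

Wiring checks like these into automated tests and pipeline monitoring is what catches data-quality regressions early rather than in downstream reports.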

What programming languages are you proficient in for building data pipelines?

Clearly mention the programming languages you have expertise in, especially Python and Rust if applicable. Share specific examples of how you've used these languages to streamline data processes, focusing on efficiency and scalability.

Describe a time when you had to troubleshoot a complex data pipeline issue.

Use the STAR technique (Situation, Task, Action, Result) to detail the scenario. Explain your approach to diagnosing the problem, the steps you took to resolve it, and the positive impact your solution had on the project.

What experience do you have with containerization and infrastructure-as-code?

Explain your familiarity with containerization tools like Docker and orchestration platforms such as Kubernetes. Discuss how utilizing infrastructure-as-code practices has streamlined deployment processes in your projects.

How do you approach collaboration with cross-functional teams?

Emphasize the importance of communication and understanding team objectives. Share methods you use to gather data requirements and ensure alignment between technical and non-technical stakeholders to achieve project goals.

Can you provide an example of a successful data model you designed?

Describe a specific project where you designed a data model, focusing on why the design was effective, what challenges it solved, and how it supported operational needs or analytics for the organization.

How do you stay updated on new data technologies and trends?

Discuss your strategies for professional development, such as participating in online courses, attending conferences, and engaging with community forums or technical blogs to stay informed about industry advancements.

What is your experience working with data lakes and distributed processing systems?

Share your experiences with data lakes and the technologies you’ve utilized, such as AWS S3 or Google BigQuery. Highlight how these systems enable efficient handling of large datasets and provide examples of successful implementations.

Have you ever implemented monitoring solutions for data pipelines, and how did you go about it?

Discuss the monitoring tools you’ve used and how you implemented alerting systems to track data integrity and pipeline performance. Share a specific instance where monitoring revealed issues that you resolved proactively.



EMPLOYMENT TYPE
Full-time, on-site
DATE POSTED
March 28, 2025
