Founding Engineer - Full Stack

About Us:

Mustafa and Varun met at Harvard, where they both did research at the intersection of computation and evaluations. Between them, they have authored multiple published papers in the machine learning domain and hold numerous patents and awards. Drawing on their experiences as tech leads at Snowflake and Lyft, they founded NomadicML to solve a critical industry challenge: bridging the performance gap between model development and production deployment.

At NomadicML, we leverage advanced techniques—such as retrieval-augmented generation, adaptive fine-tuning, and compute-accelerated inference—to significantly improve machine learning models in domains like video generation, healthcare, and autonomous systems. Backed by Pear VC and BAG VC, early investors in Doordash, Affinity, and other top Silicon Valley companies, we’re committed to building cutting-edge infrastructure that helps teams realize the full potential of their ML deployments.

About the Role:

As a Founding Software Engineer, you will build and maintain the end-to-end infrastructure that makes our real-time, continuously adapting ML platform possible. You’ll architect and optimize our data ingestion pipelines—integrating Kafka and Flink for streaming—as well as robust APIs that facilitate seamless communication between front-end interfaces, ML pipelines, and underlying storage systems. By establishing strong observability practices, CI/CD tooling, and highly scalable backend services, you’ll ensure that our platform can handle dynamic loads and growing complexity without sacrificing latency or reliability.

You’ll also collaborate on research-driven experimentation. Working closely with our team, you’ll support the rapid evaluation of new models and techniques. Your backend and full-stack capabilities will create an environment where novel ML approaches can be seamlessly tested, integrated, and iterated upon. Whether it’s spinning up GPU-accelerated instances for fast inference, fine-tuning backend APIs for new embedding strategies, or streamlining data flows for model comparison experiments, your role will be pivotal in turning research insights into production-ready features.

Key Responsibilities:

  • Design and implement scalable ingestion pipelines using Kafka and Flink to handle real-time text, video, and metadata streams.

  • Build and maintain backend APIs that interface smoothly with ML components, front-end dashboards, and storage layers.

  • Integrate observability and CI/CD practices to enable quick iteration, safe rollouts, and immediate feedback loops.

  • Support the research and experimentation of new ML models, ensuring that backend services and APIs can adapt rapidly to novel requirements.

  • Collaborate with ML Engineers to ensure that infrastructure, tooling, and workflows accelerate model evolution and performance tuning.
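The ingestion work described above boils down to a stream of small transform stages. As a toy sketch only (pure Python standard library, no actual Kafka or Flink; the field names are hypothetical), the shape of such a pipeline looks like this:

```python
import json
from typing import Iterable, Iterator

def parse_events(raw_messages: Iterable[bytes]) -> Iterator[dict]:
    """Decode raw stream messages, skipping malformed records."""
    for raw in raw_messages:
        try:
            yield json.loads(raw)
        except json.JSONDecodeError:
            # in a real pipeline this record would go to a dead-letter topic
            continue

def enrich(events: Iterable[dict]) -> Iterator[dict]:
    """Tag each event with a media type for downstream routing (hypothetical schema)."""
    for event in events:
        event["media_type"] = "video" if "frame_url" in event else "text"
        yield event

# Simulated input; a Kafka consumer would yield raw messages like these.
raw = [
    b'{"id": 1, "frame_url": "s3://bucket/frame-0001.jpg"}',
    b'not valid json',
    b'{"id": 2, "text": "hello"}',
]
processed = list(enrich(parse_events(raw)))
```

In production the `raw` list would be replaced by a Kafka consumer and the stages by Flink operators; the point is the decode, validate, and enrich flow.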

Must Haves:

  • Strong programming skills (Python/JavaScript) and experience building backend APIs and services

  • Prior experience setting up CI/CD pipelines for ML integration

  • Understanding of ML workflow management and scaling model serving infrastructure

Nice to Haves:

  • Familiarity with containerization and infrastructure-as-code tooling (Docker, Kubernetes, Terraform) and observability tools (Grafana, Prometheus)

  • Experience integrating with GPU-accelerated platforms for low-latency inference

  • Familiarity with vector databases, embedding stores, and ML serving frameworks

  • Proficiency with distributed systems and streaming platforms (e.g., Apache Kafka, Confluent)

What We Offer:

  • Competitive compensation and equity

  • Apple equipment

  • Health, dental, and vision insurance

  • Opportunity to build foundational machine learning infrastructure from scratch and influence the product’s technical trajectory

  • Primarily in-person at our San Francisco office, with hybrid flexibility

What You Should Know About the Founding Engineer - Full Stack Role at Pear VC

Are you ready to embark on an exciting journey in the world of machine learning? NomadicML is seeking a talented Founding Engineer - Full Stack to join our innovative team in San Francisco, CA. Founded by Mustafa and Varun, who combine rich experience from leading tech companies with a passion for solving industry challenges, we're on a mission to bridge the performance gap between model development and production deployment.

As a Founding Engineer, you will be instrumental in building and maintaining our real-time ML platform. You'll design scalable data ingestion pipelines with Kafka and Flink, create robust backend APIs, and establish CI/CD practices that let the platform thrive under pressure without sacrificing reliability. Collaborating closely with our research team, you'll support the evaluation of novel ML models and improve how we handle dynamic loads, turning research insights into production-ready features, whether that means deploying GPU-accelerated instances or fine-tuning our backend APIs.

We offer a competitive compensation package, great benefits, and the unique opportunity to mold the future of machine learning infrastructure. If you have strong programming skills in Python and JavaScript, a knack for building APIs, and a desire to be part of something transformative, NomadicML is the place for you. Come join us and help shape the future of machine learning!

Frequently Asked Questions (FAQs) for Founding Engineer - Full Stack Role at Pear VC
What are the main responsibilities of a Founding Engineer at NomadicML?

As a Founding Engineer at NomadicML, you'll design and implement scalable data ingestion pipelines using Kafka and Flink, build backend APIs that interface with ML components, and integrate observability and CI/CD practices to enable rapid iteration. You'll also support the experimentation of new machine learning models, ensuring infrastructure adapts to innovative requirements.

What qualifications are needed for the Founding Engineer role at NomadicML?

Candidates for the Founding Engineer position at NomadicML should possess strong programming skills in Python and JavaScript, along with experience in building backend APIs and services. Additionally, familiarity with CI/CD pipelines for ML integration and an understanding of ML workflow management are crucial for success in this role.

How does the Founding Engineer contribute to machine learning infrastructure at NomadicML?

The Founding Engineer at NomadicML plays a pivotal role in shaping the machine learning infrastructure. By architecting scalable data pipelines and robust backend APIs, you'll enable seamless integration and high performance of our ML platform, ensuring that innovative research can be turned into reliable production features.

What technologies should a candidate be familiar with for the Founding Engineer position at NomadicML?

Candidates should be familiar with technologies such as Docker, Kubernetes, and Terraform, as well as observability tools like Grafana and Prometheus. Experience integrating with GPU-accelerated platforms for low-latency inference, and knowledge of distributed systems and streaming platforms such as Apache Kafka and Confluent, are beneficial as well.

What does NomadicML provide for the Founding Engineer role?

NomadicML offers competitive compensation and equity packages, Apple equipment, and comprehensive health benefits. More importantly, as a Founding Engineer, you'll have the unique opportunity to build foundational machine learning infrastructure from scratch and influence the technical direction of the product.

Common Interview Questions for Founding Engineer - Full Stack
Can you describe your experience with building scalable data ingestion pipelines?

In your response, focus on specific projects where you used technologies like Kafka and Flink. Discuss the challenges you faced, how you addressed them, and the overall impact on the performance of the systems you worked on.

How do you ensure that your backend APIs remain efficient and reliable?

Share examples of best practices you follow, such as implementing observability tools, using rate limiting, and conducting load testing. Discuss how these practices help maintain API performance under heavy load.
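Rate limiting is a natural thing to sketch in that answer. One common approach is a token bucket: requests spend tokens, and tokens refill at a fixed rate up to a burst capacity. A minimal, self-contained Python sketch (the injectable `clock` is only there to make the behavior easy to demonstrate deterministically):

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=3, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(4)]  # burst of 3 allowed, 4th throttled
t[0] += 2.0                                 # two seconds pass -> two tokens refill
later = [bucket.allow() for _ in range(3)]  # two more allowed, then throttled
```

In a real service this logic usually sits behind a middleware keyed by client ID, but the core accounting is exactly this small.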

What is your approach to continuous integration and continuous deployment in machine learning environments?

Explain your methods for setting up CI/CD pipelines specifically for ML workflows. Combine technical detail with an explanation of how this process enables safe rollouts and quick iterations, improving productivity and reliability.

Tell us about a time you supported research-driven experimentation in machine learning.

Provide a specific example of how you collaborated with data scientists or research teams, detailing your contributions and how the results influenced the model's performance or capabilities.

What experience do you have with different machine learning frameworks and platforms?

Discuss your familiarity with various ML frameworks, any foundational projects you’ve completed, and how you choose the right tools for specific ML tasks. Mention any experiences working with model serving frameworks.

How have you handled the challenges of scaling backend infrastructure?

Share strategies you've employed to scale backend services, such as horizontal scaling techniques or microservice architecture. Discuss how these approaches improved system reliability and performance.
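One horizontal-scaling technique worth being able to sketch concretely is consistent hashing, which lets you add backend nodes while remapping only a small fraction of keys. A minimal Python illustration (the node names are made up):

```python
import bisect
import hashlib

def _h(key: str) -> int:
    """Deterministic hash of a string onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Map keys to nodes; each node gets `replicas` virtual points on the ring."""
    def __init__(self, nodes, replicas=100):
        self._ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(replicas))
        self._hashes = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        # First virtual point clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self._hashes, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["api-1", "api-2", "api-3"])
assignments = {k: ring.node_for(k) for k in ("user:42", "user:43", "user:44")}
```

The virtual replicas smooth out load imbalance; the same idea underlies sharded caches and partitioned stream consumers.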

Describe a situation where you had to work under pressure to meet tight deadlines.

Provide an example relating to a project you've worked on, detailing how you managed your time, prioritized tasks, and ensured quality despite the deadline pressure.

What role do you think observability tools play in infrastructure management?

Discuss your perspective on observability tools and their impact on understanding system performance, troubleshooting issues, and ensuring seamless operations in a production environment.

Can you explain the importance of versioning in API development?

Share the reasons for implementing API versioning, including how it helps maintain compatibility with older clients while allowing for continuous improvement and updates to the API.
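A concrete way to make this point in an interview is a tiny versioned-route dispatcher: v1 clients keep their response shape while v2 evolves independently. Purely illustrative Python (the handlers, paths, and data are made up):

```python
def get_user_v1(user_id: int) -> dict:
    """Original response shape, frozen for existing clients."""
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id: int) -> dict:
    """v2 splits the name field; old clients keep hitting /v1 unchanged."""
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

ROUTES = {
    ("v1", "users"): get_user_v1,
    ("v2", "users"): get_user_v2,
}

def dispatch(path: str) -> dict:
    """Resolve a path like '/v2/users/7' to the matching versioned handler."""
    _, version, resource, raw_id = path.split("/")
    return ROUTES[(version, resource)](int(raw_id))
```

Real frameworks express the same idea through URL prefixes or `Accept` headers, but the principle is identical: a version key selects which contract the client gets.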

How do you stay current with advancements in machine learning and full-stack development?

Elaborate on your methods for staying updated, such as attending workshops, following research publications, participating in online forums, and coding personal projects that implement the latest advancements.


Pear Accelerator is the best program for pre-seed and seed-stage founders to launch iconic companies from the ground up. We deliberately keep the program "small batch" to maximize the attention each founder gets from our partners. Our companies ...

EMPLOYMENT TYPE: Full-time, hybrid
DATE POSTED: December 17, 2024
