Founding Full Stack Engineer - NomadicML

About Us:

Mustafa and Varun met at Harvard, where they both did research at the intersection of computation and evaluation. Between them, they have authored multiple published papers in the machine learning domain and hold numerous patents and awards. Drawing on their experiences as tech leads at Snowflake and Lyft, they founded NomadicML to solve a critical industry challenge: bridging the performance gap between model development and production deployment.

At NomadicML, we leverage advanced techniques—such as retrieval-augmented generation, adaptive fine-tuning, and compute-accelerated inference—to significantly improve machine learning models in domains like video generation, healthcare, and autonomous systems. Backed by Pear VC and BAG VC, early investors in Doordash, Affinity, and other top Silicon Valley companies, we’re committed to building cutting-edge infrastructure that helps teams realize the full potential of their ML deployments.

About the Role:

As a Founding Full Stack Engineer, you will build and maintain the end-to-end infrastructure that makes our real-time, continuously adapting ML platform possible. You’ll architect and optimize our data ingestion pipelines—integrating Kafka and Flink for streaming—as well as robust APIs that facilitate seamless communication between front-end interfaces, ML pipelines, and underlying storage systems. By establishing strong observability practices, CI/CD tooling, and highly scalable backend services, you’ll ensure that our platform can handle dynamic loads and growing complexity without sacrificing latency or reliability.

You’ll also collaborate on research-driven experimentation. Working closely with our team, you’ll support the rapid evaluation of new models and techniques. Your backend and full-stack capabilities will create an environment where novel ML approaches can be seamlessly tested, integrated, and iterated upon. Whether it’s spinning up GPU-accelerated instances for fast inference, fine-tuning backend APIs for new embedding strategies, or streamlining data flows for model comparison experiments, your role will be pivotal in turning research insights into production-ready features.

Key Responsibilities:

  • Design and implement scalable ingestion pipelines using Kafka and Flink to handle real-time text, video, and metadata streams.

  • Build and maintain backend APIs that interface smoothly with ML components, front-end dashboards, and storage layers.

  • Integrate observability and CI/CD practices to enable quick iteration, safe rollouts, and immediate feedback loops.

  • Support the research and experimentation of new ML models, ensuring that backend services and APIs can adapt rapidly to novel requirements.

  • Collaborate with ML Engineers to ensure that infrastructure, tooling, and workflows accelerate model evolution and performance tuning.

Must Haves:

  • Strong programming skills (Python / JavaScript) and experience building backend APIs and services

  • Prior experience setting up CI/CD pipelines for ML integration

  • Understanding of ML workflow management and scaling model serving infrastructure

Nice to Haves:

  • Familiarity with containers and infrastructure-as-code tooling (Docker, Kubernetes, Terraform) and with observability tools (Grafana, Prometheus)

  • Experience integrating with GPU-accelerated platforms for low-latency inference

  • Familiarity with vector databases, embedding stores, and ML serving frameworks

  • Proficiency with distributed systems and streaming platforms (Apache Kafka, Confluent)

What We Offer:

  • Competitive compensation and equity

  • Apple equipment

  • Health, dental, and vision insurance

  • Opportunity to build foundational machine learning infrastructure from scratch and influence the product’s technical trajectory

  • Primarily in-person at our San Francisco office, with hybrid flexibility

Average salary estimate

$140,000 / year (est.)
min: $120,000
max: $160,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

What You Should Know About Founding Full Stack Engineer - NomadicML, Pear VC

At NomadicML, we're on the cutting edge of machine learning technology, and we're looking for a Founding Full Stack Engineer to join our dynamic team in San Francisco, CA. Founded by academic innovators with rich backgrounds at top tech companies like Snowflake and Lyft, NomadicML aims to bridge the gap between model development and real-world applications. As our Founding Full Stack Engineer, you will play a crucial role in building and maintaining the infrastructure that powers our continuously evolving ML platform. Imagine designing robust ingestion pipelines using Kafka and Flink to manage real-time data streams, and crafting backend APIs that connect seamlessly to front-end interfaces and underlying storage. You’ll also implement CI/CD practices that enable rapid experimentation and rollouts, facilitating a culture of innovation and agility.

This position offers you the unique opportunity to collaborate with top-tier talent in the field as you drive the evolution of our machine learning infrastructure. Whether you’re optimizing data flows for advanced model comparisons or establishing observability practices to enhance performance, your expertise will be vital in bringing groundbreaking ML solutions to life. Join us, and help set the stage for the future of machine learning as we advance into exciting new domains such as video generation and autonomous systems.

Frequently Asked Questions (FAQs) for Founding Full Stack Engineer - NomadicML Role at Pear VC
What are the key responsibilities of a Founding Full Stack Engineer at NomadicML?

As a Founding Full Stack Engineer at NomadicML, your key responsibilities include designing and implementing scalable ingestion pipelines utilizing Kafka and Flink, building robust backend APIs, integrating observability and CI/CD practices, and collaborating closely with ML engineers to ensure infrastructure supports rapid model experimentation. You’ll set the foundation for innovative machine learning applications, making an impactful contribution to the company’s vision.

What programming skills are required for the Founding Full Stack Engineer position at NomadicML?

The Founding Full Stack Engineer position at NomadicML requires strong programming skills in Python and JavaScript, complemented by prior experience in building backend APIs and services. Familiarity with continuous integration and deployment pipelines for machine learning is also critical, as this will enhance your ability to contribute to rapid model iterations and deployment.

What technology stack will a Founding Full Stack Engineer work with at NomadicML?

In the role of Founding Full Stack Engineer at NomadicML, you will work with an exciting technology stack that includes tools like Kafka and Flink for data streaming, Docker and Kubernetes for infrastructure management, and observability tools like Grafana and Prometheus. This diverse stack will empower you to build and maintain the backend infrastructure crucial for performance tuning in machine learning.

What qualifications are nice to have for the Founding Full Stack Engineer role at NomadicML?

While a strong background in Python and JavaScript is essential for the Founding Full Stack Engineer role at NomadicML, nice-to-have qualifications include familiarity with infrastructure as code (IaC) using Docker and Terraform, experience with GPU-accelerated platforms for low-latency inference, and hands-on exposure to vector databases and ML serving frameworks. These skills can enhance your impact on the team.

What opportunities for growth does the Founding Full Stack Engineer position at NomadicML offer?

The Founding Full Stack Engineer position at NomadicML offers substantial opportunities for professional growth. You will be at the forefront of building foundational machine learning infrastructure, allowing you to shape the product’s technical trajectory. Collaborating with experts in the field and engaging in innovative projects ensures your skills will expand as you contribute to cutting-edge technologies.

Common Interview Questions for Founding Full Stack Engineer - NomadicML
Can you describe your experience with building backend APIs?

When discussing your experience with backend APIs, highlight specific projects where you've designed and implemented APIs, focusing on the technology stack you've used, such as REST or GraphQL. Mention the challenges faced, how you ensured smooth integration with other systems, and any metrics that demonstrate the performance of these APIs.
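
For example, a minimal sketch of the kind of endpoint you might point to in such an answer (FastAPI is used here purely for illustration; the routes and request model are hypothetical):

    # Illustrative FastAPI service; route names and fields are hypothetical.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictRequest(BaseModel):
        text: str

    @app.get("/health")
    def health() -> dict:
        # Lightweight liveness check for load balancers and CI smoke tests.
        return {"status": "ok"}

    @app.post("/predict")
    def predict(req: PredictRequest) -> dict:
        # A real service would call the model-serving layer here;
        # the sketch just echoes the input to stay self-contained.
        return {"input": req.text, "score": 0.0}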

What strategies do you employ for ensuring system scalability?

In answering this question, share strategies such as load balancing, microservices architecture, and caching. Provide examples of how you have previously optimized systems for scalability, discussing the tools and techniques you've employed (such as Kubernetes for orchestration, or sharding and replication for database scaling) and the positive outcomes that resulted.

How do you approach CI/CD in machine learning projects?

Discuss the importance of CI/CD in ML and share your experience with tools and frameworks like Jenkins or GitHub Actions. Elaborate on how you implement automated testing, model versioning, and deployment strategies to ensure smooth updates while minimizing downtime and risks.
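
As one concrete illustration, a CI job might run a pytest-style quality gate like the sketch below before promoting a model; the metrics file path and accuracy threshold are hypothetical placeholders:

    # Illustrative CI quality gate: fail the pipeline if a candidate model
    # regresses below an agreed threshold. Path and threshold are assumptions.
    import json
    from pathlib import Path

    ACCURACY_THRESHOLD = 0.90  # assumed team-agreed minimum

    def test_candidate_model_meets_threshold():
        metrics = json.loads(Path("artifacts/metrics.json").read_text())
        assert metrics["accuracy"] >= ACCURACY_THRESHOLD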

Can you give an example of a real-time data processing project you've worked on?

Share a detailed example focusing on your role in the project, the technologies used (like Kafka, Flink, or other streaming platforms), and the challenges encountered in processing and analyzing data in real-time. Emphasize learned lessons and the impact of your work on the project outcome.
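
A minimal consumer sketch along these lines (using the confluent-kafka Python client; the broker address, topic, and consumer group are hypothetical) could look like:

    # Illustrative Kafka consumer; broker, topic, and group id are assumptions.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "metadata-workers",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["video-metadata"])

    try:
        while True:
            msg = consumer.poll(1.0)  # wait up to one second for a record
            if msg is None or msg.error():
                continue
            # A real pipeline would validate, enrich, and forward the record;
            # the sketch just decodes and prints it.
            print(msg.value().decode("utf-8"))
    finally:
        consumer.close()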

What is your experience with observability tools?

While answering, discuss specific observability tools you've utilized, such as Grafana or Prometheus. Explain their importance in maintaining system health, how you've implemented monitoring solutions, and any situations where observability helped you diagnose and resolve issues quickly.
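
If it helps to make the discussion concrete, instrumentation with the prometheus_client library can be as small as the sketch below; the metric names and port are hypothetical:

    # Illustrative Prometheus instrumentation; metric names and port are assumptions.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("inference_requests_total", "Total inference requests")
    LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

    if __name__ == "__main__":
        start_http_server(8000)  # exposes /metrics for Prometheus to scrape
        while True:
            with LATENCY.time():
                REQUESTS.inc()
                time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work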

How do you keep yourself updated with new technologies in full stack development?

Share your methods for staying current, which may include online courses, attending tech meetups, and following key influencers in the field through blogs and social media. Mention any recent technologies or frameworks you've learned and how they've enhanced your work.

What role does collaboration play in your software engineering process?

Talk about how collaboration is vital in your workflow. Discuss your experiences working within diverse teams, the tools that facilitate this collaboration (like Git or Slack), and how you leverage peer reviews to enhance code quality and foster a culture of knowledge-sharing.

Can you explain your understanding of ML workflow management?

Speak about your familiarity with ML lifecycle management tools and how you've contributed to managing the workflow, potentially mentioning platforms like MLflow or Kubeflow. Describe how you facilitate coordination among data scientists and engineers to ensure smooth transitions from model development to production.
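
For instance, experiment tracking with MLflow can be sketched as below; the experiment name, parameters, and metric values are hypothetical:

    # Illustrative MLflow tracking run; names and values are assumptions.
    import mlflow

    mlflow.set_experiment("embedding-strategy-comparison")

    with mlflow.start_run():
        mlflow.log_param("embedding_model", "candidate-v2")
        mlflow.log_param("batch_size", 64)
        # In practice these metrics would come from an evaluation job.
        mlflow.log_metric("recall_at_10", 0.82)
        mlflow.log_metric("latency_p95_ms", 37.5)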

How would you troubleshoot a performance issue in a microservices architecture?

Explain your troubleshooting approach, which might include examining service logs, monitoring performance metrics, and using profilers to identify bottlenecks. Walk the interviewer through the systematic steps you would take to isolate the problem and propose effective solutions.

What has been your experience with Docker and Kubernetes?

Mention specific projects where you've applied Docker for containerization or Kubernetes for orchestration. Highlight how these tools contributed to your success in deploying scalable applications, the benefits realized through their use, and any challenges faced during implementation.


Pear Accelerator is the best program for pre-seed and seed-stage founders to launch iconic companies from the ground up. We deliberately keep the program "small batch" to maximize the attention each founder gets from our partners. Our companies ...

EMPLOYMENT TYPE
Full-time, hybrid
DATE POSTED
December 18, 2024
