
Member of Technical Staff - Training Infrastructure Engineer

Liquid AI, an MIT spin-off, is a foundation model company headquartered in Boston, Massachusetts. Our mission is to build capable and efficient general-purpose AI systems at every scale.


Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, so that users can build, access, and control their own AI solutions. This ensures that AI is integrated meaningfully, reliably, and efficiently across enterprises. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone.


What This Role Is

 We're looking for a Training Infrastructure Engineer to design, build, and optimize the distributed systems that power our Liquid Foundation Models (LFMs). This is a highly technical role focused on creating the scalable infrastructure that enables efficient training of models across the spectrum—from compact specialized models to massive multimodal systems—while maximizing hardware utilization and minimizing training time.


You're A Great Fit If
  • You have extensive experience building distributed training infrastructure for language and multimodal models, with hands-on expertise in frameworks like PyTorch Distributed, DeepSpeed, or Megatron-LM
  • You're passionate about solving complex systems challenges in large-scale model training—from efficient multimodal data loading to sophisticated sharding strategies to robust checkpointing mechanisms
  • You have a deep understanding of hardware accelerators and networking topologies, with the ability to optimize communication patterns for different parallelism strategies
  • You're skilled at identifying and resolving performance bottlenecks in training pipelines, whether they occur in data loading, computation, or communication between nodes
  • You have experience working with diverse data types (text, images, video, audio) and can build data pipelines that handle heterogeneous inputs efficiently
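The data parallelism that frameworks like PyTorch Distributed and DeepSpeed implement rests on a simple invariant: each worker computes gradients on its own shard of the batch, and an all-reduce averages them so every replica takes an identical step. A framework-free toy sketch of that invariant (scalar model y = w * x, hypothetical shard layout; real systems do this with NCCL kernels over GPU tensors):

```python
def local_grad(w, shard):
    """Gradient of mean-squared-error loss for y = w * x on one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """What a gradient all-reduce computes: the mean across workers."""
    return sum(grads) / len(grads)

# Hypothetical batch, split into one shard per simulated worker.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
shards = [data[:2], data[2:]]

w = 0.0
g = all_reduce_mean([local_grad(w, s) for s in shards])
w -= 0.1 * g  # every replica applies the same update

# Sharding + averaging reproduces the full-batch gradient exactly
# (for equal-sized shards, by linearity of the mean).
assert g == local_grad(0.0, data)
```

Because the averaged shard gradients equal the full-batch gradient, replicas never drift apart; the engineering challenge is making that all-reduce overlap with computation at cluster scale.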


What Sets You Apart
  • You've implemented custom sharding techniques (tensor/pipeline/data parallelism) to scale training across distributed GPU clusters of varying sizes
  • You have experience optimizing data pipelines for multimodal datasets with sophisticated preprocessing requirements
  • You've built fault-tolerant checkpointing systems that can handle complex model states while minimizing training interruptions
  • You've contributed to open-source training infrastructure projects or frameworks
  • You've designed training infrastructure that works efficiently for both parameter-efficient specialized models and massive multimodal systems
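The tensor-parallel sharding mentioned above can be reduced to a toy example: split a linear layer's weight matrix column-wise across workers, let each compute its slice of the output, and concatenate the slices (an all-gather on a real cluster) to recover the full result. A pure-Python sketch under those assumptions (illustrative shapes only; real implementations shard large GPU tensors):

```python
def matmul_cols(x, cols):
    """Multiply input vector x by a weight matrix given as output columns."""
    return [sum(xi * wi for xi, wi in zip(x, col)) for col in cols]

x = [1.0, 2.0]                                                # input activations
w_cols = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]    # 4 output columns

shard_a, shard_b = w_cols[:2], w_cols[2:]                # one shard per device
out = matmul_cols(x, shard_a) + matmul_cols(x, shard_b)  # concat == all-gather

assert out == matmul_cols(x, w_cols)  # matches the unsharded layer exactly
```

Column sharding halves each device's weight memory at the cost of one collective per layer, which is why communication topology dominates the design of these strategies.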


What You'll Actually Do
  • Design and implement high-performance, scalable training infrastructure that efficiently utilizes our GPU clusters for both specialized and large-scale multimodal models
  • Build robust data loading systems that eliminate I/O bottlenecks and enable training on diverse multimodal datasets
  • Develop sophisticated checkpointing mechanisms that balance memory constraints with recovery needs across different model scales
  • Optimize communication patterns between nodes to minimize the overhead of distributed training for long-running experiments
  • Collaborate with ML engineers to implement new model architectures and training algorithms at scale
  • Create monitoring and debugging tools to ensure training stability and resource efficiency across our infrastructure
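One building block of the fault-tolerant checkpointing described above is the atomic write: a crash mid-save must never corrupt the last good checkpoint. A minimal sketch of the write-then-rename pattern (toy JSON state and a hypothetical file layout; real systems checkpoint sharded tensors, not JSON):

```python
import json
import os
import tempfile

def save_checkpoint(path, step, state):
    """Write to a temp file, then atomically swap it into place.

    os.replace is atomic on POSIX filesystems, so a crash mid-save
    cannot corrupt the last good checkpoint on disk.
    """
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    """Resume from the latest checkpoint, or start fresh if none exists."""
    if not os.path.exists(path):
        return 0, {}
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(ckpt_path, 100, {"w": 3.0})
step, state = load_checkpoint(ckpt_path)
assert (step, state) == (100, {"w": 3.0})
```

At scale the same idea applies per shard, with the added constraint that saving must not stall the training loop, hence asynchronous and distributed checkpoint writers.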


What You'll Gain
  • The opportunity to solve some of the hardest systems challenges in AI, working at the intersection of distributed systems and cutting-edge multimodal machine learning
  • Experience building infrastructure that powers the next generation of foundation models across the full spectrum of model scales
  • The satisfaction of seeing your work directly enable breakthroughs in model capabilities and performance


Average salary estimate

$140,000 / year (est.)
min: $120,000
max: $160,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

What You Should Know About Member of Technical Staff - Training Infrastructure Engineer, Liquid AI

At Liquid AI, an innovative MIT spin-off based in Boston, we are on a mission to revolutionize AI technology, and we’re looking for a talented Member of Technical Staff - Training Infrastructure Engineer to join our team. In this pivotal role, you will design, build, and optimize the intricate distributed systems that power our groundbreaking Liquid Foundation Models (LFMs). Your talents will be crucial as you tackle the exciting challenges of creating scalable infrastructure that facilitates efficient training of various models—from compact specialized ones to expansive multimodal systems. If you're passionate about maximizing hardware utilization and minimizing training times while embracing the complexities of systems challenges, this could be the perfect fit for you. You will collaborate with other ML engineers, diving deep into efficient multimodal data loading, sharding strategies, and robust checkpointing mechanisms. With your expertise in frameworks like PyTorch Distributed and DeepSpeed, you’ll enhance our training infrastructure and ensure our AI solutions are powerfully integrated into enterprises worldwide. By joining Liquid AI, you’ll not only work on cutting-edge technology but also gain invaluable experience that will shape the future of AI development and deployment. Let's build the future together!

Frequently Asked Questions (FAQs) for Member of Technical Staff - Training Infrastructure Engineer Role at Liquid AI
What are the primary responsibilities of a Member of Technical Staff - Training Infrastructure Engineer at Liquid AI?

As a Member of Technical Staff - Training Infrastructure Engineer at Liquid AI, your key responsibilities include designing and implementing high-performance, scalable training infrastructure, building data loading systems to eliminate bottlenecks, and developing efficient checkpointing mechanisms for diverse model scales. You will also optimize communication patterns between nodes, collaborate with ML engineers, and create tools for monitoring and debugging training stability and resource efficiency.

What qualifications are necessary for the Member of Technical Staff - Training Infrastructure Engineer role at Liquid AI?

To qualify for the Member of Technical Staff - Training Infrastructure Engineer position at Liquid AI, candidates need extensive experience in building distributed training infrastructure for language and multimodal models, hands-on expertise with frameworks like PyTorch Distributed and DeepSpeed, and a deep understanding of hardware accelerators. Proficiency in optimizing communication patterns and resolving performance bottlenecks is also essential.

How does the Member of Technical Staff - Training Infrastructure Engineer contribute to AI at Liquid AI?

In the role of Member of Technical Staff - Training Infrastructure Engineer, you will contribute to AI at Liquid AI by creating scalable training infrastructure that enhances the performance of Liquid Foundation Models (LFMs). Your work will directly impact the efficiency and capabilities of AI solutions, enabling breakthroughs in model training and deployment that ensure AI technology is integrated across various enterprises effectively.

What skills are considered valuable for the Member of Technical Staff - Training Infrastructure Engineer position at Liquid AI?

Key skills for the Member of Technical Staff - Training Infrastructure Engineer at Liquid AI include expertise in custom sharding techniques, experience optimizing multimodal data pipelines, background in building fault-tolerant checkpointing systems, and a track record of contributing to open-source projects. These skills are crucial for developing robust and efficient training infrastructure.

What kind of projects would the Member of Technical Staff - Training Infrastructure Engineer be working on at Liquid AI?

As a Member of Technical Staff - Training Infrastructure Engineer at Liquid AI, you will work on challenging projects that focus on designing and optimizing high-performance training infrastructure for specialized and multimodal models. Your projects will include developing advanced data loading systems, checkpointing mechanisms, and collaboration with ML engineers to implement innovative model architectures, pushing the boundaries of AI technology.

Common Interview Questions for Member of Technical Staff - Training Infrastructure Engineer
Can you explain your experience with distributed training infrastructure?

When responding to this question, highlight specific projects where you utilized frameworks like PyTorch Distributed or DeepSpeed. Discuss the challenges you faced, such as optimizing hardware utilization and minimizing communication overhead, and explain how you implemented solutions that improved overall training efficiency.

What techniques have you used to optimize data pipelines for training models?

Talk about your experiences with diverse data types and how you built data pipelines that efficiently manage both input and output. Discuss any custom implementations for sharding and preprocessing techniques that have improved data loading speeds and reduced bottlenecks in the training process.

How do you approach performance bottlenecks in training pipelines?

Explain your systematic approach to identifying the root causes of performance bottlenecks. Share examples from your work where you used profiling tools or logs to diagnose problems in data loading, computation, or communication, followed by the strategies you employed to resolve them.

What is your experience with checkpointing mechanisms in model training?

Describe your experience with designing fault-tolerant checkpointing systems. Consider discussing specific strategies you’ve implemented to minimize training interruptions while ensuring memory constraints are balanced, and provide examples of complex model states you managed effectively.

How do you ensure efficient use of hardware accelerators in your training infrastructure?

Provide details on how you assess and optimize the architecture of training infrastructure to enhance the performance of hardware accelerators. Discuss any techniques you have implemented that improved communication patterns and resource allocation across GPU clusters.

Can you give an example of working collaboratively with ML engineers?

Share a specific experience where you collaborated with ML engineers to implement new model architectures. Discuss how your insights on the training infrastructure contributed to the model's success and the overall efficiency of the training process.

What do you consider the biggest challenge in training infrastructure today?

Reflect on current industry trends and challenges you see, such as managing complexity in multimodal datasets or achieving efficiency at scale. Share your perspective on how innovative approaches can overcome these challenges, and how you’ve contributed to addressing them in your previous roles.

Describe a time when you contributed to an open-source project related to training infrastructure.

Discuss the open-source project you contributed to, detailing your specific role and what you worked on. Highlight how your contributions improved the project's functionality or usability and what you learned from collaborating with a broader community of developers.

What strategies do you use for monitoring and debugging training systems?

Talk about the tools and techniques you use to monitor system performance and debug issues effectively. Mention specific instances where you had to analyze system behavior, identify anomalies, and implement fixes to ensure stable and efficient training processes.

How important is communication in your role as a Training Infrastructure Engineer?

Emphasize the significance of communication in collaborating with diverse teams, such as ML engineers, software developers, and researchers. Share how clear communication has helped you bridge gaps in understanding technical concepts and facilitated successful project outcomes.

EMPLOYMENT TYPE: Full-time, on-site
DATE POSTED: March 20, 2025
