
Member of technical staff (Inference)

About H: H exists to push the boundaries of superintelligence with agentic AI. By automating complex, multi-step tasks typically performed by humans, AI agents will help unlock full human potential.

H is hiring the world’s best AI talent, seeking those who are dedicated as much to building safely and responsibly as to advancing disruptive agentic capabilities. We promote a mindset of openness, learning and collaboration, where everyone has something to contribute.

Holistic, Humanist, Humble.


About the Team: The Inference team develops and enhances the inference stack for serving H-models that power our agent technology. The team focuses on optimizing hardware utilization to reach high throughput, low latency and cost efficiency in order to deliver a seamless user experience.

Role Description:

  • Develop scalable, low-latency, and cost-effective inference pipelines

  • Optimize model performance (memory usage, throughput, and latency) using advanced techniques such as distributed computing, model compression, quantization, and caching mechanisms

  • Develop specialized GPU kernels for performance-critical tasks like attention mechanisms, matrix multiplications, etc.

  • Collaborate with H research teams on model architectures to enhance efficiency during inference

  • Review state-of-the-art papers to improve memory usage, throughput, and latency (FlashAttention, PagedAttention, continuous batching, etc.)

  • Prioritize and implement state-of-the-art inference techniques
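To make the caching bullet above concrete: KV caching is the core idea behind PagedAttention-style memory management. During autoregressive decoding, each token's keys and values are computed once and appended to a cache, so attention at step t reuses all previous entries instead of recomputing them. The sketch below is a minimal, framework-free illustration (all names are invented for the example, not part of H's actual stack):

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention (no masking, batch of 1)."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

class KVCache:
    """Append-only key/value cache for autoregressive decoding."""
    def __init__(self, head_dim):
        self.k = np.empty((0, head_dim))
        self.v = np.empty((0, head_dim))

    def step(self, q_new, k_new, v_new):
        # Store the new token's K/V once; reuse all previous entries.
        self.k = np.vstack([self.k, k_new])
        self.v = np.vstack([self.v, v_new])
        return attention(q_new, self.k, self.v)

rng = np.random.default_rng(0)
cache = KVCache(head_dim=8)
for t in range(4):  # decode 4 tokens
    q = rng.standard_normal((1, 8))
    k = rng.standard_normal((1, 8))
    v = rng.standard_normal((1, 8))
    out = cache.step(q, k, v)
print(cache.k.shape)  # cache now holds K for all 4 decoded tokens
```

Production systems (vLLM's PagedAttention, for instance) extend this idea by allocating the cache in fixed-size blocks so memory can be shared and reclaimed across requests.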

Requirements:

  • Technical skills:

    • MS or PhD in Computer Science, Machine Learning or related fields

    • Proficient in at least one of the following programming languages: Python, Rust or C/C++

    • Experience with GPU programming frameworks such as CUDA, OpenAI Triton, Metal, etc.

    • Experience in model compression and quantization techniques

  • Soft skills:

    • Collaborative mindset, thriving in dynamic, multidisciplinary teams

    • Strong communication and presentation skills

    • Eager to explore new challenges

  • Bonuses:

    • Experience with LLM serving frameworks such as vLLM, TensorRT-LLM, SGLang, llama.cpp, etc.

    • Experience with CUDA kernel programming and NCCL

    • Experience with deep learning inference frameworks (PyTorch/ExecuTorch, ONNX Runtime, GGML, etc.)

Location:

  • H's teams are distributed throughout France, the UK, and the US

  • This role has the potential to be fully remote or hybrid for candidates based in cities where we have an office - currently Paris and London

  • The final decision for this will lie with the hiring manager for each individual role

What We Offer:

  • Join the exciting journey of shaping the future of AI, and be part of the early days of one of the hottest AI startups

  • Collaborate with a fun, dynamic and multicultural team, working alongside world-class AI talent in a highly collaborative environment

  • Enjoy a competitive salary

  • Unlock opportunities for professional growth, continuous learning, and career development

If you want to change the status quo in AI, join us.

Average salary estimate

$150,000 / year (est.)
min: $120,000
max: $180,000


What You Should Know About Member of technical staff (Inference), H Company

Are you ready to join a groundbreaking team at H as a Member of Technical Staff (Inference)? At H, we're on a mission to redefine the limits of superintelligence with agentic AI, working to automate complex tasks typically performed by humans. The Inference team is crucial to this mission, dedicated to developing and enhancing the inference stack that powers our agent technology. In this role, you'll develop scalable, low-latency inference pipelines and optimize model performance through advanced techniques, all while collaborating with top-notch research teams. Your expertise in GPU programming and model optimization will be vital as you create specialized GPU kernels for performance-critical tasks. We're looking for curious minds with a strong collaborative spirit; if you have an MS or PhD in Computer Science or Machine Learning and experience in languages like Python, Rust, or C/C++, you might be the perfect fit. At H, every team member contributes to shaping the future of AI in an open, learning-focused environment. We offer a competitive salary and ample opportunities for professional growth. If you're excited to work alongside world-class talent and tackle the challenges of agentic AI, join us in transforming the way the world interacts with technology!

Frequently Asked Questions (FAQs) for Member of technical staff (Inference) Role at H Company
What are the responsibilities of a Member of Technical Staff (Inference) at H?

As a Member of Technical Staff (Inference) at H, you will be responsible for developing scalable, low-latency inference pipelines that enhance the performance of our AI models. Your role will involve optimizing model performance by focusing on memory usage, throughput, and latency through techniques like distributed computing and model compression. Additionally, you will create specialized GPU kernels for tasks that demand high performance and collaborate with research teams to implement state-of-the-art inference techniques.

What qualifications do I need to apply for the Member of Technical Staff (Inference) position at H?

To be considered for the Member of Technical Staff (Inference) position at H, you should hold a Master's or PhD in Computer Science, Machine Learning, or a related field. Proficiency in programming languages such as Python, Rust, or C/C++ is essential, along with experience in GPU programming with tools like CUDA and Open AI Triton. Familiarity with model compression and quantization techniques will give you an edge, as well as a collaborative mindset and strong communication skills.

Can I work remotely as a Member of Technical Staff (Inference) at H?

Yes, the position of Member of Technical Staff (Inference) at H offers flexibility for remote or hybrid work arrangements. While our teams are based in France, the UK, and the US, this role can be fully remote or hybrid if you're located in cities where we have offices, such as Paris or London. The final decision regarding the work arrangement will be made by the hiring manager.

What does the team culture look like for a Member of Technical Staff (Inference) at H?

At H, the culture is centered around openness, learning, and collaboration. As a Member of Technical Staff (Inference), you'll be joining a fun and dynamic multidisciplinary team that values every contribution. We believe that each team member has something unique to offer, leading to a culture of continuous learning, exploration of new challenges, and innovative thinking.

What opportunities for growth are available for the Member of Technical Staff (Inference) at H?

H is dedicated to supporting professional growth and career development for our team members. As a Member of Technical Staff (Inference), you'll find ample opportunities for continuous learning through collaboration with world-class AI talent. We encourage exploring various challenges and provide resources for you to enhance your skills and advance your career in the rapidly evolving field of AI.

Common Interview Questions for Member of technical staff (Inference)
How do you optimize the performance of AI models?

When answering this question, highlight your understanding of different optimization techniques such as model compression, quantization, and using distributed computing. Discuss how you might analyze existing models for performance bottlenecks and how you would implement solutions to improve memory usage, throughput, and latency effectively.
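As a concrete talking point for this question, post-training quantization can be sketched in a few lines. The example below is an illustrative sketch of symmetric per-tensor int8 quantization (shapes and names are invented for the example, not any specific framework's API): weights are scaled into the int8 range, stored at a quarter of the float32 size, and dequantized with a bounded rounding error.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight matrix."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()
print(q.nbytes, w.nbytes)  # 4096 16384 — a 4x memory reduction
```

In an interview, it helps to note the trade-off explicitly: the worst-case rounding error is half a quantization step (scale / 2), which is why per-channel scales and calibration data are used in practice to keep accuracy loss small.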

Can you explain what GPU programming is and why it is important for inference?

In your response, explain that GPU programming utilizes graphics processing units to handle complex computations required for inference in AI models. It's important because GPUs can significantly accelerate the processing time for tasks like matrix multiplications and model evaluations compared to traditional CPU processing. Reference CUDA as a key framework for this.
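The throughput argument can be demonstrated even on a CPU. The sketch below (purely illustrative) compares a naive Python triple loop against a single vectorized BLAS call for the same matrix product; a GPU kernel takes the same idea of expressing the computation as one massively parallel operation much further.

```python
import time
import numpy as np

n = 100
rng = np.random.default_rng(2)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

# Naive triple loop: one scalar multiply-add at a time.
t0 = time.perf_counter()
c_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        s = 0.0
        for k in range(n):
            s += a[i, k] * b[k, j]
        c_loop[i, j] = s
t_loop = time.perf_counter() - t0

# Vectorized BLAS call: the whole product as one parallel-friendly kernel.
t0 = time.perf_counter()
c_blas = a @ b
t_blas = time.perf_counter() - t0

print(f"speedup ~{t_loop / t_blas:.0f}x")
```

The same gap, scaled up, is why matrix multiplications and attention are written as dedicated GPU kernels rather than element-by-element loops.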

What experience do you have with model optimization techniques?

Share specific experiences regarding your work with optimization techniques such as model compression and quantization. Provide examples where you successfully implemented these techniques to reduce model size or increase throughput, highlighting the impact of these changes on application performance.

Describe a complex problem you've solved in a collaborative team environment.

Illustrate a specific scenario where you worked with a multidisciplinary team to tackle a significant technical challenge. Focus on your role in brainstorming solutions and how your contributions, along with your team’s collaboration, led to a successful and effective resolution of the problem.

How do you stay current with new technologies and trends in AI?

Discuss various methods you use to stay informed, such as reading industry publications, following relevant blogs, attending conferences, or participating in online forums. Mention any specific papers or conferences that have inspired your work in AI to reflect your commitment to continual learning.

What is your experience with specific inference frameworks like PyTorch or ONNX Runtime?

Provide insights into your hands-on experience with popular deep learning frameworks. Describe projects where you've implemented these frameworks to develop or optimize inference processes, emphasizing any performance improvements achieved and your understanding of their functionalities.

Discuss a time you implemented a state-of-the-art technique in a project.

Choose a compelling example where you applied a recent advanced technique in AI. Detail your thought process and the outcomes of using this technique, illustrating how it improved the project's results and contributed to achieving its goals.

How do you approach debugging a complex inference pipeline?

Share your systematic approach to debugging by emphasizing the importance of understanding the entire pipeline's architecture. Discuss tools or methods you use for monitoring performance and identifying bottlenecks, then outline how you collaboratively resolve issues that arise.
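One simple instrumentation pattern worth mentioning here is per-stage wall-clock timing. The sketch below uses a hypothetical three-stage text-generation pipeline (stage names and timings are invented for the example) to show how accumulated timings point at the dominant bottleneck:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Accumulate wall-clock time spent in one pipeline stage."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - t0

# Hypothetical stages of a single text-generation request.
with stage("tokenize"):
    tokens = list("hello world")
with stage("decode"):
    time.sleep(0.02)  # stand-in for the model's forward passes
with stage("detokenize"):
    text = "".join(tokens)

slowest = max(timings, key=timings.get)
print(slowest)  # the decode stage dominates, so effort goes there first
```

In a real pipeline the same pattern would be backed by a profiler or tracing system, but the principle — measure each stage before optimizing any of them — is what interviewers usually want to hear.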

What programming languages are you proficient in and how do you incorporate them into your work?

Mention your proficiency in programming languages relevant to the role, such as Python, Rust, or C/C++. Describe specific projects where you've effectively utilized these languages to achieve project goals, illustrating your versatility and application of coding skills in AI developments.

What is your overall philosophy regarding collaboration and teamwork in technical environments?

Explain that you value open communication, knowledge sharing, and a willingness to learn from others. Discuss how your collaborative mindset has led to innovative solutions and strengthened team dynamics in past projects, emphasizing the importance of working together in complex technical settings.

EMPLOYMENT TYPE
Full-time, remote
DATE POSTED
January 9, 2025
