Hippocratic AI is building a safety-focused large language model (LLM) for the healthcare industry. Our team, comprising ex-researchers from Microsoft, Meta, NVIDIA, Apple, Stanford, Johns Hopkins, and Hugging Face, is reinventing the next generation of foundation model training and alignment to create AI-powered conversational agents for real-time patient-AI interactions.
We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:
Extensive hands-on experience with state-of-the-art inference optimization techniques
A track record of deploying efficient, scalable LLM systems in production environments
Design and implement multi-node serving architectures for distributed LLM inference
Optimize multi-LoRA serving systems
Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality
Implement speculative decoding and other latency optimization strategies
Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases
Continuously benchmark and improve system performance across various deployment scenarios and GPU types
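To make the latency work above concrete, here is a toy sketch of greedy speculative decoding. It is an illustration only, not our production implementation: `target` and `draft` are stand-in callables mapping a token list to the next token id, not real models.

```python
def greedy_decode(model, prompt, max_new):
    # Baseline: one target-model call per generated token.
    tokens = list(prompt)
    for _ in range(max_new):
        tokens.append(model(tokens))
    return tokens

def speculative_decode(target, draft, prompt, k=4, max_new=16):
    # Sketch of greedy speculative decoding: a cheap draft model
    # proposes k tokens; the target verifies them and the longest
    # agreeing prefix is kept, so several tokens can be accepted
    # per target pass instead of one.
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1. Draft proposes k tokens autoregressively (cheap).
        ctx = list(tokens)
        proposal = []
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target verifies; in a real system all k positions are
        #    scored in a single batched forward pass.
        accepted = []
        ctx = list(tokens)
        for t in proposal:
            t_star = target(ctx)
            if t_star == t:
                accepted.append(t)
                ctx.append(t)
            else:
                # First disagreement: emit the target's token, drop the rest.
                accepted.append(t_star)
                break
        budget = max_new - (len(tokens) - len(prompt))
        tokens.extend(accepted[:budget])
    return tokens
```

With a deterministic target, the output matches plain greedy decoding token for token; the win is wall-clock latency, since a run of accepted tokens costs one target pass per round rather than one per token.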
2+ years of experience optimizing LLM inference systems at scale
Proven expertise with distributed serving architectures for large language models
Hands-on experience implementing quantization techniques for transformer models
Strong understanding of modern inference optimization methods, including:
Speculative decoding techniques with draft models
EAGLE-style speculative decoding approaches
Proficiency in Python and C++
Experience with CUDA programming and GPU optimization (familiarity required, expert-level not necessary)
Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM
Experience with custom CUDA kernels
Track record of deploying inference systems in production environments
Deep understanding of systems-level performance optimization for inference workloads
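As a concrete illustration of the quantization item above, here is a minimal per-tensor symmetric quantizer in plain Python. This is a sketch of the core idea only; production FP4/FP6 paths use hardware-specific formats, per-group scales, and calibration.

```python
def quantize_symmetric(weights, bits=4):
    # Per-tensor symmetric quantization sketch: a single scale maps
    # floats onto signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1].
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero tensor
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Reconstruction; per-element round-trip error is bounded by scale / 2.
    return [x * scale for x in q]
```

Real low-bit schemes refine this with per-channel or per-group scales to preserve model quality, but the round-trip error bound above is the fundamental footprint-versus-accuracy trade-off.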
Our team is pushing the boundaries of what's possible with LLM deployment. If you're passionate about making state-of-the-art language models more efficient and accessible, we'd love to hear from you!
Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.
World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.
For more information, visit www.HippocraticAI.com.
We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.
1. Polaris: A Safety-focused LLM Constellation Architecture for Healthcare, https://arxiv.org/abs/2403.13313
2. Polaris 2: https://www.hippocraticai.com/polaris2
3. Personalized Interactions: https://www.hippocraticai.com/personalized-interactions
4. Human Touch in AI: https://www.hippocraticai.com/the-human-touch-in-ai
5. Empathetic Intelligence: https://www.hippocraticai.com/empathetic-intelligence
6. Polaris 1: https://www.hippocraticai.com/research/polaris
7. Research and clinical blogs: https://www.hippocraticai.com/research