Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
The role
As a research intern, you'll work within our research team on pioneering multimodal models built on new model architectures.
Your main responsibility will be to push the quality, efficiency and capabilities of our pretrained models, in collaboration with machine learning, data and systems engineering stakeholders. You will:
implement new model backbones, architectures and training algorithms,
rapidly run and iterate on experiments and ablations,
build training infrastructure that scales to massive multimodal datasets,
stay up-to-date on new research ideas.
Note: we don't offer part-time or remote internships.
What we’re looking for
Comfortable navigating complex machine learning codebases.
Deep machine learning background, including a strong grasp of fundamentals in sequence modeling, generative models and common model architecture families (RNNs, CNNs, Transformers).
Experienced model trainer; ideally you've previously written and pretrained large-scale models.
Proficient in Python and PyTorch (or a similar framework), and in tensor programming more broadly.
Familiarity with efficiency tradeoffs in designing model architectures for accelerators such as GPUs.
Pursuing an advanced degree in machine learning (PhD).
Comfortable planning out ablations and experiments, organizing results and communicating them clearly to other researchers.
Prior main-track research publications at top-tier venues such as ICML, NeurIPS, ICLR or other leading area conferences.
[bonus] Prior research experience in advancing state space models or implementing them in practice.
[bonus] Experience in optimizing model inference with CUDA, Triton or other frameworks.
We'd encourage you to apply even if you feel you don't meet every criterion listed here.
Our culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting-edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.
Our perks
🍽 Lunch, dinner and snacks at the office.
✈️ Relocation assistance.
🦖 Your own personal Yoshi.