Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training large-scale foundation models. Our team pairs deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.
Role responsibilities
Your main responsibility will be to push the quality, efficiency and capabilities of our pretrained models, in collaboration with a variety of machine learning, data and systems engineering stakeholders.
Implement new model backbones, architectures and training algorithms.
Rapidly run and iterate on experiments and ablations.
Build training infrastructure that scales to massive multimodal datasets.
Stay up to date on new research ideas.
What we’re looking for
Given the scale and difficulty of the problems we work on, we hold a high engineering bar at Cartesia.
Strong engineering skills, comfortable navigating complex codebases and monorepos.
Deep machine learning background, including a strong grasp of fundamentals in sequence modeling, generative models and common model architecture families (RNNs, CNNs, Transformers).
Experienced model trainer, ideally having previously written and pretrained large-scale models.
Proficient in Python and PyTorch (or a similar framework), and in tensor programming more broadly.
Familiarity with efficiency tradeoffs in designing model architectures for accelerators such as GPUs.
At least 5 years of experience implementing and training models, including time spent in advanced degrees (MS/PhD, if any).
[bonus] Prior research experience in advancing state space models or implementing them in practice.
[bonus] Experience in optimizing model inference with CUDA, Triton or other frameworks.
Even if you don’t meet every requirement above, we'd encourage you to apply.
Our culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.
Our perks
🍽 Lunch, dinner and snacks at the office.
🦷 Full health and dental benefits.
🍹 Unlimited paid time off.
✈️ Relocation assistance.
🦖 Your own personal Yoshi.