Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.
We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.
We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.
The Role
• Lead the design, creation, and optimization of datasets for training and evaluating multimodal models across diverse modalities, including audio, text, video, and images.
• Develop strategies for curating, aligning, and augmenting multimodal datasets to address challenges in synchronization, variability, and scalability.
• Design innovative methods for data augmentation, synthetic data generation, and cross-modal sampling to enhance the diversity and robustness of datasets.
• Create datasets tailored for specific multimodal tasks, such as audio-visual speech recognition, text-to-video generation, or cross-modal retrieval, with attention to real-world deployment needs.
• Collaborate closely with researchers and engineers to ensure datasets are optimized for target architectures, training pipelines, and task objectives.
• Build scalable pipelines for multimodal data processing, annotation, and validation to support research and production workflows.
What We’re Looking For
• Expertise in multimodal data curation and processing, with a deep understanding of challenges in combining diverse data types like audio, text, images, and video.
• Proficiency in tools and libraries for handling specific modalities, such as librosa (audio), OpenCV (video), and Hugging Face (text).
• Familiarity with data alignment techniques, including time synchronization for audio and video, embedding alignment for cross-modal learning, and temporal consistency checks (see the sketch after this list).
• Strong understanding of multimodal dataset design principles, including methods for ensuring data diversity, sufficiency, and relevance for targeted applications.
• Programming expertise in Python and experience with frameworks like PyTorch or TensorFlow for building multimodal data pipelines.
• Comfortable with large-scale data processing and distributed systems for multimodal dataset storage, processing, and management.
• A collaborative mindset with the ability to work cross-functionally with researchers, engineers, and product teams to align data strategies with project goals.
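To give a flavor of the audio-video time synchronization mentioned above, here is a minimal sketch, assuming a hypothetical recording split into `clip.mp4` (frames) and `clip.wav` (the matching audio track). It pairs each decoded frame with the slice of waveform it spans using OpenCV and librosa; it is an illustration of the kind of work involved, not Cartesia's actual pipeline.

```python
# Minimal sketch: pair video frames with their corresponding audio windows.
# Assumes hypothetical paths "clip.mp4" and "clip.wav" holding the same recording.
import cv2          # video decoding
import librosa      # audio loading


def paired_frames_and_audio(video_path, audio_path, sr=16000):
    """Yield (frame, audio_window) pairs aligned by timestamp."""
    audio, _ = librosa.load(audio_path, sr=sr, mono=True)

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    samples_per_frame = int(sr / fps)          # audio samples spanned by one frame

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                             # end of video stream
            break
        start = frame_idx * samples_per_frame
        window = audio[start:start + samples_per_frame]
        if len(window) < samples_per_frame:    # audio track ended first
            break
        yield frame, window
        frame_idx += 1
    cap.release()


# Example usage:
# for frame, audio_window in paired_frames_and_audio("clip.mp4", "clip.wav"):
#     ...  # feed the aligned pair into an annotation or training pipeline
```

In practice such a pipeline would also check for drift between the two streams and batch the pairs for a target training setup, which is where the temporal consistency checks and framework experience described above come in.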
Nice-to-Haves
• Experience in creating synthetic multimodal datasets using generative models, simulation environments, or advanced augmentation techniques.
• Background in annotating and aligning multimodal datasets for tasks such as audio-visual speech recognition, video captioning, or multimodal reasoning.
• Early-stage startup experience or a proven track record of building datasets for cutting-edge research in fast-paced environments.
Our culture
🏢 We’re an in-person team based out of San Francisco. We love being in the office, hanging out together and learning from each other every day.
🚢 We ship fast. All of our work is novel and cutting-edge, and execution speed is paramount. We have a high bar, and we don’t sacrifice quality or design along the way.
🤝 We support each other. We have an open and inclusive culture that’s focused on giving everyone the resources they need to succeed.
Our perks
🍽 Lunch, dinner and snacks at the office.
🏥 Fully covered medical, dental, and vision insurance for employees.
🏦 401(k).
✈️ Relocation and immigration support.
🦖 Your own personal Yoshi.