You will build and optimize the data infrastructure that fuels our machine learning systems. This includes designing high-performance pipelines for collecting, transforming, indexing, and serving massive, heterogeneous datasets—from raw web-scale data to enterprise document corpora. You’ll play a central role in architecting retrieval systems for LLMs and enabling scalable training and inference with clean, accessible, and secure data. You’ll work across:
Designing and implementing distributed data ingestion and transformation pipelines
Building retrieval and indexing systems that support RAG and other LLM-based methods
Mining and organizing large unstructured datasets, both in research and production environments
Collaborating with ML engineers, systems engineers, and DevOps to scale pipelines and improve observability
Ensuring compliance and access control in data handling, with security and auditability in mind
You’ll have impact across both research and product teams by shaping the data foundation on which intelligent systems are trained and over which they retrieve and reason. If you're excited by the challenge of high-scale, high-performance data engineering in the context of cutting-edge AI, you’ll thrive in this role.
Strong software engineering background with fluency in Python
Experience designing, building, and maintaining data pipelines in production environments
Deep understanding of data structures, storage formats, and distributed data systems
Familiarity with indexing and retrieval techniques for large-scale document corpora
Understanding of database systems (SQL and NoSQL), their internals, and performance characteristics
Strong attention to security, access controls, and compliance best practices (e.g., GDPR, SOC 2)
Excellent debugging, observability, and logging practices to support reliability at scale
Strong communication skills and experience collaborating across ML, infra, and product teams
Experience building or maintaining LLM-integrated retrieval systems (e.g., RAG pipelines)
Academic or industry background in data mining, search, recommendation systems, or information retrieval (IR)
Experience with large-scale ETL systems and tools such as Apache Beam or Spark
Familiarity with vector databases (e.g., FAISS, Weaviate, Pinecone) and embedding-based retrieval
Understanding of data validation and quality assurance in machine learning workflows
Experience working on cross-functional infra and MLOps teams
Knowledge of how data infrastructure supports training pipelines, inference serving, and feedback loops
Comfort working across raw unstructured data, structured databases, and model-ready formats
Our research methodology is to take grounded, methodical steps toward ambitious goals; deep research and engineering excellence are valued equally
We strongly value new and crazy ideas and are willing to bet big on them
We move as quickly as we can and aim to keep the barrier to impact as low as possible
We all enjoy what we do and love discussing AI
Medical, dental, vision and FSA plans
Competitive salary, equity and 401(k)
Relocation and immigration support on a case-by-case basis
On-site meals prepared by a dedicated culinary team; Thursday Happy Hours
Willingness to work in person at our office in Palo Alto
US work authorization. We will consider O-1 visa sponsorship for the right candidate.