The vast majority of enterprise data lives in files like PDFs and spreadsheets, everything from financial statements to medical records. Reducto helps AI teams turn those complex documents into LLM-ready inputs with exceptional accuracy, so they can build more reliable products while saving engineering time.
In less than a year we've scaled to 7 figures in ARR, serving customers from ambitious startups to Fortune 10 enterprises. We're now processing tens of millions of pages monthly.
Architecting and implementing robust, scalable inference systems for serving state-of-the-art AI models
Optimizing model serving infrastructure for high throughput and low latency at scale
Developing and integrating advanced inference optimization techniques
Working closely with our research team to bring cutting-edge capabilities into production
Building developer tools and infrastructure to support rapid experimentation and deployment
Philosophy: You are your own worst critic. You have a high bar for quality and don’t rest until the job is done right—no settling for 90%. We want someone who ships fast, with high agency, and who doesn't just voice problems but actively jumps in to fix them.
Experience: You have deep expertise in Python and PyTorch, with a strong foundation in low-level operating systems concepts including multi-threading, memory management, networking, storage, performance, and scale. You're experienced with modern inference systems like TGI, vLLM, TensorRT-LLM, and Optimum, and comfortable creating custom tooling for testing and optimization.
Approach: You combine technical expertise with practical problem-solving. You're methodical in debugging complex systems and can rapidly prototype and validate solutions.
Have experience with low-level systems programming (CUDA, Triton) and compiler optimization
Are passionate about open-source contributions and staying current with ML infrastructure developments
Bring practical experience with high-performance computing and distributed systems
Have worked in early-stage environments where you helped shape technical direction
Are energized by solving complex technical challenges in a collaborative environment
This is an in-person role at our office in SF. We're an early-stage company, which means the role requires working hard and moving quickly. Please only apply if that excites you.
Nearly 80% of enterprise data is in unstructured formats like PDFs
PDFs are the status quo for enterprise knowledge in nearly every industry. Insurance claims, financial statements, invoices, and health records are all stored in a structure that’s simply impractical for use in digital workflows. This isn’t an inconvenience—it’s a critical bottleneck that leads to dozens of wasted hours every week.
Traditional approaches fail at reliably extracting information in complex PDFs
OCR and even more sophisticated ML approaches work for simple text documents but are unreliable for anything more complex. Text from different columns gets jumbled together, figures are ignored, and tables are a nightmare to get right. Overcoming this usually requires a large engineering effort dedicated to building specialized pipelines for every document type you work with.
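To make that failure mode concrete, here is a minimal sketch of naive extraction with an off-the-shelf library (pypdf here; the file name is just a placeholder, not a real input). On a multi-column or table-heavy page, output like this typically interleaves columns and flattens any table structure:

```python
# Minimal sketch of naive PDF text extraction (pypdf; file name is a placeholder).
# On multi-column or table-heavy pages this tends to interleave columns and
# flatten tables, which is the failure mode described above.
from pypdf import PdfReader

reader = PdfReader("financial_statement.pdf")   # placeholder input file
page_text = reader.pages[0].extract_text()      # plain text, no layout awareness
print(page_text)
```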
Reducto breaks a document's layout into subsections and then parses each one contextually based on its content type. This is made possible by a combination of vision models, LLMs, and a suite of heuristics we've built over time (a rough sketch of the pattern follows the list below). Put simply, we can help you:
Accurately extract text and tables even with nonstandard layouts
Automatically convert graphs to tabular data and summarize images in documents
Extract important fields from complex forms with simple, natural language instructions
Build powerful retrieval pipelines using Reducto’s document metadata
Intelligently chunk information using the document’s layout data
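For illustration only, here is a rough sketch of what a layout-aware parse-then-chunk pipeline can look like. The Region and Chunk types and the per-type handlers are hypothetical stand-ins for the vision models, LLMs, and heuristics described above; this is not Reducto's actual API.

```python
# Illustrative sketch only; not Reducto's actual API. The types and handlers
# below stand in for the vision models, LLMs, and heuristics described above.
from dataclasses import dataclass

@dataclass
class Region:
    kind: str     # "text", "table", or "figure", as detected by a layout model
    page: int
    raw: str      # placeholder for the region's content in a real pipeline

@dataclass
class Chunk:
    content: str  # LLM-ready text for this region
    kind: str
    page: int

def parse_region(region: Region) -> str:
    # Each content type gets its own parsing strategy.
    if region.kind == "table":
        return f"[table as markdown]\n{region.raw}"   # stand-in for table reconstruction
    if region.kind == "figure":
        return f"[figure summary] {region.raw}"       # stand-in for chart-to-data / captioning
    return region.raw                                 # stand-in for reading-order-aware text

def chunk_document(regions: list[Region]) -> list[Chunk]:
    # Chunk along layout boundaries so each retrieval unit stays coherent.
    return [Chunk(parse_region(r), r.kind, r.page) for r in regions]

# Example: two regions from page 1 of a hypothetical financial statement.
regions = [
    Region("text", 1, "Management discussion and analysis..."),
    Region("table", 1, "Q1 | Q2 | Q3"),
]
for chunk in chunk_document(regions):
    print(chunk.kind, "->", chunk.content)
```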
Are you excited about the rapidly evolving field of machine learning and AI? Reducto is looking for a talented LLM/ML Engineer (Inference) to join our San Francisco team! We know that enterprise data is primarily locked away in complex files like PDFs and spreadsheets, and we're on a mission to unlock that data and enhance the power of AI.

As an LLM/ML Engineer at Reducto, your main focus will be architecting and implementing robust inference systems that serve our state-of-the-art AI models. You'll be optimizing our model serving infrastructure to ensure high throughput and low latency at scale. You'll collaborate closely with our research team to push the boundaries of what's possible and bring cutting-edge capabilities into production. If you have deep expertise in Python and PyTorch and are experienced with systems like TGI and TensorRT-LLM, we want to meet you! We value high-quality work: you're your own worst critic and understand the importance of delivering top-notch results without compromises. Bonus points if you have experience in low-level systems programming and are passionate about open-source contributions!

If you're ready to roll up your sleeves and contribute to solving complex technical challenges in a dynamic startup environment, Reducto is the place for you. We're excited to see your application and hopefully welcome you to our innovative team that's making strides in document data processing.