About Trace Machina:
Trace Machina is revolutionizing the software development lifecycle with NativeLink, a high-performance build caching and remote execution system. NativeLink accelerates software compilation and testing while reducing infrastructure costs, allowing organizations to optimize their build workflows. We work with clients of all sizes to help them scale and streamline their build systems.
As part of our growth, we are looking for a talented and innovative AI Safety Researcher to join our team. In this role, you will research and help ensure the safety, robustness, and ethical integrity of our AI-driven systems, with a focus on improving the reliability of automated build and testing processes. You will be at the forefront of making sure our systems are secure, fair, and capable of performing in complex environments.
Job Description:
As an AI Safety Researcher at Trace Machina, you will contribute to the AI-powered tools and systems behind NativeLink’s build caching and remote execution platform. You will focus on designing safe, reliable, and interpretable machine learning models for optimizing build processes while mitigating potential risks that automation and AI introduce into the development lifecycle. You will collaborate closely with engineers and product teams to ensure that safety is prioritized throughout the development and deployment of AI-based solutions.
Job Responsibilities:
Conduct research into AI safety, focusing on robustness, fairness, and interpretability of machine learning models used in build systems
Develop algorithms and frameworks that ensure the safe deployment of AI-powered automation in software build, testing, and CI/CD workflows
Work closely with engineering teams to integrate AI safety mechanisms and ensure robust error handling and fault tolerance
Investigate and mitigate risks associated with AI-driven decision-making in distributed build systems, especially in mission-critical operations
Contribute to the development of safety-critical AI models for optimizing performance, caching accuracy, and task coordination across various customer environments
Conduct studies on the ethical implications of AI in software development, ensuring that algorithms used in NativeLink align with responsible AI principles
Perform in-depth testing, model validation, and risk assessment to ensure AI systems meet reliability and safety standards
Collaborate with product managers and engineers to translate research findings into practical tools and features for our customers
Required Skills and Experience:
3+ years of experience in AI/ML research, with a focus on safety, robustness, and interpretability
Strong background in machine learning theory, with practical experience implementing models and algorithms
Expertise in AI safety frameworks, fault tolerance, and risk mitigation strategies for AI systems
Experience with reinforcement learning, adversarial training, and robustness testing of AI models
Proficiency in programming languages such as Python, C++, or Go, with hands-on experience in AI development libraries (e.g., TensorFlow, PyTorch)
Strong understanding of AI ethics, fairness, and the impact of machine learning algorithms in real-world applications
Ability to identify potential safety risks in AI-driven systems and design solutions to address them
Familiarity with distributed systems, cloud infrastructure, and build/test automation frameworks
Excellent problem-solving skills, with the ability to work independently and collaboratively in a fast-paced environment
Nice to Have:
Experience with AI safety standards and best practices for building reliable AI models
Familiarity with the challenges of AI integration into large-scale software systems and CI/CD pipelines
Knowledge of adversarial machine learning techniques and safe exploration methods
Publications in AI safety, robustness, or ethics-related fields
Why Join Trace Machina?
Work at the cutting edge of AI-powered build optimization and testing tools
Contribute to the safety and reliability of AI-driven systems used by industry-leading customers
Collaborate with a dynamic, innovative team dedicated to solving complex problems
Opportunity to shape the future of AI safety in software development
Competitive salary and benefits package
Opportunities for personal and professional development
If you’re passionate about AI safety and want to help shape the future of AI-powered software development systems, we’d love to hear from you!