About the Team
Our team is dedicated to shaping the future of artificial intelligence by equipping ChatGPT with the ability to hear, see, speak, and create visually compelling images, transforming how people interact with AI in everyday life. We prioritize safety throughout development so that our most advanced models can be deployed responsibly in real-world applications, ultimately benefiting society. This focus is central to OpenAI’s mission of building and deploying safe AGI, and it reinforces our commitment to a culture of trust and transparency.
About the Role
We are seeking a research engineer to develop new safety techniques for our state-of-the-art multimodal foundation models. In this role, you will conduct rigorous safety assessments and build methods, such as safety reward models and multimodal classifiers, that make our models intrinsically compliant with safety policies. You will also support red-teaming efforts to test the robustness of our models, collaborating closely with cross-functional teams, including safety and legal, to ensure our systems meet all safety standards and legal requirements.
The ideal candidate has a solid foundation in multimodal research and post-training techniques, with a passion for pushing boundaries and achieving tangible impact. Familiarity with large suites of metrics or human data pipelines is a plus. You should be adept at writing high-quality code, developing tools for model evaluation, and iteratively improving our metrics based on real-world feedback. Strong communication skills are essential for working effectively with both technical and non-technical stakeholders.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Build evaluation pipelines to assess risk along various axes, especially with multimodal inputs and outputs.
Implement risk mitigation techniques such as safety reward models and reinforcement learning (RL)-based approaches.
Develop and refine multimodal moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse.
Work with other safety teams within the company to iterate on content policies that effectively prevent harmful behavior.
Work with our human data team to conduct internal and external red-teaming that examines the robustness of our harm prevention systems and identifies areas for future improvement.
Write maintainable, efficient, and well-tested code as part of our evaluation libraries.
You might thrive in this role if you:
Are a collaborative team player – willing to do whatever it takes in a start-up environment.
Have experience working in complex technical environments.
Are passionate about bringing magical AI experiences to millions of users.
Enjoy diving into the subtle details of datasets and evaluations.
Have experience with multimodal research and post-training techniques.
Are very proficient in Python.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.