About the Team
The ChatGPT RLHF team is a specialized subteam within the Post-Training organization, focused on aligning ChatGPT models with user needs through Reinforcement Learning from Human Feedback (RLHF) and related approaches. Our mission is to make ChatGPT more helpful and personalized, creating a better experience by learning from large-scale user feedback. The team develops the science of reward modeling, scales feedback-driven training, and ensures our models deliver both correctness and nuanced, human-preferred behavior.
We collaborate closely with research, product, and applied teams to deliver measurable improvements in model quality and user experience. Our work directly impacts millions of users globally and contributes to OpenAI's mission of broadly distributing safe AI.
About the Role
As a Research Engineer or Scientist on the ChatGPT RLHF team, you will contribute to the development of advanced reward models and RL techniques to align ChatGPT models with user preferences. This is a dynamic role combining cutting-edge research with engineering, requiring a passion for building impactful, user-focused AI systems.
Location
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Advance research on reinforcement learning and reward modeling to enhance ChatGPT's alignment with diverse user preferences.
Build robust offline evaluations and metrics that reliably predict product impact.
Collaborate with cross-functional teams to deploy models in production and iterate quickly based on real-world feedback.
You might thrive in this role if you:
Bring 2+ years of experience in reinforcement learning, RLHF, or large-scale machine learning systems, including work on user-facing applications.
Hold a Ph.D. or equivalent research experience in machine learning, computer science, or a related field, demonstrating a strong ability to drive impactful research.
Possess hands-on experience with RLHF, recommender systems, or feedback-driven model training, and a deep understanding of how to integrate these into real-world systems.
Why this role?
The ChatGPT RLHF team operates at the intersection of research and product, shaping the future of AI-powered interactions. You'll have the opportunity to work on impactful, user-facing problems while tackling some of the most exciting challenges in AI alignment and model optimization.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.