FAR.AI is a non-profit AI research institute focused on ensuring the safe development and deployment of frontier AI technologies.
Since starting in July 2022, FAR.AI has grown to 19 FTE, produced 28 academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at leading venues such as NeurIPS, ICML and ICLR that have been featured in the Financial Times, Nature News and MIT Tech Review. We leverage our research insights to drive practical change through red-teaming with frontier model developers. Additionally, we help steer and grow the AI safety field through developing research roadmaps with renowned researchers such as Yoshua Bengio; running an AI safety focused co-working space FAR.Labs with 40 members; and through targeted grants to technical researchers.
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.
Our current focus areas include:
Building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs)
Finding more effective approaches to value alignment (e.g. training from language feedback)
Advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning)
We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.
To build a flourishing field of AI safety research, we host targeted workshops and events, and operate a co-working space in Berkeley, called FAR.Labs. Our previous events include the International Dialogue for AI Safety that brought together prominent scientists (including 2 Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We also host the semiannual Alignment Workshop with 150 researchers from academia, industry and government to learn about the latest developments in AI safety and find collaborators. For more information on FAR.AI’s activities, please visit our recent post.
You will collaborate closely with research advisers and research scientists inside and outside of FAR. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally have:
Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyse experimental results, and participate in the write-up of results.
Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
Collaboration. You will be regularly working with our collaborators from different academic labs and research institutions.
Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. We imagine interested applicants might be looking to grow an existing portfolio of machine learning research or looking to transition to AI safety research from a software engineering background.
It is essential that you:
Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
Have experience with at least one object-oriented programming language (preferably Python).
Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
Common ML frameworks like PyTorch or TensorFlow.
Natural language processing or reinforcement learning.
Operating system internals and distributed systems.
Publications or open-source software contributions.
Basic linear algebra, calculus, probability, and statistics.
As a Research Engineer, you would lead collaborations and contribute to many projects. Examples include:
Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or “jailbreaks” in language models, or is progress in average-case performance orthogonal to worst-case robustness?
Robustness of advanced AI systems. Explore adversarial training, architectural improvements and other changes to deep learning systems to improve their robustness. We are exploring this both in zero-sum board games and language models.
Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models to effectively audit the “goals” of models in addition to their external behavior.
Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations of frontier AI models prior to deployment.
You will be an employee of FAR AI, a 501(c)(3) research non-profit.
Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
Hours: Full-time (40 hours/week).
Application process: A 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial we may be able to find alternative ways of testing your fit.
Please apply! If you have any questions about the role, please do get in touch at talent@far.ai.