If you’re passionate about understanding the inner workings of AI and want to take on challenging research engineering problems, the Research Engineer, Interpretability position at Anthropic might be the perfect fit for you! We're a dynamic team working toward the goal of making AI systems reliable and interpretable. In this role, you'll collaborate with talented researchers and engineers on mechanistic interpretability, focusing on how neural network parameters translate into meaningful algorithms.

Forget mundane tasks: you'll implement and analyze research experiments, build tools to improve models, and scale your work across multiple GPUs. We value a collaborative spirit, and you'll have the chance to work on impactful projects such as optimizing pipelines and developing interactive visualizations. In a friendly work environment that encourages diverse perspectives and open communication, we aim to ensure that AI is not only advanced but also safe and ethical.

If you have 5-10 years of software experience and strong programming proficiency, especially in Python, we want to hear from you. The salary is competitive, reaching up to $560,000 per year. Join Anthropic, where your passion for AI can contribute to our mission to build beneficial systems that society can trust!
Anthropic is an AI safety and research company, structured as a public-benefit corporation, dedicated to developing dependable, interpretable, and controllable AI systems. The company was founded by former members of OpenAI in 2021.