Anthropic is seeking a Research Engineer / Scientist for its Alignment Science team in London. The role involves building and running machine learning experiments to understand and steer the behavior of powerful AI systems. Key responsibilities include testing safety techniques, running multi-agent reinforcement learning experiments, building tooling for LLM jailbreak evaluations, and contributing to research papers. The ideal candidate has software engineering and machine learning experience, familiarity with AI safety research, and a collaborative mindset. Strong candidates may also have experience with LLMs, reinforcement learning, and large, complex codebases. The role offers competitive compensation, equity, and benefits, with a focus on making AI systems helpful, honest, and harmless.