Salesforce's Office of Ethical and Humane Use is seeking an experienced responsible AI data scientist to join its ethical red-teaming practice. This role combines technical AI expertise with ethical considerations, focusing on identifying and mitigating potential risks in AI systems. The position involves conducting adversarial testing, analyzing safety trends, and developing solutions to detect and mitigate risks while collaborating with security, engineering, data science, and AI Research teams.
The role requires deep technical knowledge in both generative and predictive AI, with a focus on responsible and ethical AI practices. You'll be at the forefront of ensuring AI safety and robustness, working with cutting-edge technology while considering its ethical implications. The position offers the opportunity to work with Salesforce's renowned AI Research team on novel approaches to model safety.
Key aspects of the role include leading adversarial testing strategies, developing safety guardrails, conducting technical vulnerability assessments, and contributing to the broader AI safety community. The hybrid work environment requires 36 days per quarter in the office, allowing for both collaborative work and flexible arrangements.
This is an ideal opportunity for someone passionate about ethical AI development who wants to make a significant impact in ensuring AI systems are developed and deployed responsibly. You'll be part of a team that's actively shaping the future of responsible AI development at one of the world's leading enterprise software companies.