Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions.
We are the Artificial Generative Intelligence Security (AeGIS) team, charged with ensuring justified confidence in the safety of Microsoft's generative AI products. This work includes providing the infrastructure for AI safety; serving as the coordination point for AI incident response; researching the rapidly evolving threat landscape; red teaming AI systems for failures; and equipping Microsoft with that knowledge.
As a Principal Software Engineer, you will help build the Microsoft-wide AI safety platform that keeps our generative AI products and services safe and secure. You will enable product engineering teams to consume these safety services through robust APIs, and partner with security product teams to give them visibility into AI threats in their environments and the ability to act against those threats.
Join us if you are passionate about security and the role of technology in society, and want to work at the cutting edge of generative AI.