The Digital Safety Unit at Microsoft has the mission to empower every person to have safe and trusted digital experiences across Microsoft products and services. We are seeking a highly skilled and experienced Principal Product Manager, Digital Safety Investigations to lead investigations, and the remedies that follow them, into the use of artificial intelligence to create abusive content or misinformation.

In this role you will develop and manage highly technical investigations involving AI-generated abusive or deceptive content. You will work closely with cross-functional teams, including product managers, engineers, attorneys, public policy professionals, content policy professionals, and investigators, to define the investigative approach needed to identify those using AI to generate abusive content. A successful candidate will have extensive experience with current AI systems and tools, current methods of identifying AI-generated content, threat intelligence, behavioral analysis, and live and traditional forensics, as well as a deep technical understanding of how criminals abuse the Internet to harm customers and citizens. The position will work collaboratively with other Microsoft investigative teams, managing cross-team projects to achieve defined objectives. It requires a commitment to excellence and the ability to work both independently and as an integral part of a high-performing team.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

• Independently identifies and triages current threats and trends in AI abuse to determine opportunities and set investigation priorities aligned with organizational priorities. Organizes information and defines next steps for investigations of high risk, impact, or exposure. Facilitates discussions on developing tactical investigation strategy and direction. Assesses, communicates, and makes recommendations around immediate and potential risks to leadership and key stakeholders, and offers suggestions for overcoming potential obstacles in a timely manner.

• Directly partners with teams of investigators and engineers from multiple internal groups and external partners to leverage their unique abilities in threat intelligence, data gathering, and analysis to collect highly complex data and content. Conducts complex analysis of the information gathered to validate facts, generate insights, and provide summaries in support of remediation or legal action.

• Resolves complex alignment issues, contributes to portfolio-level solutions, and drives cross-functional clarity for digital safety with leadership, including the Corporate Leadership Team and Board of Directors.

• Independently identifies and interprets broader, complex sources of information and appropriate investigative tools and technologies to resolve investigations of high risk, impact, or exposure. Coordinates and collaborates with relevant internal stakeholders to extract information on the case. Distills information from complex data to prepare investigation plans and procedures.

• Independently identifies, gathers, and reviews underlying documentation, internal data (e.g., financial reports, communication records, network logging) and external data (e.g., social media), relevant facts, and internal and external industry trends related to the case. Compiles data from different complex sources and tracks and records the findings from these documents to assist the investigation. May collaborate with analysts or data scientists to source and interpret relevant data.

Other

• Embody our Culture and Values

Qualifications

Required Qualifications

• Bachelor's Degree AND 8+ years of experience in product/service/project/program management or software development, OR equivalent experience.

• Practical knowledge and/or experience in trust and safety, including content moderation, privacy, or responsible AI.

• Proficient knowledge of content manipulation and AI image generation.

• Proficient knowledge of intelligence analysis and reporting using common tools and techniques.

Additional or Preferred Qualifications

• Bachelor's Degree AND 10+ years of experience in product/service/project/program management or software development, OR equivalent experience.

• Demonstrated experience managing multi-disciplinary and cross-organizational teams to conduct complex investigations.

Product Management IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $180,400 - $294,000 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

Microsoft will accept applications for the role until September 5, 2024.

Microsoft is an equal opportunity employer.
Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.