1. Work closely with LLM core teams to devise a red-team strategy for LLMs and manage internal red-teaming operations.
2. Coordinate resources across T&S, Data, and product teams to execute red-team tests and generate comprehensive reports, which include, but are not limited to, vulnerability assessments.
3. Work with internal researchers and external experts to monitor the latest developments and industry best practices in red-teaming, and advise on long-term strategy.
4. Lead research on novel privacy issues and broader questions of fairness, accountability, and transparency related to LLMs.
5. Set research directions and strategies to make our AI systems safer, more aligned, and more robust.