Microsoft’s AI Red Team just published “Lessons from
Red Teaming 100 Generative AI Products.” Their blog post lists “three takeaways,” but the eight lessons in the report itself are more useful:
- Understand what the system can do and where it is applied.
 - You don’t have to compute gradients to break an AI system.
 - AI red teaming is not safety benchmarking.
 - Automation can help cover more of the risk landscape.
 - The human element of AI red teaming is crucial.
 - Responsible AI harms are pervasive but difficult to measure.
 - LLMs amplify existing security risks and introduce new ones.
 - The work of securing AI systems will never be complete.
 
