Microsoft has shared how it works with AI responsibly in the paper "Responsible AI Transparency Report: How we build, support our customers, and grow". The report outlines Microsoft's approach to building generative AI applications responsibly, adhering to six core values: transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security. The framework is built around the govern, map, measure, and manage cycle.
Govern
Establishes the context for AI risk management, including adherence to policies and pre-deployment reviews.
- Policies and principles
- Procedures for pre-trained models
- Stakeholder coordination
- Documentation
- Pre-deployment reviews
Map
Involves identifying and prioritizing AI risks and conducting impact assessments to inform decisions.
- Responsible AI Impact Assessments
- Privacy and security review
- Red teaming
Measure
Implements procedures to assess AI risks and the effectiveness of mitigations through established metrics (a small illustrative sketch of such a metric follows the list below).
- Metrics for identified risks
- Mitigations performance testing
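The report describes measurement at the process level rather than in code, but as a rough illustration of what a "metric for an identified risk" can look like in practice, here is a minimal sketch in Python. The `generate` and `is_harmful` callables are hypothetical stand-ins (not named in the report) for the application under test and a content classifier or grader; comparing the rate with and without a mitigation enabled is one way to do mitigation performance testing.

```python
# Illustrative sketch (not from the report): measure a risk as the fraction of
# probe prompts whose responses are flagged as harmful.
from typing import Callable, Iterable


def defect_rate(
    prompts: Iterable[str],
    generate: Callable[[str], str],     # the AI application under test (assumed)
    is_harmful: Callable[[str], bool],  # a content classifier / grader (assumed)
) -> float:
    """Return the fraction of probe prompts whose responses are flagged as harmful."""
    prompt_list = list(prompts)
    if not prompt_list:
        return 0.0
    flagged = sum(1 for p in prompt_list if is_harmful(generate(p)))
    return flagged / len(prompt_list)
```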
Manage
Focuses on mitigating identified risks at both the platform and application levels, with ongoing monitoring and user feedback (a brief sketch of a content-risk check follows the list below).
- User agency
- Transparency
- Human review and oversight
- Managing content risks
- Ongoing monitoring
- Defense in depth
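As a concrete, hedged example of an application-level content-risk mitigation of the kind the report talks about, here is a minimal sketch using the Azure AI Content Safety Python SDK (azure-ai-contentsafety). The endpoint, key, and severity threshold are placeholder assumptions and not values taken from the report; it simply gates a piece of text on the service's harm-category severity scores.

```python
# Minimal sketch of an application-level content-risk check.
# Assumes the Azure AI Content Safety SDK (pip install azure-ai-contentsafety);
# endpoint, key, and threshold below are placeholders, not values from the report.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)


def passes_content_check(text: str, max_severity: int = 2) -> bool:
    """Return True if no harm category exceeds the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all(
        item.severity is None or item.severity <= max_severity
        for item in result.categories_analysis
    )


# Example: gate a model response before showing it to the user.
if passes_content_check("Here is the summary you asked for..."):
    print("Response allowed")
else:
    print("Response blocked or routed to human review")
```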
All of these are depicted in a diagram in the paper, which is a very informative read.
Responsible AI Transparency Report: How we build, support our customers, and grow
https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1l5BO