Operationalizing the NIST AI Risk Management Framework: Beyond Accuracy and Checklists
The NIST framework provides the map, but fostering a true culture of responsibility is the journey.
Associate General Counsel at Docusign - Product and Partners - Strategic Legal Advisor | AI & Product Counsel | Driving Ethical Innovation at Scale
IBM's framework begins with a reversibility assessment that determines which of three automation tiers applies to a given task.
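A reversibility-first tiering decision like the one described above can be sketched in a few lines. This is an illustrative Python sketch under stated assumptions, not IBM's actual criteria: the tier names, the `TaskAssessment` fields, and the cost threshold are all hypothetical, chosen only to show how a reversibility assessment might gate the level of automation.

```python
# Hypothetical sketch of a reversibility-based tiering check, loosely
# modeled on the approach described above. Tier names, assessment fields,
# and thresholds are illustrative assumptions, not IBM's actual criteria.
from dataclasses import dataclass
from enum import Enum


class AutomationTier(Enum):
    FULL_AUTOMATION = 1    # errors are cheap and easily undone
    HUMAN_ON_THE_LOOP = 2  # errors reversible, but at real cost
    HUMAN_IN_THE_LOOP = 3  # errors hard or impossible to undo


@dataclass
class TaskAssessment:
    undo_possible: bool           # can the action be rolled back at all?
    undo_cost_usd: float          # estimated cost to reverse one error
    affects_external_party: bool  # does a mistake reach a customer or regulator?


def assign_tier(task: TaskAssessment) -> AutomationTier:
    """Map a reversibility assessment to an automation tier (illustrative)."""
    if not task.undo_possible:
        # Irreversible actions get the strictest tier: a human approves each one.
        return AutomationTier.HUMAN_IN_THE_LOOP
    if task.affects_external_party or task.undo_cost_usd > 1_000:
        # Reversible but costly or externally visible: human monitors, can intervene.
        return AutomationTier.HUMAN_ON_THE_LOOP
    # Cheap, internal, easily undone: safe to fully automate.
    return AutomationTier.FULL_AUTOMATION
```

The point of the sketch is the ordering: the irreversibility check comes first and short-circuits everything else, so no cost estimate can talk the system into automating an action that cannot be undone.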
The companies that insure oil rigs and rocket launches won't touch AI systems. They can't model the failure modes well enough to price the risk. For product teams, that means you're absorbing liability that traditional risk transfer won't cover.
OpenAI research shows that AI models can deliberately lie and scheme, and that training them not to may simply make them better at hiding it.
Are you building privacy controls that work at the scale California is designing for? Because "we'll handle deletion requests manually" doesn't survive a system designed to generate them by the millions.
The work proposes a five-layer architectural framework that embeds governance and security requirements throughout system design rather than treating them as separate concerns.
A new approach is needed, one that thinks in terms of dynamic spectrums rather than static boxes.
The Census data suggests companies are shifting from FOMO-driven AI adoption to more evidence-based decisions about what actually works.