Aligning the AI Risk Management Framework with CUI Risk Assessment Requirements

This episode breaks down how the NIST AI Risk Management Framework (AI RMF 1.0) supports a robust, continual approach to AI risk assessment, with a special focus on meeting NIST SP 800-171 Rev. 2 control 3.11.1 for assessing risk to CUI. We connect the framework's four core functions—Govern, Map, Measure, and Manage—to compliance requirements and to practical periodic risk reviews in organizations operating or deploying AI systems.

Chapter 1

Foundations of AI Risk Management and CUI Compliance

Paul Netopski

Alright, team, let’s dive in. So, at the heart of today’s episode—and really the whole question of AI risk management in CMMC—is the NIST AI Risk Management Framework, AI RMF 1.0, right? It’s basically the compass that organizations can follow for identifying, assessing, and managing risks that come with AI, especially when Controlled Unclassified Information—CUI—is in the mix. The Framework lays out these four functions: Govern, Map, Measure, and Manage. And if you’re in the defense contracting world, these aren’t abstract; they directly tie to what you have to do for NIST SP 800-171, particularly control 3.11.1, which mandates regular risk assessments of your systems handling CUI. So—Govern is your risk culture and policy backbone, Map is understanding the risks in the context of how your AI actually gets used, Measure means using the right metrics to track those risks, and Manage is how you act on those findings. That’s the overall picture, and it’s way more continuous than some folks want it to be.

Ruby Sturt

Spot on, Paul! And you mentioned 3.11.1—honestly, a lot of orgs get tripped up by that. The control says to "periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI." But it doesn't say how often, or what your "assessment" has to look like. The key is "defined frequency," isn't it? You've got to set that frequency in your own policies, then stick to it, or you're out of compliance. And it covers everything—assets, people, workflows. AI brings in even more complexity because it's not static; you can't just do a one-and-done risk review. That's actually in line with the Playbook—risk assessment isn't a checkbox, it's an ongoing cycle.
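
To make that concrete, here's a minimal sketch of how a "defined frequency" could be made checkable rather than aspirational. It assumes a hypothetical inventory of CUI systems with per-system review intervals; the system names and field names are illustrative, not from any NIST artifact.

```python
from datetime import date, timedelta

# Hypothetical inventory: each CUI system carries the review frequency
# the organization committed to in its own policy.
SYSTEMS = {
    "doc-classifier": {"last_assessed": date(2024, 1, 15), "frequency_days": 365},
    "cui-file-share": {"last_assessed": date(2024, 6, 1), "frequency_days": 180},
}

def overdue_assessments(today: date) -> list[str]:
    """Return systems whose risk assessment is past its defined frequency."""
    return [
        name
        for name, rec in SYSTEMS.items()
        if today - rec["last_assessed"] > timedelta(days=rec["frequency_days"])
    ]

print(overdue_assessments(date(2025, 7, 1)))  # -> both systems flagged overdue
```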

Eric Marquette

If I can jump in—let me share an example from a past assessment I led. This was a midsize defense contractor that did annual risk assessments. On paper, that ticked the box. But the wrinkle was, during the year, their data science team pushed a new AI model into production to help classify incoming documents. No one looped risk management in. By the next audit cycle, they'd discovered—almost by accident—that the model's feature pipeline was sending more metadata than intended to a third-party service. They had a genuine, if unintentional, data leakage incident. The lesson: with AI systems, a scheduled risk assessment can easily miss these "in-between" changes. Risk is a moving target.

Roz the Rulemaker

Eric, that's a classic compliance gotcha, and it ties directly to how the RMF and the Playbook frame AI system risk: as dynamic rather than episodic. There's a huge regulatory expectation now that the risk management process be formalized, yes, but also that it be designed to anticipate these environmental or system changes. More and more, regulators will want to see an actual lifecycle approach. And circling back to previous episodes, this is an evolution of how we historically handled "stepwise" compliance. CMMC and NIST now want documentation of both periodic and event-driven risk assessments, so you don't miss what happened in Eric's example.

Chapter 2

Operationalizing the AI RMF: Ongoing and Periodic Risk Assessments

Roz the Rulemaker

Right, so let’s pull on that lifecycle thread. The Playbook breaks out the AI RMF’s “Govern” and “Manage” functions in ways that push organizations to build repeatable policies, name roles, and actually document review triggers. Think: real-world procedures that say, “Here’s how we set our risk tolerance for AI, here’s who has authority to stop a deployment, here’s how we track what’s in production.” It’s not enough to simply say you’ll assess risk “periodically”—you need criteria for when changes in the AI environment, new features, or data sources trigger an out-of-cycle review. And you want clear responsibilities: who’s in charge of monitoring, incident reporting, etc. All of this moves you from a one-and-done approach to true lifecycle governance.
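
As a hedged illustration, that kind of policy could even be captured in machine-readable form so the triggers and roles are unambiguous. The trigger names, role titles, and class shape below are invented for the example, not taken from the Playbook.

```python
from dataclasses import dataclass, field

# Illustrative out-of-cycle review triggers; an organization would define its own.
OUT_OF_CYCLE_TRIGGERS = {
    "model_update",       # a new model version pushed to production
    "new_data_source",    # training or inference data pipeline changed
    "observed_drift",     # monitoring detects a distribution shift
    "security_incident",  # any incident touching a CUI workflow
}

@dataclass
class AIGovernancePolicy:
    risk_tolerance: str             # e.g. "no high residual risk on CUI systems"
    review_frequency_days: int      # the periodic cadence the org commits to
    deployment_stop_authority: str  # named role with authority to halt a deployment
    monitoring_owner: str           # named role that tracks what's in production
    triggers: frozenset = field(
        default_factory=lambda: frozenset(OUT_OF_CYCLE_TRIGGERS)
    )

    def requires_out_of_cycle_review(self, event: str) -> bool:
        """Events on the trigger list demand a review before the next cycle."""
        return event in self.triggers
```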

Paul Netopski

That’s the trick, Roz. The Playbook’s operational guidance is really granular. You start by setting your organizational risk tolerance—how much potential impact and likelihood you’ll tolerate per system or model. Then you have to actually measure it—often using things like impact assessments, red-amber-green (RAG) scorecards, or simulation-based risk scores. For CUI, you want incident response plans and monitoring to be documented and tied to both your set review schedule and ad hoc triggers. And you need living inventories of models—so when business or legal folks want to know, “Hey, what’s currently deployed, and has its risk changed since last assessment?”—it’s already in your documentation. Otherwise, you’re managing by spreadsheet, and that breaks down really quickly.
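
To show what a reproducible RAG score might look like in practice, here's a small sketch using a qualitative likelihood-and-impact lookup. The three-level scale and the thresholds are illustrative choices, not something the Playbook or AI RMF prescribes.

```python
# Hypothetical 3x3 scoring: qualitative (likelihood, impact) -> red/amber/green.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def rag_score(likelihood: str, impact: str) -> str:
    """Map a qualitative likelihood/impact pair to a RAG rating."""
    score = LEVELS[likelihood] + LEVELS[impact]
    if score >= 3:
        return "red"    # outside tolerance: mitigate before (re)deployment
    if score == 2:
        return "amber"  # within tolerance, but watch-listed for the next review
    return "green"      # acceptable at current controls

assert rag_score("high", "medium") == "red"
assert rag_score("low", "medium") == "green"
```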

Ruby Sturt

And on top of that, organizations are getting pressure from multiple sides: legal, technical, reputational. Like, not just NIST or CMMC, but broader stuff: requirements about nondiscrimination, fairness, data privacy, all those. The Playbook calls out that legal requirements aren’t applied uniformly—even measuring for bias can get you into tension with anti-discrimination law, depending on how it’s done. So, if you treat risk as “just security,” you’ll miss all the ethical, human, and legal side-effects that AI can trigger. That’s why organizations need multi-stakeholder risk meetings: you don’t want IT security and the lawyers talking past each other. And, as we’ve said in other episodes, it’s about integrating all those lights on the dashboard: not just technical risk, but organizational values, regulatory anchors, and human impact.

Roz the Rulemaker

Absolutely, Ruby. There’s a reason why, in the Playbook, “Govern” always comes before “Map, Measure, and Manage.” If you don’t have a policy for how and when risk gets reviewed, and you haven’t aligned that with federal and state mandates—for CUI, HIPAA, anti-bias law, whatever—you’re leaving a compliance gap. In short: the policies are your promise, and the procedures are your proof. The best organizations will document not only what their review frequency is, but what events—AI drift, model update, new data pipeline—constitute a “significant enough” change to demand review, even mid-cycle. That’s the practical intersection of compliance and operational security.

Chapter 3

Making Risk Assessment Actionable with Metrics, Frequency, and Continuous Feedback

Eric Marquette

Let's pivot to the actionable details—because at the end of the day, the Playbook and the AI RMF want risk monitoring and feedback loops to be as close to real time as possible, not left as dusty shelfware. So the Playbook's practical advice is: document all changes to your operating environment. That means if your AI is handling CUI, you're expected to log when models change, document escalation procedures for incidents, and—super important—update your risk assessment both at set intervals and at every "significant change." It's the "defined frequency" thing, but also trigger-driven. In other words, don't just say, "We'll do it annually"; you have to spell out exceptions for, say, rapid drift or big workflow updates.
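
Here's one possible shape for that change log, sketched under the assumption that the organization has already decided which change types count as "significant." The categories and file format are hypothetical.

```python
import json
from datetime import datetime, timezone

# Illustrative: change types this (hypothetical) org treats as significant
# enough to force an out-of-cycle risk assessment.
SIGNIFICANT = {"model_version", "data_pipeline", "third_party_api"}

def log_change(system: str, change_type: str, detail: str,
               logfile: str = "changes.jsonl") -> bool:
    """Append a change record; return True if it should trigger reassessment."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "change_type": change_type,
        "detail": detail,
        "triggers_reassessment": change_type in SIGNIFICANT,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["triggers_reassessment"]

if log_change("doc-classifier", "model_version", "v2.3 -> v2.4"):
    print("Schedule an out-of-cycle risk assessment")
```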

Paul Netopski

Right, and to do this well, you want concrete, interpretable metrics—risk ratings, heatmaps, or qualitative RAG scores. For CUI systems, you may have a risk register that documents what risk has been accepted, mitigated, or transferred, including residual risk left after controls. The Playbook suggests capturing risks from third-party systems too, since a lot of folks are leveraging pre-trained models or third-party APIs in their workflows. All of that should have its own risk history. If you aren’t tracking it, you can’t defend it during a CMMC or NIST assessment.
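
For illustration, a single risk-register entry might carry fields like these, with disposition and residual risk captured explicitly. The schema is an assumption about what an assessor would want to see, not a mandated format; the example reuses the metadata-overshare scenario Eric described.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    ACCEPTED = "accepted"
    MITIGATED = "mitigated"
    TRANSFERRED = "transferred"

@dataclass
class RiskRegisterEntry:
    risk_id: str
    system: str            # includes third-party models/APIs in the workflow
    description: str
    inherent_rating: str   # rating before controls, e.g. "red"
    disposition: Disposition
    controls: list         # references to applied controls
    residual_rating: str   # rating left after controls
    last_reviewed: str     # ISO date of the most recent review

entry = RiskRegisterEntry(
    risk_id="R-014",
    system="third-party-ocr-api",
    description="Metadata overshare to an external document service",
    inherent_rating="red",
    disposition=Disposition.MITIGATED,
    controls=["metadata-stripping proxy", "egress allow-list"],
    residual_rating="amber",
    last_reviewed="2025-06-12",
)
```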

Roz the Rulemaker

And don’t forget documentation around engagement—capturing ongoing feedback from users, incident reporters, and even those who may be “downstream” from your CUI workflows, such as subcontractors. This need for robust escalation and feedback management—plus versioning your risk reviews—maps to what auditors and regulators will expect. The more transparent your tracking and your improvement plans, the stronger your compliance argument.

Ruby Sturt

Makes sense. Hey, I'll share a quick real-world snippet: I worked with a nonprofit here in Australia—tiny org, nothing fancy—but they were handling sensitive defense data for a research partnership. They set up monthly risk meetings for their high-risk systems and quarterly ones for the rest. They'd go through new incidents, discuss whether any of the AI models had shifted or behaved unexpectedly, and always left with action items for updating documentation. Over time, those meetings weren't just compliance exercises—they actually built a questioning, risk-aware culture, where people weren't afraid to flag when something "felt" off. That's as powerful as any checklist.

Eric Marquette

That’s a brilliant example, Ruby. And it goes to show, risk management at its best isn’t just about avoiding audit failures—it’s about building operational resilience and a proactive mindset. If your team feels empowered to question AI decisions and escalate anomalies, you’re far more likely to spot the next risk before it bites—be it a technical issue or a subtle exposure of CUI.

Paul Netopski

Exactly, Eric. Metrics and frequency matter, but it's the culture of inquiry and real-time feedback that closes the loop. If folks are just waiting for the next annual review, they'll always be a step behind the risk. The AI RMF and Playbook hammer that home: iterate, engage, document—and loop findings right back into your processes and controls.

Roz the Rulemaker

Couldn’t agree more. If there’s one takeaway, it’s that the intersection of NIST AI RMF and NIST SP 800-171 isn’t a finish line—it’s the starting flag for continual evaluation. Keep your metrics dialed in, empower your people to speak up, and don’t let “periodic” mean “passive.” That’s what keeps CUI—and your compliance posture—truly protected.

Ruby Sturt

Well, that’s our time for today. Loved this one—thanks for the lively chat as usual! Paul, Roz, Eric—cheers for all your insights. And thanks to everyone listening for sticking with us. We’ll keep digging into CMMC, RMF, and all things risk in future episodes.

Eric Marquette

Great convo, all. If you’re enjoying the podcast, make sure to check out previous episodes for more field stories and practical tips. Ruby, Paul, Roz—pleasure as always. See you on the next one.

Paul Netopski

Absolutely—thanks everyone. Don’t forget: compliance is continuous, not episodic. Take care.

Roz the Rulemaker

Always glad to talk AI risk with you three. Until next time—keep those reviews sharp and your risk registers up to date. Goodbye, folks!