
AI governance is not just about models

While AI has been used in enterprise and consumer products for decades, only large tech organizations with sufficient resources have been able to implement it at scale. In the past few years, advances in the quality and accessibility of ML systems have led to the rapid proliferation of AI tools in everyday life. The accessibility of these tools means that there is an enormous need for good AI governance by both AI providers and organizations that implement and deploy AI systems in their own products.

However, “AI governance” is an umbrella term that can mean a lot of different things to different groups. This post seeks to provide some clarity on three different “levels” of AI governance: organizational, use case, and model. We will describe what each level is, who is primarily responsible for it, and why it is important for an organization to have strong tools, policies, and cultural standards at all three levels as part of a broader AI governance program. We will also discuss how upcoming regulations intersect with each level.

Governance at the organizational level

At this level, AI governance takes the form of ensuring that there are clearly defined internal policies for AI ethics, accountability, safety, and so on, as well as clear processes for applying those policies. For example, to comply with the many emerging regulatory frameworks for AI, it is important to have a clear process for determining when and where AI is deployed. This process could be as simple as “the technical leadership team must approve the AI system,” or as involved as a formal submission process to an AI Ethics Committee with ongoing oversight and veto power over AI uses. Good organizational AI governance here means ensuring that these processes are clear, available to internal stakeholders, and consistently enforced.
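To make this concrete, below is a minimal sketch of what a single entry in an internal AI approval register might look like. Everything here is a hypothetical illustration (the class and field names are our own invention, not part of any framework); it simply shows how an approval process can produce a structured, auditable record.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ApprovalStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIDeploymentRequest:
    """One record in a hypothetical internal AI approval register."""
    system_name: str
    business_owner: str
    intended_use: str
    submitted_on: date
    status: ApprovalStatus = ApprovalStatus.PENDING
    reviewer_notes: list[str] = field(default_factory=list)

    def approve(self, reviewer: str, note: str) -> None:
        # Record who approved the request and why, so the decision is auditable.
        self.status = ApprovalStatus.APPROVED
        self.reviewer_notes.append(f"{reviewer}: {note}")


# Illustrative usage with made-up details.
request = AIDeploymentRequest(
    system_name="support-ticket-summarizer",
    business_owner="Customer Experience",
    intended_use="Summarize support tickets for internal triage",
    submitted_on=date(2023, 6, 1),
)
request.approve("Technical leadership", "Low-risk internal tool; approved.")
```

Even a lightweight record like this makes it much easier to answer the basic governance question of when, where, and by whose authority AI was deployed.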

The NIST AI Risk Management Framework does a good job of laying out, in its Govern function, the different organizational governance policies and processes that need to be in place, and the EU AI Act sets out related regulatory requirements in Articles 9, 14, and 17 (although these may still change before the final text is adopted). Organizational governance standards and policies should be formulated by the senior leaders of the organization in consultation with the legal teams, who will ultimately be responsible for implementing and enforcing the policies. Organizational policies will largely determine when and where an organization chooses to deploy AI, and will specify the types of documentation and governance considerations that need to be made at the use case level.

Use case level governance

Governance at the use case level focuses on ensuring that a given AI application, for a specific set of tasks, meets all necessary governance criteria. The risks of harm from AI are highly context specific. A single model can be used for many different use cases (especially foundation or general-purpose models like GPT-4), with some use cases having relatively low stakes (e.g., summarizing movie reviews) while a very similar task in a different context has very high stakes (e.g., summarizing medical records). Beyond the risks, many of the ethical questions around AI, including whether AI should be used for a task at all, are context dependent.

Governance at this level means ensuring that organizations carefully document the goals of the AI use case, the rationale for why AI is appropriate, the context-specific risks, and how those risks are mitigated (both technically and non-technically). This documentation must comply with the standards set out in the organizational governance policies as well as any applicable regulatory standards. Both the NIST AI RMF (MAP function) and the EU AIA (Annex IV) define the types of “use case level” documentation that must be created and tracked. Governance at this level will be owned by many different business units. Project leads (often product managers), in consultation with legal and compliance teams, will be the primary group responsible for ensuring that appropriate governance processes are followed and documented at this level. The risks and business objectives defined at this level largely determine what the model level governance expectations should be for a given use case.
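As a sketch of the kind of use case documentation described above, the snippet below models a single risk-assessment record. The structure is a hypothetical illustration loosely inspired by the documentation themes of the NIST AI RMF MAP function and EU AIA Annex IV, not a literal template from either.

```python
from dataclasses import dataclass, field


@dataclass
class UseCaseRecord:
    """Hypothetical use case level governance record (illustrative only)."""
    use_case: str          # e.g. "summarize clinical notes for physicians"
    rationale: str         # why AI is an appropriate tool for this task
    risk_tier: str         # e.g. "low" / "high", as defined by organizational policy
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)  # technical and non-technical

    def is_documentation_complete(self) -> bool:
        # Rough heuristic: a rationale exists and every identified risk
        # has at least one corresponding mitigation on file.
        return bool(self.rationale) and len(self.mitigations) >= len(self.identified_risks)


record = UseCaseRecord(
    use_case="summarize clinical notes for physicians",
    rationale="Reduces physician administrative load; output reviewed by clinicians",
    risk_tier="high",
    identified_risks=["omission of critical findings", "PHI leakage"],
    mitigations=["mandatory clinician review", "PHI redaction before inference"],
)
print(record.is_documentation_complete())  # True
```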

Model level governance

Model level governance places a strong emphasis on ensuring that the actual technical functionality of the AI system meets expected standards of fairness, accuracy, and security. This includes ensuring that data privacy is protected, that there are no statistical biases across protected groups, that the model can handle unexpected inputs, that model drift is monitored and controlled, and so on. For the purposes of framing AI governance, the model level also includes the data governance standards applied to training data. Best practices and standards for model governance have already emerged in specific sectors, often under the banner of “model risk management,” and many impressive tools have been developed to help organizations identify, measure, and mitigate risk at the model level.
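As a concrete illustration of one such technical check, here is a minimal sketch of a demographic parity test across a binary protected attribute. The function, data, and any acceptable threshold are illustrative assumptions; real model level governance would combine several fairness metrics chosen for the specific use case, typically via a vetted fairness library.

```python
import numpy as np


def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(abs(rate_a - rate_b))


# Illustrative usage with made-up predictions and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.50 on this toy data
```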

This is the level of governance that many AI/ML teams primarily focus on, as they are ultimately responsible for ensuring that the technical foundations of the ML system are sound and reliable. With the notable exception of NYC Local Law 144, most AI-focused regulations do not prescribe precise metrics or criteria at the model level. For example, the Measure function in the NIST AI RMF describes what good governance should look like without being prescriptive about exact metrics or outputs. Organizations are expected to have clearly defined their testing, validation, and monitoring strategies, and to demonstrate that those strategies are in place. Over time, different sectors will likely develop precise metrics and standards for specific uses of AI, determined either by industry trade associations or by sector-specific regulators such as the Food and Drug Administration.
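For instance, a documented monitoring strategy might include a drift statistic computed on live inputs against a training-time reference. Below is a minimal sketch using the Population Stability Index, one commonly used drift measure; the data and the conventional PSI > 0.2 rule of thumb for significant drift are illustrative assumptions, not anything the NIST framework prescribes.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live feature sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # training-time feature distribution
live = rng.normal(0.75, 1.0, 5_000)      # shifted production distribution
psi = population_stability_index(reference, live)
print(f"PSI: {psi:.3f}")  # well above the conventional 0.2 flag for drift
```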

Why this matters – Regulatory enforcement

Many of the proposed general-purpose regulations, such as the EU AI Act, focus heavily on the organizational and use case levels. This focus is intuitive: defining model level requirements without hindering innovation is very difficult and can only be done for very specific use cases and types of AI systems, which the legislative process would be far too slow to keep up with. In addition, many lawmakers and regulators are unfamiliar with the “lower-level” nuances of AI systems and feel more comfortable writing laws around expected governance processes and societal outcomes. This is also likely to carry over into how regulators approach enforcement of AI-focused laws. Enforcing requirements at the model level will be difficult even for well-advised regulators, but obvious flaws or gaps at the use case or organizational level will make enforcement easier. Moreover, many organizations increasingly leverage third-party AI/ML systems over which they cannot exercise model level governance, yet they still need use case and organizational governance in place.

The AI audit requirements of various proposed regulations will likely cover all three levels, with evidence requirements for organizational policies, use case risk assessments, and model testing. Organizations cannot make “AI governance” the exclusive responsibility of AI/ML teams; instead, they need to ensure that their organizational, use case, and model level documentation processes are bulletproof and fully auditable in order to protect themselves from regulatory action.

Final thoughts

We strongly believe that building accurate, reliable, and unbiased ML models and practicing good data governance are essential to trustworthy and responsible AI systems. Every organization needs tools and processes to ensure that its models and third-party AI systems are well tested and effectively monitored. However, many stakeholders in overall AI governance, including regulators and the general public, will not have the skills to understand or evaluate AI governance practices at the model level, so it is important to understand the different levels of abstraction at which these stakeholders will scrutinize AI governance.
