Once the foundation for the framework is established, the next step is to conduct a thorough risk assessment. This comprises three distinct phases:
Risk identification: The company proactively pinpoints potential risks across the entire AI lifecycle, from data sourcing to model deployment. These risks fall into four categories: data risks, such as bias or privacy breaches; model risks, such as overfitting or lack of robustness; operational risks, including deployment errors and adversarial attacks; and societal or ethical risks, such as discrimination or misuse of generative AI.
Risk analysis: The company then assesses each identified risk along two dimensions: its severity and its likelihood. Here, we recommend using the Human Rights, Democracy and Rule of Law Assurance Framework (HUDERAF), which combines these two dimensions to calculate a Risk Index Number (RIN). The HUDERAF not only makes it easier to prioritize risks; it also encourages organizations to look more closely at the potential impact on human dignity and the number of people who could be affected. (A minimal scoring sketch follows this list.)
Risk evaluation: Finally, the organization makes strategic decisions by comparing the calculated risk levels against the established criteria and, on that basis, classifying each risk as acceptable, tolerable or unacceptable. This classification guides the subsequent prioritization and resource allocation; the second sketch below illustrates one possible mapping.
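To make the analysis phase concrete, here is a minimal sketch of how a risk register might be scored. HUDERAF defines its own scoring rubric; the 1-5 ordinal scales and the multiplicative formula (RIN = severity × likelihood) used here are simplifying assumptions for illustration, not the framework's official definitions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a risk register (illustrative structure, not HUDERAF's)."""
    name: str
    severity: int    # assumed scale: 1 (negligible) .. 5 (catastrophic)
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)

    @property
    def rin(self) -> int:
        # Assumption: RIN modeled as severity x likelihood.
        return self.severity * self.likelihood

risks = [
    Risk("Training-data bias", severity=4, likelihood=3),
    Risk("Model overfitting", severity=2, likelihood=4),
    Risk("Adversarial attack on deployed model", severity=5, likelihood=2),
]

# Sort by RIN so the highest-priority risks surface first.
for r in sorted(risks, key=lambda r: r.rin, reverse=True):
    print(f"{r.name}: RIN = {r.rin}")
```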
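The evaluation phase can then be expressed as a mapping from the RIN to the three classes. The thresholds below are hypothetical placeholders; in practice, each organization derives its own acceptance criteria from its risk appetite.

```python
def evaluate(rin: int) -> str:
    """Classify a Risk Index Number against acceptance criteria.

    The cut-off values are hypothetical; each organization must set
    its own thresholds for what it is willing to accept.
    """
    if rin <= 6:
        return "acceptable"    # monitor; no immediate action required
    if rin <= 14:
        return "tolerable"     # mitigate, subject to cost-benefit review
    return "unacceptable"      # must be eliminated, avoided or transferred

for score in (4, 12, 20):
    print(score, "->", evaluate(score))
```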