CIO/CTO Checklist: Explaining AI and ML Algorithm Outcomes to Regulators

CIOs, CTOs, and heads of architecture must embrace best practices that support AI explainability.  

November 22, 2022 – In financial services, AI usage has taken off as compute and data storage resources have become cheaper. For FIs using AI, basic principles must be adhered to in the solution architecture to ensure nondiscriminatory and fair outcomes. Moreover, FIs must be able to demonstrate the fairness and transparency of their AI processes to various regulators—a challenge given that many algorithms are inherently opaque.

This report discusses how CIOs and CTOs can ensure that the AI programs they use are explainable and transparent, keeping them in line with regulatory requirements. It is based on Aite-Novarica Group research from 2020 to 2022 and external articles on the topic, and it draws on the author’s experience, as well as best practices that Aite-Novarica Group has observed in financial services IT organizations.

Clients of Aite-Novarica Group’s Community Banking service can download this report and the corresponding charts.

This report mentions Arize, Dask, DVC, Grafana, IBM Cloud Object Storage, Kubeflow, MLflow, NVIDIA, PyTorch, Scale, Snowflake, Spark, Spell, Tecton, TensorFlow, and UbiOps.

