In Part One, we established why the CBN’s new Baseline Standards for Automated AML Solutions rank among the world’s best. Here, we examine the risks those Standards create and the hard governance work that genuine compliance requires.
A regulatory framework is only as valuable as the quality of its implementation.
The CBN has been explicit on this point from the opening pages of its new Baseline Standards – they are designed to ensure “demonstrable effectiveness and not merely feature-based compliance or vendor-driven implementation”.
That phrase is both an aspiration and a warning. It tells institutions precisely what the CBN will be looking for when it examines compliance and what will not satisfy it.
What follows is an analysis of the ten most significant risks embedded in the new framework, explained in terms that non-technical readers can follow, with the supporting detail and specific Standards references that Compliance Officers and Risk Managers need to act on.
AI models used for customer risk scoring draw on attributes the Standards explicitly reference – geography, occupation, declared income, transaction channel and customer segment (§5.5a.iv). These variables can act as proxies for demographic characteristics.
A model trained predominantly on urban, formally employed, high-income customers will systematically score customers outside that profile as higher risk – not because they are, but because their behaviour looks statistically unfamiliar to the model.
In Nigeria’s context, the practical implications are significant. The country’s financial system serves extraordinary customer diversity – informal traders, agricultural producers, diaspora remittance recipients and mobile money users whose transaction patterns bear no resemblance to a Lagos salary earner. Bias here is not merely an ethical concern; it is a legal one.
The Nigeria Data Protection Act (NDPA) 2023 confers rights on individuals in relation to automated decisions that significantly affect them. Institutions that cannot demonstrate equitable treatment across their customer base carry regulatory and legal exposure that compounds over time.
The Standards require fairness audits and bias testing as part of annual independent model validation (§5.5b.i). What they do not yet specify is a fairness metric, a testing methodology or an acceptable disparity threshold – a gap that institutions must fill in their own governance frameworks.
What institutions must do: before any AI model is deployed, define the customer dimensions to be tested – at a minimum geography, income band, business type and transaction channel.
Run disaggregated performance analysis across each dimension before go-live and at every validation cycle. Document adverse findings and remediation steps. Report fairness metrics to the Board Risk Committee as a standing agenda item, not as an appendix.
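The disaggregated analysis described above can be sketched in a few lines. Because the Standards do not yet prescribe a fairness metric or disparity threshold, everything below is an illustrative assumption: the flag-rate ratio is one possible metric, the 0.8 cut-off is borrowed from the common "four-fifths" heuristic, and all field and function names are hypothetical.

```python
from collections import defaultdict

def flag_rate_disparity(records, dimension, threshold=0.8):
    """Per-segment alert ("flag") rates for one customer dimension,
    plus the segments whose ratio against the least-flagged segment
    falls below `threshold`.

    `records` is a list of dicts such as
    {"geography": "urban", "flagged": True}. Field names, the metric
    and the 0.8 threshold are illustrative, not prescribed by the
    Standards.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for rec in records:
        seg = rec[dimension]
        totals[seg] += 1
        hits[seg] += bool(rec["flagged"])

    # Flag rate per segment, e.g. 0.25 = 25% of that segment alerted.
    rates = {seg: hits[seg] / totals[seg] for seg in totals}

    # Benchmark against the least-flagged segment; segments flagged
    # disproportionately often (ratio below threshold) are outliers
    # to document and remediate before go-live.
    base = min(rates.values())
    outliers = sorted(seg for seg, rate in rates.items()
                      if rate > 0 and base / rate < threshold)
    return rates, outliers
```

Run once per dimension (geography, income band, business type, channel) before go-live and at each validation cycle; the `outliers` list is the adverse-findings input for the Board Risk Committee report.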

![EXPLAINER: 10 risks in Nigeria’s new AML rules and what banks must do about them – By Henry Nduka Onyiah (CBN)](https://ashenewsdaily.com/wp-content/uploads/2024/05/CBN-e1758091792722.jpg)