As artificial intelligence applications proliferate across African markets, governments are turning to an unexpected instrument for regulation: existing data protection laws. Rather than waiting for comprehensive AI frameworks that require lengthy legislative processes, countries including Nigeria, Kenya, Angola, and South Africa are embedding AI-specific provisions within their data protection legislation. The approach, described by analysts at the Future of Privacy Forum as a “backdoor” method of AI regulation, has become the defining feature of the continent’s second wave of digital policy reform.
The logic driving this strategy is straightforward. AI systems rely on vast quantities of personal data for training, credit scoring, facial recognition, and automated decision-making. By regulating how that data is collected, processed, and used, governments can exert meaningful control over AI applications without drafting entirely new legal frameworks. This approach has gained traction because data protection laws already exist in most African countries, providing an established regulatory infrastructure that can be expanded rather than built from scratch.
Angola offers the most explicit example of this strategy. Rather than pursuing a standalone AI law, the country is revising its 2011 Personal Data Protection Law to include detailed provisions targeting automated decision-making, credit scoring, and algorithmic transparency. The revisions introduce the right for individuals not to be subjected to decisions based solely on automated processing, particularly where those decisions have legal or similarly significant effects. Companies are required to explain the logic behind algorithmic decisions, and individuals are given the ability to challenge such outcomes—provisions that closely mirror Article 22 of the European Union's General Data Protection Regulation but are embedded within a domestic data protection framework.
Other countries are taking less explicit but equally consequential steps. Nigeria is exploring the regulation of social media platforms and developers within its data protection framework, while Kenya is tightening requirements for data controllers and processors in ways that directly constrain how AI systems operate. Botswana repealed and replaced its 2018 Data Protection Act in 2024 with an updated version introducing clearer rules on regulatory independence and mandating data protection impact assessments. These reforms reflect a recognition that first-generation data protection laws, introduced between 2010 and 2018, were often too vague to enforce effectively against sophisticated AI systems.
The urgency behind these reforms is grounded in real-world harm. A July 2025 study that audited ten credit-scoring algorithms across Nigeria, Kenya, and South Africa found consistent bias against women-led small businesses. In Nigeria, one major digital lender used training data that resulted in a 23 per cent lower loan approval rate for women, despite women demonstrating a 17 per cent better repayment record than men. For policymakers, this represents not merely a privacy concern but a fundamental accountability challenge. Data protection law has become the most readily available tool to address it.
Enforcement capacity remains uneven. Mercy King'Ori, who leads the Future of Privacy Forum's Africa office from Nairobi, noted that the institutional maturity of regulators remains a constraint. Yet enforcement is picking up. In July 2025, Nigeria's National Data Protection Commission fined MultiChoice ₦766 million for unlawful data transfers and intrusive data processing affecting both subscribers and non-subscribers. In Kenya, the Office of the Data Protection Commissioner fined a school KSh 4.55 million for publishing images of minors without parental consent, the largest penalty imposed on an educational institution in the country.
Underlying these reforms is a deeper question about data sovereignty. As global technology companies dominate AI development, African countries are increasingly concerned that their data will be extracted, processed offshore, and used to train systems that do not reflect local contexts. King’Ori explained that sovereignty does not necessarily mean isolation; rather, it involves identifying certain forms of data that should remain within borders for legal reasons. Countries such as Algeria are experimenting with data classification systems to manage the balance between national control and cross-border flows.
The patchwork nature of these regulations presents challenges for cross-border trade under initiatives such as the African Continental Free Trade Area. The African Union’s Data Policy Framework, updated in 2025 and 2026, provides a blueprint for building interoperable data systems, and the AfCFTA Digital Trade Protocol requires member states to align national laws to a common standard within five years of ratification. However, national interests continue to dominate, creating a fragmented regulatory landscape that businesses must navigate.
Standalone AI laws are already in motion. Kenya's Artificial Intelligence Bill was formally introduced in the Senate on February 19, 2026, and South Africa is actively debating a framework of its own. More are likely to follow. Yet for now, data protection remains the primary instrument, and not simply by default. Angola's restrictions on data scraping, Ghana's proposal to treat personal data as property, and Nigeria's open-source large language model trained on five low-resource Nigerian languages all point to a more deliberate process: African governments are not copying global models; they are testing their own. As King'Ori noted, if the continent were not moving in the right direction, its laws would be static. Instead, they are dynamic.