
Communications minister Solly Malatsi has gazetted South Africa’s draft national AI policy for public comment, opening a 60-day window for submissions on an 86-page document that proposes a sweeping new institutional architecture – including seven new oversight bodies – to govern the development and deployment of AI across the economy.
The draft policy, published in the Government Gazette on Thursday, was approved by cabinet on 25 March. It sets out six strategic pillars and dozens of proposed interventions spanning education, digital infrastructure, ethics, data governance and public sector deployment.
But its most striking feature is its bureaucratic ambition. The policy proposes the creation of:
- A National AI Commission (or National AI Office) to coordinate policy and implementation;
- An AI Ethics Board to enforce ethical governance around bias, privacy and fairness;
- An AI Regulatory Authority to conduct audits and issue certifications;
- An AI Ombudsperson Office to let citizens challenge AI-driven decisions;
- A National AI Safety Institute;
- An Integrated AI-Powered Monitoring Centre; and, perhaps most unusually,
- An AI Insurance Superfund modelled on the Road Accident Fund to compensate individuals or entities harmed by AI systems where liability is difficult to determine.
In addition, the policy envisages repositioning communications regulator Icasa for what it calls “an AI-driven regulatory future”, expanding its mandate to oversee ethical AI use in telecommunications, ICT and broadcasting.
A new National AI Regulatory Forum, coordinated by the department of communications & digital technologies, would bring together Icasa, the Information Regulator, the Competition Commission, the South African Reserve Bank, the Financial Sector Conduct Authority, the CSIR and the department of trade, industry & competition to coordinate oversight.
The scale of what is being proposed raises obvious questions about capacity and funding, particularly given government’s well-documented struggles to resource existing institutions. The policy itself does not attach specific budget figures to any of the proposed bodies, though it calls for funding to be secured during the second year of a three-year implementation road map.
Notably, the department of communications has been candid about the document’s status. An explanatory note published alongside the policy describes it as “a work in progress” and says the government’s final approach “will require extensive external consultations with both local and international experts and interest groups”.
‘Current thinking’
The draft “should thus be seen as a point of departure and indication of government’s current thinking, rather than a strict indication of South Africa’s final approach to the AI policy landscape”.
This extends to the regulatory approach itself. The document presents four broad options – an ethics-first approach, a flexible iterative model using regulatory sandboxes, an economy-focused strategy and alignment with global standards – and says a combination of all four, tailored to different sectors, would be ideal. It also floats several further options without choosing between them: principles-based regulation, a guardrails approach, a “just AI” framework focused on redressing inequality, and even dedicated AI legislation in sectors where it is appropriate.
The policy builds on the national AI policy framework published by the communications department in August 2024. It draws on 32 submissions received on that framework and consultations with government structures through the Cabinet cluster process.
The draft identifies education, healthcare and agriculture as the critical sectors for AI implementation, with public administration as a key lever. It calls for AI to be integrated into school curricula from primary to tertiary education, for community-based AI education centres to be established in underserved areas and for a labour market transition strategy to manage job displacement.
On infrastructure, the policy calls for investment in supercomputing facilities, 5G and future 6G networks, high-capacity fibre and last-mile connectivity via low-Earth-orbit satellites. It goes further, proposing that universal internet access be framed as a “socioeconomic right” and calling for the establishment of “regional AI factories” – decentralised compute hubs intended to promote local data control and stimulate regional economies.
The policy also proposes that non-private, non-regulated data be treated as a public good and that government create incentives for open data initiatives. It calls for public broadcasting content to be made available to developers of language models, and for AI-powered real-time translation across all of the country’s official languages.
The ethical dimension of the policy is grounded in the constitution and Bill of Rights. The document lists specific constitutional sections that AI must not be used to violate, and frames the African philosophy of “ubuntu” – with its emphasis on interdependence, community and shared responsibility – as a guiding lens for AI development.
Proposed safeguards
Among the proposed safeguards are:
- Mandatory human rights and gender impact assessments for AI systems, particularly in high-risk domains;
- Mandatory human-in-the-loop oversight for critical AI decisions; and
- Requirements for “sufficient explainability” and “sufficient transparency” in public sector and high-risk AI systems.
The policy also proposes protections for children against manipulative AI systems, including exploitative advertising and gamified features that encourage excessive screen time.
The department of communications envisions a staged implementation over three years. In the current financial year (2025/2026), the policy would be finalised, key draft regulatory requirements for “unacceptable risks” would be identified and published, and work would begin on national AI policy guidelines.
In year two (2026/2027), those guidelines would be published, regulatory requirements for high-risk use cases would be implemented and sectoral AI strategies would be developed. Full implementation is targeted for 2027/2028.
The policy framework would undergo a comprehensive review every three years or earlier if triggered by significant technological or legislative shifts.
Written submissions on the draft are due by 10 June 2026. – (c) 2026 NewsCentral Media
