
Government policy on artificial intelligence in South Africa will be finalised in the 2026/2027 financial year, with the government soon expected to start carrying out an implementation plan following a recent colloquium on developing and implementing the AI policy framework.
The draft policy will be discussed by the economic cluster ministerial council next month, before it heads to a cabinet committee. It is expected to be gazetted later in March for public comment for 60 days.
Referring to the framework, communications department deputy director-general Alfred Mmoto, who heads policy development and strategy, told parliament’s portfolio committee on communications & digital technologies that between now and June, a quarterly digital economy stakeholder forum with five workstreams will be set up.
The communications ministry will establish a priority working group comprising the department, the South African Local Government Association and the cooperative governance department to improve the implementation of by-laws and to launch a concept document for a “national evidence hub”.
From July to December next year:
- A regulators forum will be operationalised;
- The national digital skills framework will be reviewed and updated;
- The department will establish and resource an investment desk and start-up desk, launch ICT sector regulator sandboxes and develop a legislative co-ordination road map; and
- South Africa’s position in global digital forums will be established and maintained.
Stakeholders will convene for a regulatory impact assessment on quick wins and launch a process to define what meaningful access means.
South Africa started developing its policy on AI in 2020. While it has improved its consultative process, policy development has been slow compared to some of its international peers. The country currently relies on self-regulation.
On Tuesday, MPs were briefed on what the policy aims to do.
Responsible AI
Mmoto said the policy is based on 14 pillars, including education, training, industry collaboration, digital infrastructure, innovation, ethical guidelines, safety and privacy.
While the policy will be overarching, sector-specific regulations will have to be compiled, Mmoto told the committee.
On capacity and internal development, the policy will look at building the national AI skills base through education, training and industry cooperation. These will have to be enabled by robust digital infrastructure and connectivity.
Read: How AI is rewriting the rules of consulting
The policy will make proposals on responsible AI governance, which will include addressing challenges around safety, security and privacy.
“I think I can stress this a little bit… We have seen quite a lot of this misinformation and disinformation, which is perpetrated by the use of AI. So, we need to make sure that we do have some guardrails to make sure that we don’t perpetuate this and we also don’t spread, you know, these deepfakes,” Mmoto said.

“And one of the areas to highlight on this is that you’ll find that the AI systems that are not trained by the local data sets most likely have bias. So, one of the things that we advocate in the policy is whenever you’re trying to train the AI systems, you need to have data sets that are inclusive and that take all the demographics in the country [into consideration].”
The document also discusses accountability in developing AI systems and how “we can take you to task if the system that you have developed can cause harm to society”, he said.
It advocates for cultural preservation and global integration, while also strengthening international collaboration and competitiveness.
“On this one, what we’re looking at doing is to ensure that we can develop language models, especially for the languages that have got fewer speakers.
“But also, we would like to use AI to digitise some of our indigenous knowledge systems, as well as the music and art to make sure that it can be used for economic development…,” Mmoto said.
A key focus of the policy is ensuring that there is always a human at the centre of AI development.
“We need to have the human oversight, especially if we’re to look at, say, you apply for a particular service in government. You know, we can’t use AI as just a black box…,” he said.
On supporting innovation, “regulatory sandboxes” will be set up in controlled environments to test AI systems and look at associated risks.
Middle ground
Some discussions on Tuesday centred on the benefits of regulating AI.
Acting committee chair Shaik Imraan Subrathie said that in India, for example, they believe that over-regulation stifles innovation, while it’s the opposite in the EU. Last year, the Free Market Foundation warned against South Africa blindly following EU regulations on technology and AI.
Mmoto said that following a “benchmarking exercise”, the department agrees that EU regulations are concerning.
Read: African firms are all in on cloud and AI – on paper, at least
“Ours is the middle of the road in South Africa. We have to have this policy in order to make sure that we have a policy lever upon which we can stimulate economic growth, ensure that the social wellbeing, but also ensure that we position our country deliberately as a leader in innovation,” he said. — (c) 2026 NewsCentral Media
