
Executives do not need another pep talk about artificial intelligence. They need proof that their investment will pay off without exposing the organisation to risks they cannot defend.
The latest global study from SAS and International Data Corp (IDC) makes the trade-off clear. Confidence in generative AI is rising, yet only about 40% of organisations are investing to make systems demonstrably trustworthy through governance, explainability and ethical safeguards.
Those that do are about 60% more likely to double the return on investment of their AI projects. Trust, in other words, is the first step to returns: if we want ROI, we must earn it.
The confidence gap
Global evidence shows that generative AI pays when it is integrated with discipline across operations. A report describes substantial ROI where organisations embed the technology into core workflows and measure what matters. The message is that value follows managed adoption, not ad-hoc experiments.
In South Africa, the pace of adoption is running ahead of policy. Corporate leaders describe generative AI as the fastest-moving digital trend in the enterprise, even as control frameworks lag. Local reporting also points to rising shadow AI use as teams experiment without formal guidance. These conditions increase the risk that projects stall before production, or trigger governance concerns that slow scale-up.
The SAS and IDC findings suggest that businesses must treat trust signals as leading indicators of ROI. When leaders can show how a system is governed and explained, adoption increases, escalations fall and funding follows evidence rather than enthusiasm.
What trustworthy generative AI looks like in practice
- Guardrails where users meet the model. Define what the model may do, how it grounds responses, what it logs and how unsafe outputs are blocked. For higher-impact actions, keep a person in the loop. This turns explainability into something visible to users and auditors, not just engineers.
- Provenance and documentation. Track data sources, record fine-tuning choices and publish model notes that set out intended use, evaluation results and known failure modes. This is how leaders answer the board’s question, “why did the system say that?”, with evidence.
- Monitoring you can brief the board on. Run live evaluations for quality, bias, drift and abuse. Log the rationale for allow or block decisions. Rehearse rollback. If a programme cannot fail safely, it cannot scale. (A minimal sketch of this pattern follows this list.)
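To make the guardrail and monitoring points concrete, here is a minimal, illustrative Python sketch of the pattern: ground the answer in approved sources, log the allow/block rationale, and route higher-impact actions to a human reviewer. The function and policy names (retrieve_approved_sources, call_model, requires_human_review and so on) are hypothetical placeholders, not SAS product APIs.

```python
# Illustrative guardrail wrapper (hypothetical names; not a SAS or vendor API).
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.guardrails")

BLOCKED_TOPICS = {"client_pii"}  # example policy scope, defined by the business

def retrieve_approved_sources(question: str) -> list[dict]:
    """Placeholder: fetch passages from the approved, governed knowledge base."""
    return [{"id": "policy-001", "text": "Approved reference passage."}]

def call_model(question: str, sources: list[dict]) -> str:
    """Placeholder: call the generative model with grounded context only."""
    return f"Answer based on {len(sources)} approved source(s)."

def classify_topic(question: str) -> str:
    """Placeholder: route the question to a policy topic for guardrail checks."""
    return "general"

def requires_human_review(topic: str) -> bool:
    """Keep a person in the loop for higher-impact actions."""
    return topic in {"credit_decisions", "hr_actions"}

def answer_with_guardrails(question: str) -> dict:
    topic = classify_topic(question)
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "question": question, "topic": topic}

    if topic in BLOCKED_TOPICS:
        record.update(decision="block", rationale="topic outside approved scope")
    elif requires_human_review(topic):
        record.update(decision="escalate",
                      rationale="higher-impact action; human sign-off required")
    else:
        sources = retrieve_approved_sources(question)
        record.update(decision="allow",
                      rationale="grounded in approved sources",
                      citations=[s["id"] for s in sources],
                      answer=call_model(question, sources))

    log.info(json.dumps(record))  # the audit trail reviewers and boards can inspect
    return record

if __name__ == "__main__":
    print(answer_with_guardrails("What is our travel expense policy?")["decision"])
```

The point of the sketch is not the code itself but the evidence it leaves behind: every response carries a decision, a rationale and its citations, which is exactly what auditors and boards ask to see.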
Business leaders need to understand that guardrails reduce complaint rates and compliance exceptions, provenance lowers audit time and monitoring cuts remediation costs. These are the budget-line items that make ROI visible.
Local teams are experimenting fast, which is positive. The risk is that policy, ownership and measurement do not keep up. The path forward is to publish the trust contract before scaling. This must include what data is allowed, what will be grounded, who can override, what will be logged and how decisions will be reviewed.
90 days to trustworthy generative AI
In the first 10 days, frame the project and write the trust contract. Choose one or two high-velocity use cases with clear ownership, establish the KPIs you want to improve and close the obvious data gaps. Also, define the trust evidence you will require for every output (citations, scores, rationale), the harms you are guarding against, who can override and what will be logged.
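Purely as an illustration, the trust contract can be captured as a small, reviewable artefact rather than a slide. The fields below mirror the points above (allowed data, grounding, required evidence, harms, overrides, logging); the structure and names are assumptions, not a prescribed format.

```python
# Illustrative "trust contract" captured as a reviewable artefact (assumed structure).
from dataclasses import dataclass, asdict
import json

@dataclass
class TrustContract:
    use_case: str
    allowed_data: list[str]           # what data the system may use
    grounding_sources: list[str]      # what responses will be grounded in
    required_evidence: list[str]      # e.g. citations, scores, rationale
    guarded_harms: list[str]          # harms explicitly guarded against
    override_roles: list[str]         # who can override the system
    logged_fields: list[str]          # what will be logged for review
    review_cadence: str = "weekly"    # how decisions will be reviewed

contract = TrustContract(
    use_case="customer service knowledge assistant",
    allowed_data=["public product documentation", "approved policy documents"],
    grounding_sources=["internal knowledge base"],
    required_evidence=["citations", "relevance score", "decision rationale"],
    guarded_harms=["hallucinated policy advice", "exposure of personal data"],
    override_roles=["service team lead", "compliance officer"],
    logged_fields=["prompt", "sources used", "allow/block decision", "reviewer"],
)

# Publish the contract so reviewers and auditors can see it before scale-up.
print(json.dumps(asdict(contract), indent=2))
```

Writing the contract down in a form like this makes ownership and review obligations explicit before any model is built.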
Over the next month, build the smallest solution that can prove value with guardrails. Ground responses in approved sources, enable prompt logging and safety filters, and run red-team tests against real edge cases. Agree upfront what “good enough to go live” means for both ROI and trust thresholds.
By the midpoint, move the solution onto the production path and open it to scrutiny. Switch on monitoring for drift and abuse, publish the model documentation, complete data-lineage records and rehearse rollback. If reviewers cannot trace an answer to sources and policy, you are not ready.
In the final stretch, release in a controlled way and prove it. Start with a small segment. Report weekly on the business KPI and the trust KPIs side by side (e.g., explainability coverage, proportion of grounded responses, escalation handling times). Iterate visibly. Scale only when the decision owner signs off on both.
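One way to make the side-by-side report concrete, again as a sketch only: compute the trust KPIs directly from the guardrail logs and print them next to the business KPI each week. The log fields and KPI names here are assumptions carried over from the earlier sketch, not a reporting standard.

```python
# Illustrative weekly report: business KPI next to trust KPIs (assumed log fields).
interaction_log = [
    {"grounded": True,  "citations": ["policy-001"], "escalated": False, "handling_minutes": 4},
    {"grounded": True,  "citations": [],             "escalated": True,  "handling_minutes": 22},
    {"grounded": False, "citations": [],             "escalated": False, "handling_minutes": 6},
]

total = len(interaction_log)
grounded_share = sum(r["grounded"] for r in interaction_log) / total
explainability_coverage = sum(bool(r["citations"]) for r in interaction_log) / total
escalations = [r for r in interaction_log if r["escalated"]]
avg_escalation_minutes = (
    sum(r["handling_minutes"] for r in escalations) / len(escalations) if escalations else 0.0
)

# Business KPI (here, average handling time) reported alongside the trust KPIs.
avg_handling_minutes = sum(r["handling_minutes"] for r in interaction_log) / total

print(f"Avg handling time (business KPI): {avg_handling_minutes:.1f} min")
print(f"Grounded responses:               {grounded_share:.0%}")
print(f"Explainability coverage:          {explainability_coverage:.0%}")
print(f"Avg escalation handling time:     {avg_escalation_minutes:.1f} min")
```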
Where SAS fits in
SAS is focusing its work with customers on the governance and measurement that make GenAI investable. That includes advisory on policy and ownership, platform features for explainability, bias checks, lineage and monitoring, and a product mindset that treats AI as a service with commitments, not as a set of demos.
For leaders who want a structured path, start with SAS and IDC’s trust findings and the AI Blueprint playbook.
There is also growing interest in agentic AI and more autonomous systems. The promise is speed and scale. The condition is stronger governance. Before autonomy, organisations need clear rules of engagement, continuous oversight and transparent fallbacks. The companies that do this work now will be ready when autonomy moves from pilot to production.
The takeaway for local boards
Trust is the gateway to ROI. To get there, companies must govern first, measure early, ship safely and then scale. Where South African teams do this, adoption rises and budgets follow evidence. Where they do not, projects drift into shadow use or stall before production.
The data is clear: if you want the returns everyone talks about, fund the trust that makes those returns possible.
- The author, Joy Naidoo, is the head of professional services Africa, EMEA consulting, SAS
- Read more articles by SAS on TechCentral
- This promoted content was paid for by the party concerned
