
Anyone using a generative AI chatbot to talk through a legal problem – whether a disgruntled consumer, a founder worried about a contract or an executive gaming out a dispute – should assume the resulting chat logs will be pulled into court.
Unlike conversations between a lawyer and their client, conversations with an AI chatbot are not privileged. Any sensitive information disclosed in an AI chat could be exposed during a trial, potentially weakening the client’s legal position.
“Legal professional privilege in South African law protects confidential communications between a client and a legal adviser acting in a professional capacity for the purpose of obtaining or giving legal advice, and certain communications created for litigation,” said Ahmore Burger-Smidt, director at Werksmans Attorneys. “Communications with an AI tool do not meet those criteria, and third-party disclosure typically waives privilege.”
The warning follows a February 2026 ruling by US district judge Jed Rakoff in Manhattan, who ordered Bradley Heppner, the former chairman of bankrupt financial services firm GWG Holdings, to hand over 31 documents generated using Anthropic’s Claude chatbot. Heppner, who faces securities and wire fraud charges, had used the tool to prepare case material to share with his lawyers. Rakoff found that no attorney-client relationship existed, or could exist, between an AI user and a platform such as Claude. The prosecutors got their material.
On the same day, Michigan magistrate judge Anthony Patti reached the opposite conclusion in Warner vs Gilbarco, ruling that a self-represented plaintiff’s interactions with ChatGPT were protected by law.
Consequences
Burger-Smidt said the South African position mirrors the Rakoff ruling rather than the Patti one. She warned that AI use should be “properly framed” – limited to helping lay people understand the basics of the law, and to improving legal operations inside companies. Lawyers are held to a higher standard.
“Lawyers remain responsible to the court and the client for any AI-assisted output, with potential personal consequences for negligence and sanctions if AI hallucinates authorities,” said Burger-Smidt. “As a matter of privacy, AI chats constitute personal information and should be processed in accordance with the Protection of Personal Information Act, while remaining susceptible to lawful interception and disclosure processes.”
Read: South Africa’s draft AI policy is a bureaucrat’s dream
Hallucinations are another risk of using AI for legal advice. After non-existent case law was cited in two separate South African cases, the Legal Practice Council moved in July 2025 to develop a governing framework for AI use by legal professionals.
According to Lucien Pierce, director at PPM Attorneys, AI use poses other legal risks. “The advice could be outdated, derived from the wrong jurisdiction or based on a fabricated court decision. We have seen this happening time and again, where even lawyers have fallen victim, accepting the outputs of AI tools and chatbots as the truth.

“Whether information input into an AI tool can be subpoenaed depends on who input the information and what type of AI tool was used. The riskiest tools are those that are typically free. It is important to first consider the terms and conditions of the service you use,” Pierce said.
He warned that free-to-use AI chatbots typically make it clear that data uploaded to the platform will be used to further train the underlying models. Text, photos and video uploaded by a user could end up in the public domain. The same risks apply whether the platform is used by a lay person, a corporate entity or a lawyer, he said – which is why law firms increasingly need to train employees on the appropriate use of AI tools.
AI use can create legal complications beyond privilege and discoverability. In 2022, Air Canada’s AI chatbot promised a customer a discount that was not available to them. The airline argued that the chatbot was “responsible for its own actions”, but a tribunal in 2024 ordered it to refund the customer to the tune of C$812 (R9 750 at the time of publication).
Read: AI misuse shakes South African courtrooms
“This highlights the need for organisations that use AI agents and bots to ensure that they are monitored. When major decisions, such as those that result in legally binding commitments, are made, a human should be kept in the loop,” Pierce said. – (c) 2026 NewsCentral Media
