4 March 2026

Polished Complaints, Cloudy Truths: AI-generated Grievances In The Workplace

ENS

Contributor

ENS is an independent law firm with over 200 years of experience. The firm has over 600 practitioners in 14 offices on the continent, in Ghana, Mauritius, Namibia, Rwanda, South Africa, Tanzania and Uganda.

Modern Human Resources (“HR”) teams are witnessing a new phenomenon: employee grievances drafted by artificial intelligence. Tools like ChatGPT enable workers to produce complaint letters that read as if crafted by a legal professional, complete with formal language and legal references, even when the employee's actual grasp of those laws or underlying facts is shaky. These grievances are often eloquently written but may misstate legal principles or misrepresent facts, creating a unique challenge for HR and legal departments.

AI enters the grievance process: A growing trend

Picture this: an HR manager opens a grievance letter and finds five pages of flawlessly written text, replete with legal phrases and far more elaborate than anything the employee has written before. When the manager calls the employee in to discuss it, they seem hesitant to elaborate and insist on handling everything in writing, raising suspicion that the letter may have been machine-written.

This scenario is no longer hypothetical. In the UK, employment lawyers reported an “explosion” of AI-generated grievances in 2024–2025. In one case, a grievance letter cited nine legal cases, but only two were real; the rest were fictitious citations fabricated by AI. South African employment lawyers are increasingly encountering these cases too.

Why are employees turning to AI? It gives them clarity and confidence, transforming emotional, disjointed messages into coherent narratives. Employees who feel intimidated by formal writing now have a tool to articulate issues in something close to legalese. AI can also infuse complaints with authoritative language, making employees feel their concerns will be taken more seriously.

But these benefits come with significant downsides. AI models do not actually know the truth; they generate plausible-sounding text that sometimes includes false “facts” or incorrect legal statements. An employee may mention “my manager yelled at me once,” and the AI might generate: “I am writing to report sustained harassment and a toxic work environment, which constitutes constructive dismissal.” Such phrasing might not reflect reality, and the employee may not realise the legal weight of the terms the AI chose.

Legal implications: Whistleblowing, good faith, and the PDA

When a grievance alleges serious wrongdoing (corruption, illegality, health and safety dangers, discrimination), it could qualify as a “protected disclosure” under South Africa's Protected Disclosures Act, 2000 (“PDA”). The PDA protects whistleblowers from retaliation when they report wrongdoing in good faith.

However, the PDA sets a clear threshold: the employee must make the disclosure in good faith and hold a genuine belief that their employer engaged in wrongful conduct. The law does not protect employees who deliberately make false allegations or act maliciously.

This raises a critical question: Can a complaint drafted by AI meet the “genuine belief” test? An AI has no beliefs; it just generates text. So the focus returns to the employee's state of mind. If an AI-written letter labels actions as “fraud” but the employee does not actually believe those allegations, the disclosure likely fails the good-faith requirement. In a worst-case scenario, an employee could face disciplinary consequences for relying on AI-exaggerated claims.

On the other hand, employees can use AI to draft a genuine whistleblowing report and still be protected, provided they believe in the truth of what the AI wrote. HR should ensure employees confirm they stand by the grievance's content, perhaps by requiring a signature or affirmation of truthfulness.

How HR should respond: Practical steps

Dealing with an AI-generated grievance requires a balance of open-mindedness and diligence. Here are practical recommendations for HR practitioners:

  1. Treat it like any other grievance: Take the complaint seriously. Do not dismiss a letter just because you suspect AI involvement. Follow your normal grievance procedure: the employee's core issue may be very real even if AI-embellished.
  2. Meet with the employee: An in-person meeting is crucial. Ask the employee to describe events in their own words to get past any confusing AI verbiage and grasp the real problem. This also allows the employee to confirm they stand by the allegations.
  3. Focus on the facts: During your investigation, strip away extraneous flourishes and get to the heart of the matter. If the letter cites a violation of the Employment Equity Act, verify what actual incident the employee is referring to. By anchoring the discussion in facts, you can assess the claim accurately regardless of how it was written.
  4. Consult legal if needed, but do not overreact: Have your employee relations (“ER”) or legal advisors review grievances containing legal language or threats of action. However, avoid treating every AI-laden grievance as a lawsuit in waiting; content generated by AI may sound more dire than the reality.
  5. Update policies: Review your grievance policy in light of AI. Consider stating that while employees may use tools to help draft complaints, they will not be excused for false information and may be asked to confirm the contents are truthful. Some organisations now ask employees to disclose AI assistance or sign a declaration.
  6. Train your team: Ensure HR staff and line managers understand AI's capabilities and pitfalls. They should be able to recognise potential AI-generated text and respond appropriately, not to catch employees out but to manage each case properly.
  7. Maintain a human-centric approach: Emphasise the human element in resolving workplace issues. “The grievance process is, at its core, a human one… While AI may help articulate concerns, it cannot, and should not, replace genuine dialogue, empathy, and emotional intelligence.” In every grievance, whether AI-involved or not, the aim is to understand the employee's perspective and work toward a fair resolution.

Bigger picture: AI and the future of employee relations

The rise of AI-generated grievances is part of a broader trend. We are likely to see more employees using AI for all sorts of correspondence. This could lead to an uptick in written complaints because it is now so easy to generate lengthy, well-articulated documents.

On the flip side, if AI helps employees articulate concerns they previously hesitated to raise, employers gain more insight into workplace problems. It is better to know about discontent than to have silent, festering issues. HR practitioners should see AI grievances as signals of issues employees want addressed.

Some adjustments will be necessary. Workplace dispute resolution may involve more written documentation and require sharper analytical skills to sift truth from AI-added fiction.

For now, the best approach is cautious adaptability. Embrace tools that help employees speak up but educate them on proper use. In South Africa's legal context, the substance of a complaint is far more important than its style-polished language will not rescue a baseless claim. Continue fostering a culture where employees feel they can raise issues informally and honestly, without needing technology to “package” their grievances.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

