9 July 2025

Balancing Risk And Reality: Using AI At Work

Lewis Silkin


If you thought AI in the workplace was a problem for tomorrow, think again. At Lewis Silkin's recent Managing an International Workforce conference, Bryony Long, Alexander Milner-Smith, Alex Bazin (Lewis Silkin), and Inger Verhelst (Claeys & Engels) tackled the big question: how do you balance the risks and realities of AI at work?

Why Good AI Governance is Important

Like many things AI-related, AI governance has become a buzzword in recent years. But good governance underpins both legal and regulatory compliance – as the panel explained, you can't pull compliance out of thin air. With the EU AI Act now beginning to take effect – notwithstanding reports of potential delays – and other existing legislation requiring careful consideration of AI, it's fundamentally important to have a governance framework that helps rather than hinders your organisation's compliant adoption of AI.

But good governance isn't (just!) for the protection of the business. Governance, together with AI literacy, helps build trust with employees so that they use the tools safely. AI isn't an instant route to efficiency – the gains come only after trial and error, and governance can ensure that this experimentation happens safely. Employees should be encouraged to use the tools, but within the necessary guardrails. For example, explaining to employees why they can't put confidential information into the free version of an online AI tool helps them understand not only what they should not be doing, but also why – and how they can do the same thing with licensed products. Building trust and a collaborative culture helps ensure safe use, and that both businesses and their employees get the most out of the technology.

Equally, a well-governed AI environment can offer a competitive advantage. Unlike other tech trends, AI is becoming commonplace in people's everyday lives. Enabling and encouraging its use allows employees to spend less time on menial tasks and to focus on the more meaningful parts of their role that they enjoy.

Shadow AI: The Wild West of the Workplace

Without effective governance and a way for employees to use AI safely, there is a significant risk of the workforce turning to 'shadow AI' – i.e. unauthorised or unmonitored use of AI by employees. This isn't a new concept – IT professionals have been grappling with unauthorised tech for years – but because AI tools can ingest huge amounts of data, shadow AI presents a number of distinct risks.

For example, if the data being input into an AI tool is personal data, this could be a breach of data protection laws, and if someone uses the outputs of a tool without the appropriate licences, this could infringe IP law. As mentioned, trust in AI use is important, but it's a two-way street, and employees using shadow AI could unknowingly be putting themselves at disciplinary risk.

There is no perfect solution that eliminates shadow AI entirely, but governance is a key mitigation tool. For example, employees may not understand the difference between the free and enterprise versions of the same AI tool, or the implications of using a tool on a personal device for a work question. You can of course block certain websites (and monitoring does have a valid place), but AI literacy can also build a broader understanding of AI opportunity and risk. Equally, policies are important to guide people on which tools they can use, how they can use them, and the rules governing that use. These measures won't turn a workforce into AI experts, but they will give people enough to understand what they can and cannot do, and why.

Jurisdictional Jigsaw

All jurisdictions are taking different approaches to regulating AI. Many countries are increasingly innovation-focused, while others are implementing prescriptive rules. However, the core principles and risks remain the same: security is security, transparency is transparency, and so on. There may be local nuances, of course, but a global organisation cannot adopt 20 different governance structures. It can therefore be effective to adopt a principles-focused, jurisdiction-agnostic approach (with escalations as necessary for those nuances).

That said, it is worth remembering that regulatory interpretation of risk can vary. Take DeepSeek, for example. Some regulators banned the AI Assistant incredibly quickly, before assessing it fully, whereas others issued warnings and communications on risks or commenced investigations first (for more information see our previous article).

In any event, while regulatory responses may differ, the risks of shadow AI are universal, and jurisdiction-agnostic governance can be an effective way to manage them.

Final Thoughts

AI isn't going away, and neither are the risks – it is something to be addressed here and now. Whether it's ensuring compliance with the EU AI Act, managing the risks of shadow AI, or navigating a patchwork of global regulatory responses, effective governance is pivotal.

It is, however, important to remember that it isn't just about risk mitigation. It's also about enabling responsible innovation. With the right governance, policies, and employee engagement, AI can enhance productivity, support ethical decision-making, and even strengthen your brand.

It's time to take stock and be honest about where your AI governance stands today – and think ahead to where it needs to be tomorrow.

Remember, the best AI strategies are not just built on rules, but on trust, transparency, and collaboration.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
