Cracking the compliance code: navigating AI implementation 


This article is part of our Opinions section.


As AI continues to dominate headlines and businesses increasingly recognise its value, it is becoming clear that artificial intelligence will be a major component of tomorrow’s technology landscape. The International Trade Administration (ITA) predicts that the UK AI market alone will be valued at over $1 trillion by 2035.

While the technology continues to develop and grow, the associated challenges do too: concerns around algorithmic bias, data privacy, decision accountability and job displacement are only becoming more prevalent.

Alarmingly, only 9% of companies surveyed by Riskconnect are prepared to address these challenges thoughtfully – a worrying figure set against headlines like this one from Forbes (“Goldman Sachs Predicts 300 Million Jobs Will Be Lost Or Degraded By Artificial Intelligence”).

AI adoption isn’t simply a technical box to be ticked; it’s a financial, strategic and ethical decision that needs to be carefully planned for. The question companies should be asking isn’t just “how do we build the tool?” but “how do we shape a holistic AI strategy?”

Back to basics 

If you start running before you tie your shoes, you will trip and fall. Similarly, if you start building before establishing a clear set of ethical guidelines that align with the values of the organisation, you’re setting yourself up to fail. 

Start by working with business leaders, stakeholders and employees to define the core ethical principles that will guide the development and use of AI across the workforce – typically fairness, accountability, privacy, safety and transparency. Just as with cloud strategy, time spent planning for proper implementation can prevent major headaches (and potentially costly fines) down the road.

Once the fundamental principles have been established, cross-functional teams from across the business can leverage their strengths to implement the guidelines. For instance, the data science team can provide technical insights, the legal team can ensure regulatory compliance and the HR team can address internal implications and any training needs.

Through an established policy, businesses can set out rules and responsibilities for employees across the workforce and reduce the likelihood of inappropriate AI use.  

However, given the rapid evolution of AI technology, it’s crucial to regularly review and update the established guidelines. This is particularly important when considering the evolving regulatory landscape, where compliance requirements can change so quickly. 

Laying the foundations

When addressing the ethical and compliance challenges of AI/ML implementation, it’s crucial to tackle the structural issues inherent in AI models – the “black box” problem in particular. The term describes deep neural networks whose inner workings remain opaque to human comprehension, leaving users unable to understand how the model reaches its conclusions.

With this in mind, consider implementing methods and tools that make AI/ML models more interpretable, such as Local Interpretable Model-Agnostic Explanations (LIME). LIME is a versatile technique that can be applied to any ML model: it doesn’t require access to the model’s inner workings and provides specific explanations for individual predictions. It’s also user-friendly, allowing non-experts to gain insights into a model’s predictions.
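
To make this concrete, here is a minimal sketch of how LIME might be applied to a generic tabular classifier. It is purely illustrative: the dataset and model are placeholders rather than a recommendation, and it assumes the open-source lime and scikit-learn Python packages are installed.

```python
# Illustrative sketch: explaining a single prediction from a tabular classifier
# with LIME. The dataset and model are placeholders, not a specific production system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# LIME treats the model as a black box: it only needs a prediction function.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: which features pushed the model towards its answer?
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short list of features and weights showing which inputs pushed this one prediction up or down – exactly the kind of local, human-readable explanation described above.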

Similarly, SHAP (SHapley Additive exPlanations) offers another model-agnostic method which can be used to explain individual predictions from AI models. It provides global insights into model behaviour across all predictions and local explanations for individual predictions, boosting interpretability at both levels. This naturally increases trust amongst users and stakeholders across the business.
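
For comparison, a SHAP sketch on the same kind of placeholder model might look like the following. Again, the data and model are illustrative, the shap package is assumed to be installed, and exact API details can vary between versions.

```python
# Illustrative sketch: local and global SHAP explanations for a placeholder model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

def predict_positive(X):
    """Probability of the positive class - the quantity we want explained."""
    return model.predict_proba(X)[:, 1]

# SHAP only needs a prediction function and background data, so it stays model-agnostic.
explainer = shap.Explainer(predict_positive, X_train[:100], feature_names=data.feature_names)
shap_values = explainer(X_test[:10])

# Local explanation: per-feature contributions for a single prediction.
print(shap_values[0].values)

# Global explanation: mean absolute contribution of each feature across the sample.
shap.plots.bar(shap_values)
```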

In addition to adopting interpretability techniques, businesses must take accountability by conducting thorough audits. Before deployment, an audit should assess the model for bias and other ethical concerns. Once deployed, continuous monitoring is essential to detect and address emerging issues before they develop into more serious problems.
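
What a pre-deployment bias audit looks like will depend on the organisation and the regulation in question, but one common starting point is comparing the model’s positive-prediction rates across groups. The sketch below is a hypothetical illustration: the column names and the 10% tolerance are assumptions, not an established standard.

```python
# Illustrative pre-deployment bias check: compare positive prediction rates across
# groups (a demographic-parity-style gap). Column names and threshold are hypothetical.
import pandas as pd

def selection_rate_gap(predictions: pd.Series, groups: pd.Series) -> float:
    """Return the gap between the highest and lowest positive-prediction rates."""
    rates = predictions.groupby(groups).mean()
    print("Positive prediction rate by group:")
    print(rates)
    return float(rates.max() - rates.min())

# Hypothetical audit data: binary model decisions plus a protected attribute.
audit = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 0, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
})

gap = selection_rate_gap(audit["approved"], audit["group"])
if gap > 0.1:  # illustrative tolerance; set by your own fairness policy
    print(f"Warning: selection-rate gap of {gap:.2f} exceeds tolerance - review before deployment.")
```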

Beyond the tech

Ensuring transparency in AI adoption extends far beyond the technology itself. Businesses must foster a culture that not only tolerates but actively encourages the use of AI in the workplace, through strategic initiatives, cultural shifts and practical implementation.

All employees should be equipped with the latest tools and resources to support AI transparency and compliance, so that they can use and implement transparent, fair AI/ML systems effectively. The HR team should be on hand to offer regular, up-to-date training on how to use these tools ethically and in line with compliance requirements.

Finally, it’s worth creating a well-thought-out plan to quickly and effectively respond to any AI-related ethical or compliance issues that arise. This could include establishing a point of contact for any ethical concerns or compliance breaches, developing internal and external communication strategies, defining corrective actions, and conducting additional training and/or awareness sessions as needed.

As we continue to forge ahead into the AI-driven era, businesses must strive to strike the balance between innovation, accountability and compliance. In doing so, organisations will harness the transformative potential that AI offers whilst safeguarding their employees and revenue against risks. This approach will shape a future that is both technologically progressive and ethically sound.

Matan Bordo

Matan Bordo got his start working for a VC fund and has since become a Product Marketing Manager at DoiT. He has contributed to TechFinitive under our Opinions section.
