Chris Harris, EMEA Technical Director, Data Security at Thales: “Deepfake technology has escalated fraud to new heights”

When Chris Harris started his IT career back in the 1990s, security threats came from “script kiddies” and viruses spread via floppy disk. As EMEA Technical Director, Data Security at Thales, he faces rather different security challenges.

“Fraud has always been an issue in financial services,” Chris told us, “but deepfake technology has escalated this threat to new heights and could overwhelm financial institutions.” And let’s not even talk about the rise of quantum computing and what that could do to encryption.

Fortunately, the principles of security stay the same. As new technological threats arise, we must take advantage of technological defences, but also keep employees informed and vigilant. That means guarding against phishing rather than floppies, but awareness is everything.

Here, Chris shares findings from Thales’ 2024 Data Threat Report, including some disturbing figures about compliance. Plus, how to stay abreast of emerging threats – including those created by generative AI.

Could you please introduce yourself to our audience and share how you ended up working in cybersecurity?

I’ve worked in cybersecurity for almost 30 years, originally starting small with the development of smart cards before moving on to larger and more comprehensive security solutions to protect key data and identities. My background working at a smaller firm provided me with great opportunities to gain experience and develop my passion for cybersecurity.

I have grown with the security industry throughout my career. Given the pace at which the industry has developed, my time in cybersecurity has been varied and fast-moving.

I’m really excited to be working in cybersecurity as we enter the post-quantum era. As someone who has seen the industry develop at such a rapid pace, I’m keen to see how we continue to adapt, to embrace emerging technologies, and to discover where the industry takes me over the coming years.

What are some cases of deepfakes being used that particularly concern you?

The case of the Hong Kong-based finance worker who was tricked into paying fraudsters $25 million is a huge concern. Deepfake technology was used to convincingly pose as the company’s CFO on a conference call. Fraud has always been an issue in financial services, but deepfake technology has escalated this threat to new heights and could overwhelm financial institutions – which could in turn impact the wider economy, not least by eroding customer trust.

On a wider scale, there’s the use of deepfake videos of politicians and trusted figures – which are becoming increasingly prevalent. These manipulated videos can spread misinformation, influence public opinion, and potentially sway election outcomes. For instance, deepfakes of global leaders, including the UK Prime Minister, have appeared on social media platforms, raising concerns about their impact on voters.


What do you think are the best approaches to combating deepfakes?

Combating deepfakes requires a multi-faceted approach. Firstly, advanced detection technologies are crucial. By investing in innovative tools that analyse digital content for signs of manipulation, organisations can identify and flag deepfakes before they cause harm.

Public awareness and education also play a vital role. Educating the public about the risks associated with deepfakes empowers individuals to recognise and report suspicious content.

Thirdly, collaboration between industry and government is essential. By working together, tech companies, governments, and regulatory bodies can develop standards and policies that effectively address the challenges posed by deepfakes.

Then there’s the importance of adopting a zero-trust mindset towards digital content. This approach involves scrutinising all media for authenticity, helping to distinguish between genuine and synthetic content.

As well as investing in technology to detect and flag deepfakes, it may also prove beneficial in the coming years to research technology that can provide a watermark or proof that important or high-profile videos are in fact real.

In a similar way to how documents can be digitally signed today, it would be great to see a ‘seal’ or ‘signature’ that can provide consumers with trust in the product.
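The ‘seal’ Chris describes can be illustrated with a toy example. The sketch below uses an HMAC over a SHA-256 content hash as a stand-in for a real digital signature – a production scheme would use asymmetric keys (e.g. Ed25519) so anyone can verify without holding the signing key. The key and content here are hypothetical:

```python
import hashlib
import hmac

def seal_content(data: bytes, signing_key: bytes) -> str:
    """Produce a 'seal' for a piece of media: an HMAC over its SHA-256 hash.
    A real deployment would use an asymmetric signature issued by the
    publisher, so verification needs no shared secret."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(signing_key, digest, hashlib.sha256).hexdigest()

def verify_seal(data: bytes, signing_key: bytes, seal: str) -> bool:
    """Recompute the seal and compare in constant time."""
    return hmac.compare_digest(seal_content(data, signing_key), seal)

key = b"publisher-secret-key"        # hypothetical key, for illustration only
video = b"...raw video bytes..."     # stands in for the real media file
seal = seal_content(video, key)

assert verify_seal(video, key, seal)                  # untampered content verifies
assert not verify_seal(video + b"x", key, seal)       # any modification breaks the seal
```

The point of the design is that the seal is bound to the exact bytes of the media: even a single altered frame changes the hash and invalidates the signature.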

What are the key findings from Thales’ 2024 Data Threat Report?

Ransomware attacks are more prevalent than ever. According to the report, 28% of businesses experienced an attack, up from 22% the previous year. Additionally, 93% reported an increase in attacks, with ransomware consistently cited as a major growth category.

Cloud assets remain prime targets for threat actors, including SaaS applications, cloud-based storage, and cloud infrastructure management. Human factors continue to be significant contributors to data breaches.

Despite the rise in attacks, fewer than half of enterprises across all verticals and sizes have a formal ransomware response plan, making response efforts challenging.

A concerning trend this year is that enterprises failing to meet compliance requirements are ten times more likely to experience an attack. With over 43% of companies in a compliance deficit, this issue has become alarmingly common in the industry.


What are some ransomware prevention strategies you believe every business should adopt?

Every business should adopt a multi-layered approach to ransomware prevention. First, data discovery and classification are crucial. Identifying and classifying sensitive data helps prioritise protection efforts. Implementing robust identity and access management controls, such as multi-factor authentication (MFA), ensures that only authorised users can access critical systems.
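As an illustration of the MFA controls mentioned above, here is a minimal sketch of the one-time-password algorithms behind most authenticator apps – HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238) – using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```

Because the server and the user’s device derive the same code independently from a shared secret and the clock, a stolen password alone is no longer enough to log in.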

Encryption is another key strategy. Encrypting data at rest and in transit protects it from unauthorised access, even if attackers breach the network. Additionally, regular backups and ensuring they are stored securely offline can help businesses recover quickly from an attack without paying a ransom.
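The offline-backup advice above only pays off if the backups themselves can be trusted at recovery time. A minimal sketch of backup integrity checking, assuming a hypothetical manifest of SHA-256 checksums recorded when the backup was taken:

```python
import hashlib
from pathlib import Path

def snapshot_checksums(root: Path) -> dict:
    """Record a SHA-256 checksum for every file under a backup directory."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify_backup(root: Path, manifest: dict) -> list:
    """Return the files whose contents no longer match the manifest -
    an empty list means the backup is intact and safe to restore from."""
    current = snapshot_checksums(root)
    return [name for name, digest in manifest.items() if current.get(name) != digest]
```

Storing the manifest separately from the backup (ideally offline, like the backup itself) means that even if ransomware reaches the backup volume, tampering is detectable before a restore is attempted.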

Employee training is essential to prevent phishing attacks, which are a common entry point for ransomware. Educating staff on recognising and reporting suspicious activities can significantly reduce the risk.

Finally, adopting advanced threat detection and response solutions can monitor and block unauthorised encryption attempts in real time. This proactive approach helps detect and mitigate ransomware threats before they cause considerable damage.
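One simple signal such detection tools can use is byte entropy: freshly encrypted files look statistically random, while ordinary documents do not. A minimal sketch follows – entropy alone is not sufficient in practice, and real products combine it with signals like rename patterns and write rates:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: close to 8.0 for ciphertext or compressed
    data, much lower for typical documents."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag writes whose byte distribution is near-uniform - one heuristic
    a monitor can apply to file writes in real time."""
    return shannon_entropy(data) > threshold

plain = b"The quick brown fox jumps over the lazy dog. " * 50
random_like = bytes(range(256)) * 16   # perfectly uniform bytes, like ciphertext
assert not looks_encrypted(plain)
assert looks_encrypted(random_like)
```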

By integrating these strategies, businesses can build a resilient defence against ransomware attacks and safeguard their critical data.

Digital Operational Resilience Act (DORA)

Thales’ solutions can help financial institutions and third-party ICT providers comply with DORA by simplifying compliance and automating security, reducing the burden on security and compliance teams. They help address essential cybersecurity risk-management requirements, covering ICT Risk Management and Governance, Incident Reporting, and ICT Third-Party Risk Management. Download this guide from Thales to find out more.

What is it about generative AI that makes it so prone to exploitation by threat actors? Conversely, how can it be used for good?

In some instances, generative AI’s rapid evolution and accessibility lower the barrier for cybercriminals to launch sophisticated attacks. Additionally, the vast amount of data required for training generative AI models can expose sensitive information if not properly secured.

That said, generative AI can improve enterprise security by enhancing threat detection and response. It leverages machine learning to identify patterns and anomalies, enabling quicker and more accurate identification of potential threats. AI-driven systems can automate routine security tasks, reducing the burden on human resources and allowing teams to focus on strategic initiatives. Additionally, generative AI can predict and pre-empt security risks, providing proactive protection against emerging threats. By integrating generative AI, organisations can significantly improve their security posture, ensuring robust protection for their applications, APIs, and data.
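As a loose illustration of the anomaly detection described above – real AI-driven systems learn far richer patterns – here is a minimal z-score detector over hypothetical hourly failed-login counts:

```python
from statistics import mean, stdev

def anomalies(values: list, threshold: float = 3.0) -> list:
    """Return indices whose z-score exceeds the threshold - a classical
    stand-in for the pattern-based detection described above."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# hypothetical failed-login counts per hour; the spike at index 8 stands out
logins = [3, 4, 2, 5, 3, 4, 2, 3, 250, 4, 3, 2]
assert anomalies(logins) == [8]
```

The value of automating this step is exactly the point Chris makes: routine screening is handled by the system, so analysts only look at the flagged hours.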

However, our recent Digital Trust Index found that customer fears over AI are already rife: nearly six in ten global consumers (57%) are nervous that brands’ use of AI will place their data at risk, while just under half (47%) do not trust companies to use generative AI responsibly. For any AI deployment, organisations need robust security measures and ethical guidelines to mitigate the risks and ensure the safe deployment of this technology.

Tim Danton

Tim has worked in IT publishing since the days when all PCs were beige, and is editor-in-chief of the UK's PC Pro magazine. He has been writing about hardware for TechFinitive since 2023.
