Turning technology against criminals

Speech by Megan Butler, Executive Director of Supervision - Investment, Wholesale and Specialists at the FCA, delivered at the Anti-Money Laundering TechSprint, London

Speaker: Megan Butler, Executive Director of Supervision - Investment, Wholesale and Specialists
Event: Anti-Money Laundering TechSprint, London
Delivered: 22 May 2018
Note: this is the speech as drafted and may differ from the delivered version

Highlights:

  • Data and technology can help detect and disrupt criminal activity. 
  • Staff in UK banks and other firms are playing a frontline role in combating financial crime.
  • Phishing and identity theft are now cited by firms as the most widespread fraud risks they face.
  • The next big step is to apply intelligent technologies like AI, robotics, natural language processing and machine learning.

It’s a pleasure to join you and to welcome you to our fifth TechSprint. Let me start with a few words of appreciation. I know that many of you have given up a significant amount of time to attend this conference. A point that is particularly true for those who’ve travelled from outside the UK. So thank you all – in advance – for your contributions.

As will become clear over the next few days, the FCA is supportive of firms using technology to improve compliance. But we also recognise that there is reticence around innovation. Especially when it’s judged to add regulatory risk.

Our hope is that this event will alleviate some of those fears, because technology is increasingly essential to combating financial crime. And to remind people of just how important an issue this is, you only need to look at what has been covered in the UK press over the past few days.

The FCA itself, as you’d expect, does not tolerate facilitation of criminal activity by the financial services we regulate. Nor do we shy away from regulatory and criminal enforcement. But in common with firms, we do have a public duty to explore all opportunities to combat crime. Money laundering and fraud included.

Regulators have used traditional supervisory tools for many years. We now need to think differently.

One way of doing that is events like this that help us turn technology against criminals. Another is to recognise the value of data. My plan this morning is to look at both points. But let me start with data because it is integral to understanding the nature of the challenge we deal with.

Data on UK financial crime

The problem policy makers face is essentially this: criminals don’t advertise their success, and the nature of money laundering, in particular, makes it extremely hard to measure, resulting in an intelligence gap. To help address this issue, the FCA launched an annual financial crime data return at the end of 2016, sending it to several thousand UK-based firms, including all the major banks and life insurers.

The responses gave us a number of different insights. But a general theme is that staff in UK financial services are playing a frontline role in combating crime. According to the returns, employees at all levels raised more than 920,000 internal suspicious activity reports to their money laundering reporting officers.

Firms also sent 2,117 terrorism-related reports to the National Crime Agency. And a total of 13,000 restraint orders were in place to freeze customer accounts, of which 3,600 had been made during the previous year. In addition, more than 1.1m prospective customers were refused services amid financial crime concerns. And a further 370,000 existing customer relationships were exited for the same reason.

Some caveats are in order. Smaller firms were not required to respond, so the figures do not cover all businesses the FCA supervises. Also, many international businesses in the UK are structured so their overseas operations are regulated abroad – meaning they do not feature.

That said, around 2,100 firms responded to the survey up to 31 December 2017, meaning we can read the responses as representing a strong collective view, with each firm submitting a year’s worth of data. We expect to publish more details around these results later. But there are two points to make now.

First, it is important to recognise just how much of the direct threat to customers from financial crime has moved online. Phishing and identity theft were cited by firms as the most widespread fraud risks they now face.

The most rapidly increasing threats – with the exception of malware attacks – were phishing or variants of it, reflecting the fact that cybercrime now accounts for nearly 50% of all recorded crime in the UK.

The second point to make is that banks undertake a lot of activity to combat financial crime. An issue I raise because we need a balanced public debate on this topic. No-one is pretending that financial services aren’t abused by criminals. They emphatically are.

But the data suggests that most financial institutions are not complacent about the risk.

So the primary question is around efficiency. How can firms get better at detecting criminals and disrupting malignant activities like people trafficking, narcotics and fraud?

Technology improving detection

And this brings me to the topic of how regulators can make firms more comfortable about using technology to deal with financial crime. A few issues to flag. First, financial institutions are judged by the public on their ability to combat crime. Not their spend doing it.

In other words, there’s no expectation on firms to spend money just to ‘show willing’ or as a way of ‘virtue signalling’. A point that is especially important given that British banks alone spend some £5bn a year combating financial crime. One billion more than the UK spends on prisons.

Please do be aware though that the international priority is overwhelmingly on efficacy. Not overheads. Cheap technologies that aren’t tested and don’t work properly are not acceptable alternatives to old, expensive tech systems that do. And this leads me to a second important issue about what is sometimes described as a ‘zero failure’ approach among regulators.

I can only repeat the fact that we are primarily interested in outcomes and that we operate in the real world.

So if there are methods, innovations, or technologies that help you combat crime, tell regulators about them – and do not be afraid to move first. Excessive risk aversion is not going to help us win an arms race that is so heavily rooted in automation. We need to turn technology against criminals. 

Our own analysis suggests that transaction monitoring is the area with the most potential. But it is by no means the only one. Onboarding, maintenance, client screening and reporting – among other areas – are frequently cited.

At the moment, we see a lot of firms using compliance technology to automate existing processes so they keep on top of volume. Particularly reducing false positives in transaction monitoring. 

But a lot of banks still employ thousands of investigators to manually review high-risk transactions and accounts. ‘Checkers checking the checkers’ – as the saying goes.
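To make the current state of automation concrete, here is a minimal sketch of the kind of rule-based transaction monitoring described above, with a simple scoring step to suppress likely false positives before a case reaches a human reviewer. The thresholds, country codes and scoring rule are entirely hypothetical, for illustration only – real monitoring systems are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

# Hypothetical parameters, purely for illustration.
HIGH_RISK_COUNTRIES = {"XX", "YY"}
AMOUNT_THRESHOLD = 10_000.0

def raise_alert(tx: Transaction) -> bool:
    """Naive rule: flag large transfers or high-risk destinations."""
    return tx.amount >= AMOUNT_THRESHOLD or tx.country in HIGH_RISK_COUNTRIES

def score_alert(tx: Transaction, typical_amount: float) -> float:
    """Crude risk score used to suppress likely false positives:
    how far the amount deviates from the account's typical behaviour,
    capped and normalised to the range [0, 1]."""
    if typical_amount <= 0:
        return 1.0
    return min(tx.amount / typical_amount, 10.0) / 10.0

def triage(tx: Transaction, typical_amount: float, cutoff: float = 0.3) -> bool:
    """Only alerts scoring above the cutoff go to a human reviewer;
    the rest are suppressed as probable false positives."""
    return raise_alert(tx) and score_alert(tx, typical_amount) >= cutoff
```

A transfer just over the threshold but close to the account’s normal pattern is suppressed, while one many times the usual amount is escalated – which is exactly the volume-management role the paragraph above describes.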

The next big step is to apply intelligent technologies - like AI, robotics, natural language processing and machine learning - so that firms can spot suspicious transactions in real time from unstructured account and transaction data.
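The speech does not prescribe a particular technique, but as a minimal sketch of what “real time” detection means in practice, here is a simple statistical stand-in for the machine-learning models described: a per-account running profile, maintained incrementally with Welford’s online algorithm, that flags transactions far outside an account’s own history as each one arrives.

```python
import math

class StreamingAnomalyScorer:
    """Sketch of real-time scoring: keep a running mean and variance
    per account and flag transactions that deviate strongly from that
    account's own history. A stand-in for a trained ML model."""

    def __init__(self, z_cutoff: float = 3.0):
        self.z_cutoff = z_cutoff
        self.stats = {}  # account -> (count, mean, M2)

    def observe(self, account: str, amount: float) -> bool:
        """Score the transaction against history, then update the profile.
        Returns True if the transaction looks suspicious."""
        n, mean, m2 = self.stats.get(account, (0, 0.0, 0.0))
        suspicious = False
        if n >= 5:  # only score once there is some history
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(amount - mean) / std > self.z_cutoff:
                suspicious = True
        # Welford's incremental update of mean and variance.
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[account] = (n, mean, m2)
        return suspicious
```

After a handful of routine payments, an amount wildly out of line with the account’s profile is flagged on arrival, with no batch re-processing – the property that distinguishes this approach from overnight rule runs.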

That is obviously not simple – for a number of reasons.

First, there are practical obstacles. A lot of large firms, for example, need to integrate legacy systems – in some cases dating back to the seventies. Others face significant clean-up exercises to address issues around fragmented, poor-quality data.

Second, there are important questions around issues including bias and transparency. Proprietary algorithms, for instance, can become black boxes, meaning developers themselves don’t necessarily know why a machine is making the recommendations it’s making.

In fact, last year the Financial Stability Board made a point of saying that communication mechanisms used by machine learning may be ‘incomprehensible to humans’.

So under these circumstances, what information and detail is it reasonable for regulators to request? And what leeway is there for firms if something goes wrong?

The question we get asked by financial institutions is essentially this: ‘Will you let me off the hook if we introduce new tech?’ The answer to that is no.

But that is not to say that new technology can’t significantly reduce your risk exposure, if you implement it the way you would any other system – with proper testing, governance and management.

Don't be afraid to use technology

A number of organisations in this room have already started doing this extremely well. Among them colleagues at the Australian Transaction Reports and Analysis Centre.

But your input is essential if we want these success stories to become more widespread.

So by way of conclusion, please do not be afraid – wherever you are based – of working with your regulators on systems that you feel might offer genuine benefit. I am conscious that we hear messages so often about public duty that they can sometimes lose their impact.

But events like this can genuinely make an extraordinary social difference – in ways that most of us cannot even begin to comprehend.