Contributed by Mphasis
Written by Eric Winston, Executive Vice President, General Counsel and Chief Ethics & Compliance Officer, Mphasis

AI (Artificial Intelligence) has come a long way from being a marginal, experimental tool in enterprise IT to what it is now: a powerful, central technology. Businesses across domains are deploying it to enhance core functions. According to a recent report by the International Data Corp., AI is expected to drive global revenues from around USD 8 billion in 2016 to more than USD 47 billion in 2020, across a wide range of industries.

Companies in the financial services industry are also growing increasingly aware of AI’s distinct capabilities. In its 2017 Digital IQ Survey, PwC found that about half (52%) of financial services firms were already making ‘substantial investments’ in AI, while 66% expected to do so within the next three years. The survey also found that nearly three out of four (72%) business decision makers believed that ‘AI will be the business advantage of the future.’

Companies in the financial services industry are now starting to recognize the exponential and transformational power that AI can bring to bear on the compliance function. This is due to several factors.

With data growing at exponential rates, all organizations, especially those handling voluminous amounts of data, such as banks, are struggling to keep that data safe. They address this challenge by building ever more complex compliance rules. AI, along with Machine Learning (ML) and Natural Language Processing (NLP) technologies, promises such companies a powerful tool for meeting these complex regulatory requirements. Once programmed with a firm’s specific needs, AI has the unique capacity to sift quickly through reams of documents, whether emails or financial transactions, flagging those that require closer attention and ‘containing’ those that demonstrate dubious or malicious intent.
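As a rough illustration of this kind of document triage, rather than a description of any firm’s actual system, the sketch below scores incoming messages with a simple text classifier and flags those that cross a review threshold for a human compliance analyst. The training phrases, model choice, and threshold are illustrative assumptions only.

```python
# Minimal sketch of document triage: score each message and flag those that
# warrant closer review by a compliance analyst. Training phrases, threshold,
# and labels are illustrative assumptions, not any vendor's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: 1 = needs review, 0 = routine.
train_texts = [
    "please wire the funds to the offshore account before the audit",
    "delete this email after reading, keep it off the record",
    "quarterly compliance report attached for your review",
    "meeting moved to 3pm, see updated calendar invite",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression()
model.fit(vectorizer.fit_transform(train_texts), train_labels)

REVIEW_THRESHOLD = 0.5  # illustrative cut-off for routing an item to a human reviewer

def flag_for_review(documents):
    """Return (document, score) pairs for items that need closer attention."""
    scores = model.predict_proba(vectorizer.transform(documents))[:, 1]
    return [(doc, round(score, 2)) for doc, score in zip(documents, scores)
            if score >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    incoming = [
        "move the transfer through the offshore account quietly",
        "here is the agenda for tomorrow's team meeting",
    ]
    for doc, score in flag_for_review(incoming):
        print(f"FLAGGED ({score}): {doc}")
```

In practice, such a model would be trained on a firm’s own labeled history and tuned to its risk appetite before any production use.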

As a result, several financial institutions, including banks and investment management firms, have cautiously deployed AI-driven compliance systems. They are cognizant of AI’s ability to help them expedite, standardize, and streamline regulatory compliance.

The Commonwealth Bank of Australia (CBA), for example, has used NLP and AI in a reg-tech pilot to help convert 1.5 million paragraphs of regulation text into compliance obligations. The results of the pilot, revealed by the bank earlier this month, showed that AI could ‘crunch’ the paragraphs into actionable compliance obligations with up to 95 percent accuracy. In addition, use of the technology dramatically reduced the time to complete the task: what would otherwise have taken six months of manual work was finished within two weeks.
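As a simplified illustration of the underlying idea, not of CBA’s actual pipeline, the sketch below pulls candidate obligations out of regulation text by looking for obligatory language such as ‘must’ and ‘shall’. A production reg-tech system would rely on trained NLP models and human validation; the cue list here is an assumption made purely for illustration.

```python
# Minimal sketch: surface regulation paragraphs that contain obligatory
# language as candidate compliance obligations for human review.
# The cue phrases below are illustrative assumptions only.
import re

OBLIGATION_CUES = re.compile(
    r"\b(must|shall|is required to|are required to|may not)\b",
    re.IGNORECASE,
)

def extract_obligations(paragraphs):
    """Return paragraphs containing obligatory language, as review candidates."""
    return [p for p in paragraphs if OBLIGATION_CUES.search(p)]

if __name__ == "__main__":
    sample = [
        "A reporting entity must verify the identity of each customer before "
        "providing a designated service.",
        "This part provides background on the objectives of the Act.",
        "Records shall be retained for seven years from the date of the transaction.",
    ]
    for obligation in extract_obligations(sample):
        print("OBLIGATION:", obligation)
```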

But AI can do more. It can not only screen transactions in real time, flagging potential fraud, money laundering, insider trading, and third-party risk, but also alert financial institutions to early indications of market manipulation.

The Hong Kong stock exchange is reported to have become the first exchange in Asia to deploy AI to detect and stop unusual trading activity. The deployment has been described as part of a larger move by the exchange to help ‘root out’ stock price manipulation. The surveillance software works by looking for unusual occurrences in trading activity, then combining these with analysts’ assessments of trading data to isolate cases for further scrutiny.
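A heavily simplified sketch of that first step, spotting unusual occurrences, might flag trading intervals whose volume deviates sharply from recent history and route them to analysts. The window size and threshold below are illustrative assumptions, not the exchange’s actual surveillance parameters.

```python
# Minimal sketch of trading surveillance: flag intervals whose traded volume
# deviates sharply from recent history, then queue them for analyst review.
# Window size and z-score threshold are illustrative assumptions only.
from statistics import mean, stdev

WINDOW = 20          # look-back window of recent intervals (assumed)
Z_THRESHOLD = 3.0    # deviations (in standard deviations) treated as "unusual" (assumed)

def unusual_intervals(volumes):
    """Yield (index, volume, z_score) for intervals that look anomalous."""
    for i in range(WINDOW, len(volumes)):
        window = volumes[i - WINDOW:i]
        mu, sigma = mean(window), stdev(window)
        if sigma == 0:
            continue
        z = (volumes[i] - mu) / sigma
        if abs(z) >= Z_THRESHOLD:
            yield i, volumes[i], round(z, 1)

if __name__ == "__main__":
    # Simulated per-minute traded volumes with one suspicious spike.
    volumes = [100, 104, 98, 101, 99, 103, 97, 102, 100, 105,
               99, 101, 98, 104, 100, 102, 97, 103, 101, 99,
               100, 750, 101]  # interval 21 is the spike
    for idx, vol, z in unusual_intervals(volumes):
        print(f"Interval {idx}: volume {vol} (z-score {z}) -> route to analyst")
```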

However, given strict liability standards such as ‘zero tolerance’ for error, and the current preference for having false positives reviewed by human intelligence, how can companies consider deploying AI when it may allow transactions to slip through the cracks? For AI to gain ground in compliance-related deployments, the officers in charge of executing it will demand end-to-end visibility, confidence in the technology’s auditability, and assurances that AI will adhere to the finance sector’s zero-error tolerance.

These requirements can be met if organizations put a few prerequisites in place. The first is buy-in from senior leadership: unless company CEOs believe in AI’s bankability, nobody else, whether regulators or customers, will. The second is investment in a human layer of intelligence to counter, check, and address the randomness of algorithmic oversights. The third is a commitment to a ‘test and prove’ approach to accelerate outcome-based AI deployment.

Once these conditions are met, AI can deliver great efficiency in compliance. Among a host of next-generation technologies, AI stands out for its ability to identify and label transactions quickly and accurately in real time while preventing human error. And it can accelerate what legal and compliance departments are always trying to do: shift their workforce and ‘human brain power’ away from time-consuming commodity work to higher-value advisory and analytical work.

But most of all, enterprises in the financial services industry must acknowledge that AI is coming. Those who embrace it early, testing it and getting comfortable with its deployment, will gain a tremendous early-adopter advantage. And those companies not looking at it now are already behind the curve.

About Mphasis

Mphasis (BSE: 526299; NSE: MPHASIS) applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in Mphasis’ Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide a hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis’ Service Transformation approach helps ‘shrink the core’ through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis’ core reference architectures and tools, and its speed and innovation with domain expertise and specialization, are key to building strong relationships with marquee clients.