Artificial Intelligence – Racing up the Mountain without any Harness

By Akber Datoo, founder and CEO of D2 Legal Technology (D2LT), and professor at the University of Surrey

The speed with which businesses globally have adopted generative AI tools over the past 12 months has been extraordinary. It took Netflix three-and-a-half years to achieve one million customers and Twitter two years, while Instagram achieved that number in just two-and-a-half months. ChatGPT hit the million-user mark in just five days. The business implications are significant and cannot be ignored. According to the latest annual McKinsey Global Survey, while one-third of organisations are already using generative AI in at least one business function, only a few of them have put in place robust AI usage frameworks.

While many are concerned about the impact of AI on white-collar jobs, Akber Datoo (CEO) and Jake Pope (Consultant) at D2 Legal Technology argue that financial institutions should have far bigger concerns about the commercial and regulatory implications of unmanaged and immature use of AI in internal processes. Those concerns should include the state of existing data used as inputs to an AI system, data governance in downstream systems, and the training data used by Large Language Models (LLMs). There is an urgent need to assess and mitigate the risks and to create robust policies for the managed adoption of AI across an organisation.

Misplaced Fears

The global financial market is at the vanguard of AI adoption. The 60 largest North American and European banks now employ around 46,000 people in AI development, data engineering, and governance and ethics roles, with as many as 100,000 global banking roles involved in bringing AI to market. Some 40% of AI staff in these banks have started their current roles since January 2022, underlining the speed with which organisations are ramping up their AI adoption. Meanwhile, the UK fears its banks could be falling behind their US counterparts, with American giant JPMorgan hiring for twice as many AI-related roles as any of its rivals.

This AI hiring spree is causing serious concern amongst existing employees, with many worrying they will be displaced. How long, they wonder, will it take a generative AI tool to learn the skills and knowledge individuals have taken years to attain? Indeed, those working in the technology and financial-services industries are the most likely to expect disruptive change from generative AI. Fears have been further fuelled by organisations such as the World Economic Forum, which claims that 44% of workers' core skills are expected to change in the next five years.

But such fears fundamentally overlook the far more significant concerns about the way organisations, especially those within banking, are approaching AI adoption: far too few are actively weighing the significant business risks against the promise of these tools. Generative AI is still in a very immature phase. If organisations remain bedazzled by the possible efficiency and cost savings on offer and fail (through lack of policies, procedures and training) to consider the risks of discrimination, bias, privacy, confidentiality and the need to adhere to professional standards, the outcome could be devastating.

Lack of Strategic Oversight

Organisations are not taking the time to consider AI usage policies. They are not drawing clear distinctions between the personal and professional use of AI. Indeed, because AI use is hard to detect, many companies are blind to how, when and where AI is being used throughout the business. According to McKinsey, just 21% of respondents reporting AI adoption say their organisations have established policies governing employees' use of generative AI technologies in their work.

These are concerning issues in any business, but within the highly regulated financial sector, the level of risk being incurred is jaw-dropping. Taking the derivatives world as an example, some firms have already mooted the use of AI to streamline close-out netting for their derivatives contracts, yet the quality of data held within financial institutions is often fundamentally inadequate. What will happen if organisations start training generative AI tools on inaccurate data, as a supposed efficiency, while the human skillset to review and use data responsibly is gradually being lost?

We often hear of the desire to scrap (often off-shored) data extraction exercises across large portfolios of ISDAs, GMRAs, GMSLAs, MRAs, MSFTAs and the like, given the challenges that legal agreement data and trade linkage continue to cause for resource optimisation (across, for example, capital, liquidity and collateral), regulatory compliance and reporting, and operational management.

It is easy to dream of a magic AI bullet, yet a deeper look will show that this is, in fact, a data nightmare. Any data scientist will recite the familiar mantra: "garbage in, garbage out".
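To make the point concrete, the sketch below shows the kind of basic data-quality gate a firm might run over extracted agreement data before letting it anywhere near an AI training or prompting pipeline. It is purely illustrative: the field names, checks and 5% error threshold are hypothetical assumptions, not a standard schema or any particular vendor's tooling.

```python
# Purely illustrative data-quality gate for extracted legal agreement data.
# Field names ("counterparty", "governing_law", "termination_currency") and
# the 5% error threshold are hypothetical assumptions, not a standard.

REQUIRED_FIELDS = ["agreement_id", "counterparty", "governing_law", "termination_currency"]
RECOGNISED_GOVERNING_LAWS = {"English law", "New York law"}

def validate_record(record: dict) -> list:
    """Return a list of data-quality issues found in one extracted record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing value for '{field}'")
    law = record.get("governing_law")
    if law and law not in RECOGNISED_GOVERNING_LAWS:
        issues.append(f"unrecognised governing law: '{law}'")
    return issues

def quality_gate(records: list, max_error_rate: float = 0.05) -> bool:
    """Pass only if the proportion of records failing basic checks is acceptably low."""
    failures = sum(1 for r in records if validate_record(r))
    error_rate = failures / max(len(records), 1)
    print(f"{failures}/{len(records)} records failed checks ({error_rate:.1%})")
    return error_rate <= max_error_rate

# Toy usage: one clean record, one with a missing counterparty and an odd governing law.
sample = [
    {"agreement_id": "ISDA-001", "counterparty": "Bank A",
     "governing_law": "English law", "termination_currency": "USD"},
    {"agreement_id": "ISDA-002", "counterparty": "",
     "governing_law": "Ruritanian law", "termination_currency": "EUR"},
]
if not quality_gate(sample):
    print("Data is not fit to train or prompt an AI tool - remediate first.")
```

Even a gate as simple as this forces a firm to articulate what "good" legal agreement data looks like before any AI project begins, which is precisely the governance discipline so many projects skip.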

AI usage policies and frameworks, dovetailing with mature data governance, are critical to ensure firms do not run blindly into costly AI projects that are doomed to fail. 

Unknown Risks

Of course, organisations recognise there is a problem in the lack of accurate, trusted data required to train newfangled AI tools. But turning instead to synthetic data sources is not a viable solution. Worryingly, we are seeing a number of requests from organisations to create synthetic documents in order to "train the AI" and meet the minimum training set sizes given to them by AI vendors, thereby exacerbating the issues of hallucinations, bias and discrimination.

Not only is the current data resource inadequate, but the immaturity of AI will continue to create unacceptable risk. Drift, for example, is a significant concern. In machine learning, "drift" refers to a model's behaviour shifting over time as the data it encounters, or the model itself, moves away from the conditions under which it was originally trained and validated. Carefully defined workflows can then suddenly behave unexpectedly and cause significant issues downstream.
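As a purely illustrative sketch of what monitoring for drift can look like in practice, the example below compares the distribution of a model's recent output scores against a baseline captured when the workflow was validated. The choice of metric, the significance level and the synthetic scores are assumptions made for the example only, not a prescribed approach.

```python
# Purely illustrative drift check: compare recent model output scores against a
# baseline captured at validation time, using a two-sample Kolmogorov-Smirnov test.
# The significance level and the synthetic beta-distributed scores are assumptions.

import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline, recent, alpha: float = 0.01) -> bool:
    """Flag drift if the recent score distribution differs significantly from the baseline."""
    statistic, p_value = ks_2samp(baseline, recent)
    drifted = p_value < alpha
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}, drift detected={drifted}")
    return drifted

# Toy usage: scores logged at sign-off versus scores observed in production this week.
rng = np.random.default_rng(seed=0)
baseline_scores = rng.beta(8, 2, size=1000)   # distribution when the workflow was validated
recent_scores = rng.beta(5, 3, size=1000)     # distribution observed now

if check_drift(baseline_scores, recent_scores):
    print("Escalate to human review and pause downstream automation until investigated.")
```

The point is not the particular statistical test, but that drift can only be caught if firms define a baseline, log what the model is doing, and route any deviation to a person with the authority to intervene.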

One thing is very clear: the pivotal role of the "human-in-the-loop" in any use of AI needs to be central to AI usage policies.

Financial regulators are likely to take punitive action against any organisation opting to fast-track compliance through the use of AI without the right controls in place. Even though AI-specific legislation is still in its infancy, there are risks of breaching existing laws around discrimination and competition. There are also emerging AI-specific regulatory concerns, especially within the EU. The draft negotiation mandate for the EU AI Act, recently endorsed by the European Parliament, has been heralded as European lawmakers setting the way for the rest of the world on "responsible AI". The new act targets high-risk use cases rather than entire sectors, and proposes penalties of up to 7% of turnover or €40 million, in excess of existing General Data Protection Regulation (GDPR) fines.

Evolving Risk Perception

While market participants debate the best way to proceed, organisations need to consider the implications of their current laissez-faire approach to AI exploration. The EU has taken a very different stance from the US and UK, compounding the difficulty even for those that seek to embrace AI carefully.

The incident in which Samsung employees loaded confidential company data into a generative AI tool highlights the implications of a lack of guidelines and training around usage. The security implications associated with hallucinations, jailbreaking and prompt manipulation are clear, and such issues have prompted a number of high-profile organisations to ban the use of generative AI at work. There are also large class-action lawsuits under way against companies such as OpenAI over the use of personally identifiable data and whether it goes beyond the principle of fair use.

Why are so many firms failing to balance positive AI innovation with managing the risks? The answer is likely that this is untested ground, and without regulation it is all too easy to gallop ahead. AI systems must be continuously monitored and periodically reviewed and audited. Firms need to create robust AI usage policies, but also to undertake continual assessment of the potential impact on existing policies, from cyber security to data protection and employment.

Conclusion

The current attitude of companies and financial institutions to the adoption and use of generative AI is astonishing. How can global banks, organisations still enduring the fall-out of the Lehman Brothers failure in 2008, embark on such speculative activity without recognising the extraordinary risk implications? Now is the time for commercial responsibility, wise management oversight, and risk-weighted judgement.

Ensuring the safe and controlled use of AI systems is crucial. Contrary to much commentary, it is relatively easy to write regulation for AI; it is far harder to ensure systems comply. This is why the manner in which we use AI is critical.
