By Ian Massey and Toby Thomas, 11 December 2023
In recent years, the landscape of artificial intelligence (AI) has witnessed transformative advancements, particularly with the rise of Large Language Models (LLMs). These LLMs, including OpenAI’s GPT models, Meta’s LLaMA, and Google’s PaLM, have the potential to revolutionise the way information is processed and generated, offering capabilities that range from content creation to intricate data analysis.
As the digital realm becomes increasingly saturated with information, the importance of due diligence has never been greater. For financial crime compliance officers and professionals, understanding the capabilities and applications of these LLMs in due diligence processes is vital, not only for navigating the complexities of an ever-evolving information ecosystem, but also, potentially, for improving research efficiency.
The evolution of generative AI
The dawn of generative AI marks a significant moment in the realm of research and data analysis.
LLMs, trained on billions of words, have absorbed much of the information available on the surface web. Their underlying technology, loosely inspired by the neural networks of the human brain, uses a blend of probability and proximity logic to determine word sequences. This approach has culminated in the emergence of cutting-edge generative AI tools capable of producing a diverse array of sophisticated content spanning text, images, audio, and even code. Many of these tools are freely available, making the barrier to entry for basic use surprisingly low.
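The probability-driven word-sequencing described above can be illustrated with a deliberately simple sketch. Real LLMs use neural networks with billions of parameters; this toy bigram counter (all training text invented for illustration) only shows the underlying idea of choosing the statistically most likely continuation of a word:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, if any."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Toy corpus: "due" is followed by "diligence" twice, so that becomes
# the model's prediction for the next word after "due".
corpus = "due diligence requires care and due diligence requires judgement"
model = train_bigrams(corpus)
print(predict_next(model, "due"))  # prints "diligence"
```

The gap between this counting exercise and a modern LLM is enormous, but the comparison to a powerful auto-complete tool, discussed below, stems from this same core mechanic of probable-next-token selection.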
Prominent LLMs, such as OpenAI's GPT-4, have set new benchmarks in AI capabilities, with claims of achieving ‘human-level performance’ in various academic and professional arenas, including the US bar exam.
Nonetheless, the reach of the current generation of LLMs has its limits. Despite superficially convincing answers, they lack the ability to engage in complex chains of reasoning, leading them to be compared to extremely powerful auto-complete tools. The UK intelligence agency GCHQ co-wrote a recent piece for the Turing Institute, in which LLMs’ potential for incorporation into intelligence reporting was described as akin to that of ‘an extremely junior analyst: a team member whose work, given proper supervision, has value, but whose products would not be released as finished product without substantial revision and validation’.
Enhanced internet research and dataset comparisons
The potential of generative AI to streamline due diligence is alluring, but these are new technologies with significant risks and limitations. It is imperative to proceed with caution and to integrate any tool only after a thorough vetting process that mitigates data security and privacy risks, alongside concerns around data integrity.
For due diligence and compliance, the real potential of these tools lies in their ability to create efficiencies in the research and review process, and in their advanced analytical and summarisation capabilities. These promise a shift in how research teams sift through the vast sea of publicly available online information, as AI can make shorter work of this and help teams understand business associations and potential risks more quickly. In addition, through prompt engineering, researchers can automate the comparison of multiple datasets. This can streamline the validation process and has the potential to enhance the consistency of fact-checking.
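As a hypothetical sketch of the dataset-comparison step, the example below cross-checks a subject's disclosed directorships against an official registry and then wraps any discrepancies in a prompt an analyst might hand to an LLM for a plain-language summary. All company names, function names, and the prompt wording are invented for illustration; real workflows would involve richer records and a vetted AI tool:

```python
def compare_records(registry, disclosed):
    """Return directorships present in the official registry but absent
    from the subject's own disclosure, and vice versa."""
    registry_set = {r.lower().strip() for r in registry}
    disclosed_set = {d.lower().strip() for d in disclosed}
    return {
        "undisclosed": sorted(registry_set - disclosed_set),
        "unverified": sorted(disclosed_set - registry_set),
    }

def build_review_prompt(discrepancies):
    """Illustrative prompt-engineering step: frame the discrepancies
    for an LLM to summarise in risk-relevant terms."""
    return (
        "You are assisting a due diligence review. Summarise the risk "
        f"relevance of the following discrepancies: {discrepancies}"
    )

registry = ["Acme Holdings Ltd", "Blue River Trading LLC"]
disclosed = ["Acme Holdings Ltd"]
result = compare_records(registry, disclosed)
prompt = build_review_prompt(result)
```

The deterministic comparison does the fact-checking; the LLM's role here is the summarisation layer on top, which is where the consistency gains described above would come from.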
In essence, the integration of generative AI in due diligence procedures offers the potential to expedite and refocus the role of due diligence analysts. Assuming safe adoption, analysts may find themselves conducting fewer mundane tasks, and having more time to redirect their energy towards the higher value tasks of in-depth analysis and critical evaluation of sources, ensuring a more comprehensive and nuanced understanding of the data at hand.
Nonetheless, the world will likely need a new generation of LLMs which better incorporate the logic of the human mind before seeing a revolutionary impact on the compilation of due diligence reports.
Generative AI in action
Generative AI's integration into the due diligence landscape is not just theoretical; its practical applications are already making waves in the broader industry. For instance, secure enterprise search companies like US start-up Hebbia are revolutionising the way private equity sponsors approach merger and acquisition (M&A) data rooms.
By leveraging generative AI, these platforms enable professionals to swiftly extract valuable insights from vast amounts of unstructured data, particularly in the realm of quantitative analysis. However, it's worth noting that the most significant advancements in this domain have predominantly occurred within secure data environments. The decision to input sensitive data into generative AI models, especially those akin to search engines, remains a subject of ongoing debate given the varied security and data protection standards among top providers.
Another example of AI in action is S-RM's own monitoring platform. Developed in recent years, this platform enables clients to continuously monitor and assess risk vectors over time. By combining custom search queries with Natural Language Processing (NLP), the platform can sift through hundreds of thousands of data sources, pinpointing any shifts in a research subject’s risk profile.
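The screening logic such a platform performs can be gestured at with a toy example. This is not S-RM's implementation: real monitoring combines custom search queries with NLP models, whereas the sketch below uses invented risk terms, weights, and a threshold purely to show how automated scoring surfaces candidates for human review:

```python
# Invented term weights for illustration only.
RISK_TERMS = {"fraud": 3, "sanctions": 3, "investigation": 2, "lawsuit": 1}

def risk_score(text):
    """Score a document by summing the weights of risk terms it mentions."""
    words = text.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def flag_for_review(articles, threshold=3):
    """Return articles scoring at or above the threshold — candidates
    for human analysts to assess as potential client escalations."""
    return [a for a in articles if risk_score(a) >= threshold]

articles = [
    "Subject opens new office in Lisbon",
    "Regulator confirms investigation into subject over sanctions breach",
]
flagged = flag_for_review(articles)  # only the second article is flagged
```

Everything the automated layer flags still passes to a human, which is the human-in-the-loop filtering described next.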
However, there is still a human touch, as a team of specialist researchers diligently filter out any false positives, ensuring that the refined output is both accurate and relevant for client escalations. This synergy between AI and human expertise is representative of the future of due diligence, where technology augments, rather than replaces, human judgement.
Augmenting decision-making and risk management
The integration of generative AI into due diligence processes can also significantly enhance decision-making and risk management, by enabling rapid and consistent analysis of vast data volumes. This efficiency, however, must be weighed against potential risks and challenges.
While generative AI can accelerate research, it's crucial to address concerns related to data security, privacy, and integrity. Due diligence often involves handling sensitive information, and transferring it to third-party AI tools for processing can pose significant risks, particularly under stringent data privacy laws such as the General Data Protection Regulation (GDPR), which places strict conditions on the processing and cross-border transfer of personal data. Without detailed knowledge of where and how third-party tools process this data, there's potential to fall foul of these regulations.
Another challenge lies in the integrity and accuracy of the information AI tools provide. In a new era where disinformation can be easily crafted using advanced AI tools, the challenges for due diligence professionals may seem daunting. Instances where companies or individuals find themselves ensnared in deceptive 'dark PR' campaigns, or where AI-generated content blurs the lines between fact and fiction, highlight the pressing need for discerning human judgement.
There are innocent instances of misinformation, too. LLMs, while advanced, are predictive by nature. This means they frequently produce outputs known as ‘hallucinations’ that, while appearing plausible, are factually incorrect or contain false references. Such inaccuracies can be detrimental in a due diligence context, where precision and trustworthiness are paramount. It's crucial to remember that while AI can aid the process, human oversight remains essential to ensure the accuracy and reliability of the intelligence provided.
Lastly, the world of AI is one of rapid evolution, leading to a constantly changing – and hotly debated – regulatory landscape. As AI technologies continue to advance, legal frameworks are struggling to keep pace. The regulatory landscape is not yet equipped to moderate the use of AI in particular fields, so legal teams must remain vigilant, ensuring that due diligence processes are always compliant with the latest regulations. In a broader sense, businesses should ensure that the technology's application aligns with their values and is monitored with human oversight. This will mean AI can aid decision-making, without compromising the integrity or security of the process.
Generative AI in the realm of due diligence has the potential to play a significant role in shaping the industry's future. However, this comes with a caveat: the need for a measured and vigilant approach. While the potential benefits of AI, such as enhanced research efficiency and analytical capabilities, are attractive, it's imperative to remain acutely aware of the associated risks and limitations.
The evolving landscape of AI not only amplifies the intricacies of due diligence tasks, but also underscores the indispensable value of professional expertise. As technology continues to advance at a breakneck pace, the onus falls on tech firms, governments, and regulators to ensure that legal frameworks evolve in tandem.
While AI might reshape facets of the compliance industry, it is unlikely to eclipse the proven value of human intuition and judgement in navigating intricate risk decisions.
Ian Massey is Head of Corporate Intelligence, EMEA, at S-RM.
Toby Thomas is Director of Research at S-RM.