Forensic update

Artificially intelligent, criminally inclined: The rise of AI-driven fraud

As technology advances at a breakneck pace, so too do the methods employed by fraudsters. In most instances our involvement as forensic accountants unfortunately occurs after the fraud has been committed, but strategies to prevent fraud from happening in the first place should not be overlooked.

One of the most troubling developments in recent years is the use of artificial intelligence (“AI”) to facilitate and amplify fraudulent activity. From sophisticated phishing schemes to automated deepfakes, AI is being harnessed in increasingly deceptive ways. Real-world examples from recent times provide a glimpse of what the future will likely hold.

In 2020, fraudsters used deepfake technology to mimic the voice of the CEO of a company based in the United Arab Emirates. By creating a highly convincing audio clip, the fraudsters persuaded the managing partner of a Japanese subsidiary to transfer $35 million to a fraudulent account, purportedly in connection with an acquisition. The incident highlights the dangers of AI-generated voice and video manipulation, where the line between reality and artificiality becomes dangerously thin.[1]

Nor is the use of AI limited to manipulated voices and video. From 2013 to 2015, Evaldas Rimasauskas, a Lithuanian fraudster, used AI to automate phishing attacks, tricking Google and Facebook into transferring over $100 million to fraudulent accounts. To do this, Rimasauskas registered and incorporated a company with the same name as the Taiwan-based computer hardware manufacturer Quanta Computer. By leveraging AI to create realistic fake invoices and emails, he was able to bypass many traditional security controls and sustain the scam over a two-year period.[2]

These are just two large-scale examples of how AI has assisted fraudsters in recent times, and arguably at an early stage of AI’s evolution. The concern now is that AI is becoming both easier to use and readily available to the masses; using it no longer requires specialised knowledge or tools.

In addition, the use of AI now goes beyond perpetrating the initial fraud; it also raises concerns about its potential to falsely incriminate individuals. For the most part, we have lived our lives believing that a photograph represents the truth by default. Yes, there have been cases of digitally manipulated images, but these were considered exceptions. The question we now face is whether AI can produce fakes convincing enough not only to commit a fraud, but also to be mistakenly relied upon as evidence, whether that be pictures, videos, voice recordings or false documentation.

Whilst AI-driven fraud cannot be eliminated entirely, it can be guarded against through safeguards such as:

  • Implementing multi-factor authentication and biometric verification to ensure that the person or entity involved in a transaction is genuinely who they claim to be. Verification layers of this kind may well have prevented the frauds in the examples above.
  • Training employees and individuals to recognise signs of AI-driven fraud, such as unusual requests or inconsistencies in communications. Awareness campaigns can help people understand the potential risks and develop a more cautious approach when interacting with digital content.
  • The establishment of regulations and standards by Governments and industry bodies for the ethical use of AI. By creating a legal framework that addresses the misuse of AI, we can provide clearer guidelines for both prevention and prosecution.
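The first safeguard, layered verification, can be sketched in code. Below is a minimal, illustrative Python implementation of a time-based one-time password (TOTP) check in the style of RFC 6238, the mechanism underpinning most authenticator apps used for multi-factor authentication. The secret, the drift window and the function names are assumptions chosen for illustration, not a production design.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify_code(secret: bytes, submitted: str, at=None, window=1, step=30) -> bool:
    """Accept the current code or one step either side, to tolerate clock drift."""
    now = time.time() if at is None else at
    return any(
        hmac.compare_digest(totp(secret, now + i * step, step), submitted)
        for i in range(-window, window + 1)
    )


# Illustrative only: a real deployment would share the secret during enrolment
# and store it securely; this value is the RFC 6238 test secret.
demo_secret = b"12345678901234567890"
print(totp(demo_secret, at=59))
```

A one-time code of this kind is only one layer; in the deepfake voice example above, a second, out-of-band confirmation step before releasing a $35 million payment is precisely the control that was missing.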

Looking ahead, the evolution of AI technologies will likely lead to even more sophisticated and elusive forms of fraud. As AI becomes more integrated into everyday life, its potential for misuse will increase.

From a forensic accountant’s perspective, it is clear that the future will likely hold greater collaboration with technology experts on AI-driven assignments, but for now the role continues to incorporate:

  • Development of anti-fraud strategies;
  • Investigating allegations of fraud and misappropriation of assets;
  • Evaluating the authenticity of financial records and documents; and
  • Providing expert testimony to explain complex financial transactions to the Court.

In conclusion, while AI presents remarkable opportunities for innovation and efficiency, its potential for misuse in fraud is a serious concern. By understanding the methods currently being used, implementing effective prevention strategies, and preparing for future challenges, we can better protect ourselves from the evolving threats of AI-driven fraud. The key will be to stay vigilant and proactive, ensuring that as technology advances, so too does our ability to combat its potential for abuse.

[1] Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find (forbes.com)

[2] Google and Facebook Victim of $100 Million in Accounts Payable Fraud: How It Could Have Been Prevented (paymentsjournal.com)