Using AI to counter fraudsters who use it too
By Staff Writer | 27 September 2024 | Categories: feature articles

As the world of financial crime continues to evolve, artificial intelligence (AI) has become both a tool for fighting fraud and a weapon for those perpetrating it.
Speaking at the recent 17th Annual ACFE Africa Conference and Exhibition, Stephanie Ora, Global Lead for Financial Crimes Analytics at SAS Institute, discussed how companies can use AI to counter the fraudsters who have adopted the technology as well.
"AI has revolutionised the fight against financial crime," says Ora. "But as we enhance our capabilities, so do fraudsters. They are leveraging AI to execute sophisticated schemes that are often difficult to detect using traditional methods. This double-edged nature of AI means that while it can help us identify and prevent fraud more effectively, it also presents new challenges that we must overcome."
The fight against fraud has evolved beyond rules-based systems centred on manual checks and pattern recognition. Today’s AI-driven methods include automated anomaly detection, predictive analytics and real-time behavioural monitoring. However, the use of Generative AI (GenAI) in fraud schemes poses a particular challenge: the technology is not just capable of detecting fraud, it can also learn and replicate data behaviour.
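As a simple illustration of the kind of automated anomaly detection described above, and not SAS’s actual tooling, the sketch below uses scikit-learn’s IsolationForest to flag unusual transactions. The synthetic features (amount and hour of day) and the contamination setting are assumptions made purely for the example.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The transaction features below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Mostly ordinary card payments: (amount, hour of day)...
normal = np.column_stack([rng.normal(80, 20, 500), rng.normal(14, 3, 500)])
# ...plus a few large transfers made at unusual hours.
suspect = np.array([[4500, 3], [3900, 2], [5200, 4]])
X = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
flags = model.predict(X)   # -1 marks an anomaly
print(X[flags == -1])      # transactions flagged for review
```

In practice the same idea scales to many more behavioural features, with flagged transactions routed to investigators rather than printed out.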
Fraudsters can use GenAI tools such as deepfakes to evolve their schemes from manipulated identities to completely manufactured synthetic ones. GenAI has also expanded scam channels beyond emails and text messages to audio and video calls. The result has been a rise in authorised push payment fraud and account takeovers facilitated, unwittingly, by account owners who do not realise the dark side of GenAI in fraud.
“Even though AI’s ability to detect anomalies and patterns in real time is a game-changer, AI technologies and fraud schemes evolve at an accelerated rate,” says Ora. “This is why it is critical for financial institutions to have scalable AI solutions that allow for agility and adaptability to emerging fraud threats. This should be complemented by enterprise-wide fraud awareness programmes and collaboration with other financial institutions, regulatory bodies and law enforcement to build robust fraud defences and AI governance frameworks.”
“To fight AI-enabled fraud, a hybrid approach that combines rules-based and AI-based methods is key to achieving a balance between effectiveness and explainability. Third-party data, such as device information, IP addresses, behavioural biometrics and watchlists, also plays an important role in making data-driven predictions early and potentially identifying networks and hidden relationships,” says Ora.
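To make the hybrid approach Ora describes more concrete, here is a hypothetical sketch (not SAS’s implementation) that applies explainable hard rules to third-party signals before falling back on a model score. Every field name, threshold and the stubbed model are assumptions made for illustration.

```python
# Hypothetical hybrid fraud check: explainable hard rules plus a model score.
# All field names, thresholds and the stubbed model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    device_is_new: bool      # third-party device intelligence
    ip_on_watchlist: bool    # watchlist / IP reputation feed
    biometrics_score: float  # 0.0 (unfamiliar behaviour) to 1.0 (familiar)

def model_score(txn: Transaction) -> float:
    """Stand-in for a trained ML model; returns a rough fraud likelihood."""
    score = 0.1
    if txn.device_is_new:
        score += 0.3
    score += (1.0 - txn.biometrics_score) * 0.4
    return min(score, 1.0)

def decide(txn: Transaction) -> str:
    # Rules give explainable, hard stops on high-certainty signals...
    if txn.ip_on_watchlist:
        return "block: watchlist hit"
    if txn.device_is_new and txn.amount > 10_000:
        return "review: new device on high-value payment"
    # ...while the model handles subtler, data-driven patterns.
    return "block: model score" if model_score(txn) > 0.7 else "allow"

print(decide(Transaction(amount=15_000, device_is_new=True,
                         ip_on_watchlist=False, biometrics_score=0.2)))
```

The design choice here mirrors the balance Ora mentions: rules remain easy to explain to regulators and customers, while the model layer adapts to patterns the rules cannot anticipate.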
There are also ethical implications to consider when using AI in financial crime prevention.
“As we integrate AI into our operations, we must ensure that it does not compromise the privacy, transparency and fairness of financial systems. Establishing ethical guidelines, clear accountability and oversight is crucial to ensuring smooth execution whilst not causing unintended harm,” says Ora.
The most significant issue is that AI will not fix itself.
“Despite the risks and ethical considerations, it is all the more important to start using AI now, because AI continuously learns from those who use it. In the fight against financial crime, it is crucial to keep the human good in the AI loop, since fraudsters have already started to abuse it,” concludes Ora.
While AI presents an incredible opportunity to enhance operational efficiency, improve customer experiences and mitigate risk, it also challenges organisations to rethink their approach to fraud prevention and adapt to a new landscape in which the lines between human and machine are increasingly blurred.