From a deepfake of Barack Obama calling Donald Trump a "complete dipshit", to Mark Zuckerberg appearing to brag about having "total control of billions of people's stolen data", to explicit images circulating on X, artificial intelligence (AI) technologies are being used to create fake but convincing material that proliferates online and challenges KYC and internal governance controls. Some call the technology the "21st century's answer to Photoshopping", using AI to create ever more convincing fakes.
Now deepfakes are posing a serious threat to financial institutions and crypto exchanges. A website called OnlyFake offers cheap services claiming to use AI “neural networks” and “generators” to create fake driver licenses and passports. Some claim they have successfully used the website to bypass Know-Your-Customer (KYC) checks on multiple crypto exchanges.
OnlyFake reportedly generates realistic fake driver’s licenses and passports from 26 countries, including the United States, Canada, Britain, Australia and multiple European Union countries, and takes payment in multiple cryptocurrencies.
404 Media said that it successfully bypassed the KYC verification of a global crypto exchange using a fake photo of a British passport generated with the site, in which the ID appeared to have been photographed lying on a bedsheet.
Other media outlets have reported stories of the site's users generating fake IDs to pass onboarding checks at leading crypto exchanges and financial services providers.
It is worrying that the site could be used by crypto scammers and hackers as a powerful tool to fake documents and open exchange or bank accounts, concealing their real identities and making them more difficult to track. This will likely escalate the risks of money laundering, terrorism financing and sanctions evasion, which already pose severe challenges to the technological capacity and resources of financial institutions and crypto firms.
Generating a fake document on OnlyFake reportedly takes less than a minute and costs only $15. Users can upload their own photo or have one chosen randomly from a "personal library of drops and not using a neural network".
Meanwhile, deepfakes could also pose challenges for internal governance and compliance controls, with generative AI technologies being used to impersonate employees and commit fraud. CNN reported this week that, according to Hong Kong police, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company's chief financial officer in a video conference call.
"(In the) multi-person video conference, it turns out that everyone [he saw] was fake," senior superintendent Baron Chan Shun-ching told RTHK.
The case is apparently one of several in which criminals have used AI to manipulate publicly available video to perpetrate fraud and scams.
Given the ever-escalating challenges posed by deepfakes and AI technologies, traditional financial institutions and crypto firms alike should ensure that they adopt robust measures, whether internal or third-party controls, to identify and block fabricated ID documents. Along with tackling a constant wave of phishing attacks, corporates will also need to scrutinize their internal compliance controls, including payment authorities and procedures, to address the increasing risk of deepfakes intended to manipulate employees into parting with money and sensitive corporate information.
Written by J Huang and S Pettigrove