Biometric security was once heralded as a foolproof safeguard for businesses. Now, bad actors can bypass these measures using deepfake technology. In August 2024, a prominent financial institution in Indonesia contacted cybersecurity experts for help.
Despite the company employing “robust, multi-layered security measures,” including biometric technology as a second layer of protection, bad actors were able to exploit it using deepfake technology. They took advantage of the Know Your Customer (KYC) onboarding process, which relied on biometric checks such as facial recognition and liveness detection.
The attackers went to great lengths to bypass these controls. For example, they would obtain a victim’s ID through illegal channels and manipulate the image to defeat the biometric verification. Their aim was to defraud the company by applying for loans, presumably in an unsuspecting victim’s name, that they never intended to repay.
Group IB was asked to investigate the incident and found over 1,100 deepfake fraud attempts in which artificially generated photos were used to circumvent the company’s digital KYC process.
This incident is a stark reminder of the nefarious uses of artificial intelligence and deepfake technology, and it is not just happening in Indonesia but all over the world. The FBI has warned that fraudsters and cybercriminals are increasingly using AI to generate text, images, audio, and video to amplify their scams.
Furthermore, Francesco Cavalli, co-founder of Sensity.AI, says banks and fintech companies detect up to 1,500 deepfake spoofing attacks every month, which suggests there are likely thousands of active bank accounts that were opened using AI-enabled image manipulation software.
Group IB’s findings underscore the issues surrounding AI and the negative consequences that can arise when it is abused. The cybersecurity firm found that deepfake fraud has caused significant financial losses in Indonesia, totaling over US$135 million.