It’s easy enough to forge a signature for fraudulent purposes. Until recently, though, some things, like our voices, were distinctive and difficult to mimic. Not so in our brave new world.
A new kind of cybercrime that combines artificial intelligence with voice technology is one of the unfortunate developments of postmodernity. As deepfake videos have shown, you can’t trust what you see; now, it seems, you can’t trust what you hear either. A $243,000 voice-fraud case, reported by the Wall Street Journal, proves it.
In March, fraudsters used AI-based software to impersonate the chief executive of the German parent company of an unnamed UK-based energy firm. Phoning the energy firm’s CEO, they tricked him into making a supposedly urgent transfer of funds to a Hungarian supplier. The CEO made the requested transfer and was contacted again with assurances that the money was being reimbursed immediately. That too seemed believable.
However, when the reimbursement had yet to appear in the firm’s accounts and a third call came from Austria, the caller once again claiming to be the parent company’s chief executive and requesting another urgent transfer, the CEO became suspicious and declined to pay. Although he recognized the familiar accent and intonation of his boss’s voice, it turns out the boss wasn’t making the call. The funds transferred to Hungary were subsequently moved to Mexico and other locations, and authorities have yet to pinpoint any suspects.
Rüdiger Kirsch, a fraud expert at insurer Euler Hermes, which covered the victim company’s claim, tells the Journal that the insurer has never before dealt with claims stemming from losses due to AI-related crimes. He says the police investigation into the affair is over and that hackers used commercial voice-generating software to carry out the attack; he tested one such product himself and found the reproduced version of his voice sounded real to him.
Certainly, law enforcement authorities and AI experts are aware of voice technology’s burgeoning capabilities, and of the high likelihood that AI is poised to become the new frontier for fraud. Last year, Pindrop, a company that creates security software and protocols for call centres, reported a 350% rise in voice fraud between 2013 and 2017, primarily targeting credit unions, banks, insurers, brokerages, and card issuers.
By pretending to be someone else on the phone, a voice fraudster can access private information that wouldn’t otherwise be available and put it to nefarious use. Feigning another person’s identity by voice is easier than ever, given new audio tools and our increased reliance on call centres for services (as opposed to, say, going to the bank and talking to a teller face to face). As the tools for creating fakes improve, so do the chances that criminals will use AI-based voice technology to mimic our voices and turn them against us.