Voice cloning with artificial intelligence can pose new security threats

4 Mar, by aiuniverse, in Artificial Intelligence

Source: somagnews.com

Cybercriminals' latest fraud method relies on artificially cloned voices. Experts warn that with the rise of voice cloning, people's voices are no longer a safe proof of identity.

Examples presented by Vijay Balasubramaniyan of the cybersecurity company Pindrop at the RSA Conference show the scale of the fraud. According to Balasubramaniyan, fraudsters can reproduce your voice with AI-based software; if you are a CEO or company manager with a lot of content on YouTube, that public footage puts you at particular risk.

Trump’s voice mimicked
If the trick is well executed, attackers can pose as a high-level company official and send matching fake emails. Moreover, it is easy to deceive a lower-level employee who hears what sounds like the CEO's voice on the phone; if the instructions given by that voice are followed, large sums of money can end up with the fraudsters.

Five minutes of recorded speech is enough to create a passable, realistic voice clone, and with five hours or more of audio, such software can mislead listeners remarkably well. Even so, this emerging deepfake threat is still small compared with conventional phone fraud built on identity theft.

At the same conference, Balasubramaniyan demonstrated voices his company had cloned from well-known figures. For added effect, the voice of US President Donald Trump was imitated as well: using existing recordings of Trump, the company needed less than a minute to clone his voice. The Trump example showed that voice fraud can also be used to mislead the public.

The good news is that computer engineers have started developing ways to distinguish fake audio. Pindrop has built an AI-based algorithm that separates synthetic voice recordings from real human voices. The software first models how real people pronounce words, then checks the recorded audio against those human speech patterns.
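The article does not disclose how Pindrop's detector actually works. Purely as an illustration of the general idea of checking audio against properties of real human speech, the toy sketch below flags a tone whose pitch periods are suspiciously regular, since natural voices carry small cycle-to-cycle pitch jitter that naive synthesis can lack. Every function name, signal, and threshold here is invented for the example; a real detector would use far richer features and a trained model.

```python
import math
import random

def zero_crossing_intervals(signal):
    """Distances (in samples) between successive upward zero crossings,
    a crude stand-in for pitch-period lengths."""
    crossings = [i for i in range(1, len(signal))
                 if signal[i - 1] < 0 <= signal[i]]
    return [b - a for a, b in zip(crossings, crossings[1:])]

def jitter_score(signal):
    """Relative variability of the period lengths: natural speech shows
    micro-jitter, while a perfectly periodic signal scores near zero."""
    intervals = zero_crossing_intervals(signal)
    if len(intervals) < 2:
        return 0.0
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return math.sqrt(var) / mean

def looks_synthetic(signal, threshold=0.02):
    # Invented threshold: below it, the pitch is "too perfect".
    return jitter_score(signal) < threshold

# Toy data at an assumed 8 kHz sample rate: a perfectly periodic
# 100 Hz "clone" versus a tone whose pitch drifts cycle to cycle.
rng = random.Random(0)
fake = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]

human, phase, freq = [], 0.0, 100.0
for t in range(8000):
    if t % 80 == 0:
        freq = 100 + rng.uniform(-10, 10)  # pitch drifts every cycle
    phase += 2 * math.pi * freq / 8000
    human.append(math.sin(phase))

print(looks_synthetic(fake))   # True: no jitter at all
print(looks_synthetic(human))  # False: natural-looking pitch drift
```

The design point is only that "matching recorded audio against human speech structure" means measuring statistics of the signal and comparing them with what real speech exhibits; production systems do this over many acoustic features at once.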

Still, this growing threat of fake audio may soon discourage some users from uploading voice recordings online.
