Using artificial intelligence for forgery: Fake could be eerily real
Source – internetofthingsagenda.techtarget.com
Artificial intelligence is rapidly finding application in a myriad of fields, enhancing both the pace and quality of work. The tasks performed by AI are evolving so quickly that some scientists already fear the rise of machines. That might be far-fetched, but AI does bring some genuine areas of concern, primarily because it has become a powerful tool that simplifies high-skill tasks.
AI is at the disposal of anyone who wants to perform a task that once required extensive training, without any prior experience. Analytics, big data and machine learning help us analyze vast amounts of information and use it to predict future outcomes. They can, however, also be used to mislead, forge and deceive.
Audio and video forgery capabilities are making astounding progress thanks to a boost from AI, which is enabling some effective but downright scary new tools for manipulating media. These tools hold the power to alter forever how we perceive and consume information. A time will come when people struggle to know whom and what to trust. Even today, we live in a world of Photoshop, CGI and AI-powered selfie filter apps. The internet democratized knowledge by enabling free but unregulated and unmonitored access to information. As a result, the floodgates opened to all types of information, ushering in a staggering amount of rumors and lies.
Criminals are already turning this technology to their benefit. Readily available tools can create high-quality fake videos that easily fool the general population. In essence, using AI to forge videos is transforming the meaning of evidence and truth in journalism, government communications, courtroom testimony and national security.
Lyrebird, a deep learning AI startup based in Montreal, synthesizes surprisingly realistic-sounding speech in anyone’s voice from just a one-minute recording.
Creative software giant Adobe has been working on a similar technology, “VoCo,” which it has labeled “Photoshop for audio.” The software requires a 20-minute audio recording of someone talking. The AI analyzes it, figures out how that person talks and learns to mimic the speaking style. Type anything, and the software will read your words in that person’s voice.
Google’s WaveNet offers similar functionality. It requires a much bigger data set than Lyrebird or VoCo, but it sounds creepily real.
MIT researchers are also working on a model that generates sound effects for silent video of an object being struck, producing audio realistically close to what the impact would sound like in real life. The researchers envision a future version automatically producing sound effects good enough for use in movies and television.
With such software, it will become easy to put something controversial in anyone’s voice, rendering voice-based security systems helpless. Telephone calls could be spoofed; no one will be exactly sure it is you on the other end of the line. At the current pace of progress, realistic audio forgeries may be good enough to fool the untrained ear within two to three years, and good enough to fool forensic analysis within five to 10.
Tom White at Victoria University School of Design created a Twitter bot called “SmileVector” that can make any celebrity smile. It browses the web for pictures of faces and then it morphs their expressions using a deep-learning-powered neural network.
Researchers at Stanford and various other universities are also developing astonishing video capabilities. Using an ordinary webcam, their AI-based software can realistically change the facial expression and speech-related mouth movements of an individual.
Pair this with audio-generation software and it becomes possible to deceive almost anyone, even over a video call. Someone could also make a fake video of you doing or saying something controversial.
Jeff Clune, an assistant professor at the University of Wyoming, and his team at the Evolving AI Lab are running image recognition in reverse: neural networks trained for object recognition are repurposed to generate synthetic images from a text description alone. The network is trained on a database of similar pictures; once it has seen enough of them, it can create new pictures on command.
A startup called Deep Art uses a technique known as style transfer, in which neural networks apply the characteristics of one image to another, to produce realistic paintings. A Russian startup refined the approach in a mobile app named Prisma, which lets anyone apply various art styles to pictures on their phones. Facebook also unveiled its own version of the technique, adding a couple of new features.
Other work being done on multimedia manipulation using artificial intelligence includes the creation of 3D face models from a single 2D image, changing the facial expressions of someone on video using the Face2Face app in real time and changing the light source and shadows in any picture.
A team of researchers at University College London developed an AI algorithm titled “My Text in Your Handwriting” that can imitate any handwriting. The algorithm needs only a paragraph’s worth of samples to learn and closely replicate a person’s writing.
Luka also aspires to create bots that mimic real people. Its AI-powered memorial chatbot can learn everything about a person from his or her chat logs, then allow the person’s friends to chat with that digital identity long after he or she dies. The chatbot could, however, also be used while a person is still alive, effectively stealing that person’s identity.
A study by the Computational Propaganda Research Project at the Oxford Internet Institute found that half of all Twitter accounts regularly commenting on politics in Russia were bots. Secret agents control millions of botnet social media accounts that tweet about politics in order to shape national discourse. Such bots could even drive mainstream media coverage of fake news and influence stock prices.
Imagine those agents and botnets armed with artificial intelligence as well. Fake tweets and news stories backed by realistic HD video, audio, handwriting samples and government documents are an eerie prospect. Not only could falsehood be used to malign the honest, but the dishonest could invoke it in their defense, dismissing genuine evidence as fake.
Before the invention of the camera, recreating a scene in a court of law required witnesses and testimony. Later, photographs began assisting alongside the witnesses. But with the advent of digital photography and the rise of Photoshop, photographs lost much of their standing as reliable evidence. Today, audio and video recordings are admissible as evidence, provided they meet a certain quality and are unedited. It is only a matter of time before courts refuse audio and video evidence too, however genuine it might seem. An AI-powered tool that imitates handwriting could let someone manipulate legal and historical documents or fabricate evidence for use in court.
Countering the rise of forgery
With the potential misuse of artificial intelligence, the times ahead do indeed seem challenging. But the encouraging thing about technology is that every problem tends to invite a solution.
Blockchain, the technology securing cryptocurrencies, is being promoted as a cybersecurity solution for the internet of things and offers one way to counter forgery. With the widespread use of IoT and advances in embedded systems, it may be possible to design interconnected cameras and microphones that use blockchain technology to create a tamper-evident record of when a recording was made. Photos could likewise be traced back to their origin through their geotags.
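The provenance idea above can be sketched as a minimal hash chain. This is an illustrative toy, not any real blockchain or camera API: each recording's SHA-256 digest is chained to the previous ledger entry along with a timestamp, so altering any earlier recording breaks every later link.

```python
import hashlib
import json
import time

def chain_entry(prev_hash: str, media_bytes: bytes, timestamp: float) -> dict:
    """Create one tamper-evident ledger entry for a recording."""
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps(
        {"prev": prev_hash, "media": media_hash, "ts": timestamp},
        sort_keys=True,
    )
    return {
        "prev": prev_hash,
        "media": media_hash,
        "ts": timestamp,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify_chain(entries: list, media: list) -> bool:
    """Recompute every link; any edited recording breaks the chain."""
    prev = "0" * 64
    for entry, blob in zip(entries, media):
        expected = chain_entry(prev, blob, entry["ts"])
        if expected["hash"] != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# A camera would append an entry each time it records a clip:
clips = [b"clip-data-1", b"clip-data-2"]
ledger = []
prev = "0" * 64
for clip in clips:
    entry = chain_entry(prev, clip, time.time())
    ledger.append(entry)
    prev = entry["hash"]

print(verify_chain(ledger, clips))                    # True
print(verify_chain(ledger, [b"tampered", clips[1]]))  # False
```

A real deployment would replicate the ledger across many nodes so no single party could rewrite it; the sketch only shows why editing a recording after the fact is detectable.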
The Art and Artificial Intelligence Laboratory at Rutgers University is developing a neural network that can appreciate and understand art and the subtle differences within a drawing. It uses machine learning algorithms to analyze images, counting and quantifying different aspects of what it observes, and processes this information through an artificial neural network to recognize visual patterns in the artwork.
Similarly, neural networks can be trained to detect forged documents, historical evidence and currency notes. They can also identify fake identities and other bots on the internet by observing their behavior patterns and overlapping IP addresses. Forged video and audio could be compared across dates and platforms to identify their origin.
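Matching the same image across platforms, the kind of cross-platform comparison described above, is often done with perceptual hashing. Here is a toy "average hash" sketch (the 2x2 grid and the sample pixel values are illustrative assumptions): each pixel becomes one bit depending on whether it is brighter than the image's mean, so a re-encoded copy hashes close to the original while unrelated content hashes far away.

```python
def average_hash(pixels):
    """Perceptual hash: one bit per pixel, set if above mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [12, 198]]       # tiny 2x2 "image"
recompressed = [[11, 199], [13, 197]]   # same picture, slightly re-encoded
different = [[200, 10], [198, 12]]      # unrelated content

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(different)))     # 4
```

Production systems downscale real images to a small grid first and use far more robust hashes, but the principle is the same: a low Hamming distance flags the files as copies of one source.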
In addition, regulatory and procedural reforms are required to control this menace.
Even though audio and video manipulation tools aren’t entirely revolutionary, they no longer require professionals or powerful computers. We can’t stop criminals from getting their hands on them. If anything, making these tools available to everyone will show the public the power of artificial intelligence, and hence the easily forgeable nature of our media.