Source – https://www.cmswire.com/

Artificial intelligence (AI) is doing what the tech-world Cassandras have been predicting for some time: It is throwing curveballs, leaving a trail of misadventures and tricky questions about the ethics of using synthetic intelligence. Sometimes, spotting and understanding the dilemmas AI presents is easy, but often it is difficult to pin down the exact nature of the ethical questions it raises.

We need to heighten our awareness around the changes that AI demands in our thinking. If we don’t, AI will trigger embarrassing situations, erode reputations and damage businesses.

Positive and Negative Results From Using AI

Two years ago, Amazon abandoned the AI tool it used to recruit employees. The tool, which the company trained on a decade's worth of resumes submitted to the company, preferred male applicants. Recently, Twitter apologized for deploying an image cropping AI that preferred white faces over black ones. These are embarrassing (and unforgivable) outcomes of AI, but the ethical implications are clear.

By contrast, the example of South Korean national broadcaster SBS using AI to render songs in the voice of folk-rock singer Kim Kwang-Seok is delightful but considerably more complex. The popular singer has been dead for 25 years, yet still commands a large fan following. SBS trained the AI on 20 songs by Kim Kwang-Seok and used another 700 Korean folk songs to sharpen its accuracy. The AI can now render any song in Kim Kwang-Seok's voice. A song originally by Kim Bum-soo, rendered in the voice of Kim Kwang-Seok using AI, aired late in January. It was so convincing that it brought tears to the eyes of Kim Kwang-Seok fans. Music executives, on the other hand, were baffled: Who should the work be attributed to? Who owns the copyright for the work? Who will be paid royalties? The AI programmer? The producer? For the curious, SBS paid a one-off fee to Kim Kwang-Seok's family for borrowing his voice on the show. But publishing the song commercially presents perplexing questions.

Tomorrow’s songs need not necessarily be written by humans either. OpenAI’s text generators, such as Generative Pre-trained Transformer 3 (GPT-3), could use deep learning to write original songs that appear to be penned by Kim Bum-soo or any other songwriter. This opens up the limitless possibility of continuing to produce work by an artist long after their death. Could this mean AI can write and direct “2050: Beyond the Future” to keep alive the cinematic magic Arthur C. Clarke and Stanley Kubrick created with “2001: A Space Odyssey”?

GPT-3 has the potential to do that. Last June, it sent powerful waves across the AI community when Sharif Shameem, the app development head of a startup, used it to construct a program simply by describing a UI in plain English. GPT-3 responded by spitting out JSX code that produced a UI matching what Shameem wanted. Shameem said, “I only had to write two samples to give GPT-3 context for what I wanted it to do. It then properly formatted all of the other samples.”
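To make that workflow concrete, here is a minimal sketch of the kind of two-sample (“few-shot”) prompt Shameem described, written against OpenAI’s original Python completions API. The example descriptions, the `davinci` engine name and the prompt wording are illustrative assumptions, not Shameem’s actual setup.

```python
import openai  # pip install openai (the original completions-style SDK)

openai.api_key = "YOUR_API_KEY"  # assumes you have GPT-3 API access

# Two hand-written examples give GPT-3 context; it infers the pattern
# and completes the third description with matching JSX.
# These sample descriptions are hypothetical, for illustration only.
prompt = """Description: a button that says "Subscribe"
JSX: <button>Subscribe</button>

Description: a heading that says "Welcome" above a text input
JSX: <div><h1>Welcome</h1><input type="text" /></div>

Description: a search box with a "Go" button next to it
JSX:"""

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model available at the time
    prompt=prompt,
    max_tokens=64,
    temperature=0,      # deterministic output suits code generation
    stop=["\n\n"],      # stop at the end of the generated sample
)

print(response.choices[0].text.strip())
```

The point is that the “programming” here is nothing more than pattern demonstration: two worked examples in plain text stand in for what would otherwise be a trained, task-specific model.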

GPT-3 doesn’t just reproduce human-like “stuff”; it performs as well. In one instance, it was given code in Python and asked to describe what the code does. The program not only did that, it also offered improvements and suggested where to post the code once improved. GPT-3 can identify paintings from descriptions and recommend books. It can write entire articles for publications. In one instance, it managed to express a bunch of popular movies in emoji. The extraordinary part? GPT-3 requires no task-specific training. It uses 175 billion parameters (by comparison, the closest anything comes to GPT-3 is Microsoft’s Turing NLG, which uses 17 billion parameters) to generate text that sounds human. You could use it to write your next quarterly report and save some valuable time.
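For a flavor of how such a request looks in practice, this hypothetical sketch asks GPT-3 to explain a Python snippet using nothing but a plain-English instruction, with no examples at all (same assumed completions API as above; the prompt wording and snippet are made up for illustration):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have GPT-3 API access

# A small Python function we want GPT-3 to explain -- no fine-tuning,
# just an instruction embedded in the prompt.
code_snippet = """def dedupe(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out"""

prompt = (
    "Explain what the following Python code does:\n\n"
    f"{code_snippet}\n\nExplanation:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=100,
    temperature=0.2,  # low temperature keeps the explanation focused
)

print(response.choices[0].text.strip())
```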

The Danger of Deep Fakes

There are obvious social dangers in deploying AI like this, the most direct being the kind of biased training data that led to Amazon’s recruitment breakdown and Twitter’s image cropping fail. But worse lurks around the corner. Capabilities like those used by the Korean broadcaster, combined with those of GPT-3, make it easy to produce deep fakes.
