INSTANCES OF ETHICAL DILEMMA IN THE USE OF ARTIFICIAL INTELLIGENCE
With the growing use of artificial intelligence, instances of ethical dilemmas are rising.
‘To be or not to be’: the ethical dilemma is a constant of human life whenever a decision must be made. In the world of technology, artificial intelligence comes closest to human-like attributes: it aims to automate human intelligence in operating and making decisions. However, an AI machine cannot make a truly independent decision; the mindset of its programmer is reflected in how it operates. An autonomous car, in the event of an unavoidable accident, might have to decide whom to save first, for instance whether a child should be saved before an adult. Ethical challenges faced by AI systems include lack of transparency, biased decisions, surveillance practices in data gathering, threats to user privacy, and risks to fairness, human rights, and other fundamental values.
Influences of Human Behavior
While human attention and patience are limited, a machine’s are not: its limitations are technical rather than emotional. Although this could benefit fields like customer service, this limitless capacity could also foster human addiction to robot affection. Many apps already use algorithms to nurture addictive behavior. Tinder, for example, is designed to keep users on the A.I.-powered app by surfacing less likely matches the longer a user stays in a session.
Bias in Predictive Systems
One of the most pressing and widely discussed A.I. ethics issues is trained bias in systems that make predictive decisions, such as hiring or policing. Amazon famously ran into a hiring bias issue after training an A.I.-powered algorithm to identify strong candidates based on historical data. Because previous candidates had been chosen through human bias, the algorithm learned that bias and favored men as well. This exposed gender bias in Amazon’s hiring process, which is not ethical.
In March, the NYPD disclosed that it had developed Patternizr, an algorithmic machine-learning tool that sifts through police data to find patterns and connect similar crimes, and had used it since 2016. The software is not used for rape or homicide cases and excludes factors like gender and race when searching for patterns. Although this is a step forward from earlier algorithms that were trained on racially biased data to predict crime and parole violation, actively removing bias from historical data sets is still not standard practice. That means this trained bias is at best an insult and an inconvenience; at worst, a risk to personal freedom and a catalyst of systemic oppression.
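The mechanism behind the Amazon example can be sketched in a few lines. The data below is entirely made up for illustration, and the “model” is just a hired-rate estimate per group, not Amazon’s actual system; the point is that a model fit to biased historical decisions reproduces that bias.

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, hired) pairs.
# The past decisions themselves are biased: men were hired more often.
history = ([("male", True)] * 80 + [("male", False)] * 20 +
           [("female", True)] * 40 + [("female", False)] * 60)

def train_hire_rate(records):
    """'Train' by estimating P(hired | gender) from historical records."""
    hired, total = Counter(), Counter()
    for gender, was_hired in records:
        total[gender] += 1
        if was_hired:
            hired[gender] += 1
    return {g: hired[g] / total[g] for g in total}

model = train_hire_rate(history)

# Two otherwise identical candidates receive different scores,
# purely because the model mirrors the bias in its training data.
print(model["male"], model["female"])   # 0.8 0.4
```

Nothing in this sketch is malicious code: the unfairness enters entirely through the data, which is why simply removing a protected attribute is often not enough when other features correlate with it.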
Making of Fake News
Deepfakes are among the best-known malicious uses of A.I. The technique uses A.I. to superimpose images, video, and audio onto other media, creating a false impression of authentic footage, most often with malicious intent. Deepfakes can include face swaps, voice imitation, facial re-enactment, lip-syncing, and more. Unlike older photo and video editing techniques, deepfake technology is becoming progressively more accessible to people without great technical skill. Similar technology was used during the last U.S. presidential election, when Russia engaged in reality hacking (such as the spread of fake news through Facebook feeds). This kind of information warfare is becoming commonplace and exists not only to alter facts but to powerfully change opinions and attitudes. The practice was also used during the Brexit campaign and is increasingly cited as an example of rising political tensions and confused global perspectives.
Privacy Concerns of the Consumers
Most consumer devices (from cell phones to Bluetooth-enabled light bulbs) use artificial intelligence to collect our data and provide better, more personalized service. If consensual, and if the data collection is done with transparency, this personalization is an excellent feature. Without consent and transparency, it could easily become malignant. Although a phone-tracking app is useful after leaving your iPhone in a cab, or losing your keys between the couch cushions, tracking individuals can be harmful at a small scale (as for domestic abuse survivors seeking privacy) or at a large scale (as in government surveillance).
These instances show how artificial intelligence gives rise to ethical dilemmas. They also confirm that AI can be ethical only if its creators and programmers want it to be.