IS AI FOR THE LGBTQ COMMUNITY DANGEROUS AND FLAWED?

Source – https://www.analyticsinsight.net/

Can Artificial Intelligence benefit the LGBTQ community?

How can we tell whether somebody is gay? A Stanford University study has claimed that Artificial Intelligence (AI) using a facial recognition algorithm can infer whether an individual is gay or lesbian more accurately than humans can.

This Stanford University study, first reported in The Economist, has stirred controversy after asserting that artificial intelligence technology can determine whether individuals are gay or straight by analyzing pictures of a gay individual and a straight individual side by side.

The study has proved contentious not because computer algorithms apparently outperformed humans, but because of its questionable methodology, among other things its exclusive focus on white subjects and its exclusion of bisexual, transgender, and intersex participants. It also highlights the risk of AI outing sexual minorities against their will, exposing people to possible discrimination.

The LGBTQ community and advocacy groups have slammed the report as “junk science” and called it “dangerous and flawed” due to a clear lack of representation, racial bias, and its reduction of the sexuality spectrum to a binary.

What’s more, the LGBTQ community, frequently marginalized by conventional systems, should be wary of how artificial intelligence technology could filter them out. If LGBTQ people are not involved in shaping it, the technology could tell their story incorrectly and leave them behind as it expands.

Beyond the plain fact that defining queerness is impossible, artificial intelligence currently depends on commercial data stacks: data that sits in frameworks which do not always recognize our sexuality or gender identities beyond the binary.

Furthermore, where data does go far enough to attempt to understand the LGBTQ community, driven as it is by business imperatives, it will paint a specific picture: generally gay, white, and male.

This is because algorithms and systems seek to normalize LGBTQ users in order to optimize and streamline for all users in a one-size-fits-all approach. That tendency is compounded by the fact that we all carry our own experiences into the work we do and the systems we build.

Well, nobody wants to attribute bad motives to the Stanford study; its authors, after all, warn against some possible homophobic uses of their work. Even so, it is important to focus on those possible homophobic uses. Which state institutions, organizations, or groups are likely to use such a study, and to what ends? Ultimately, ethical concerns require us to ask whether such studies and technologies advance or undermine LGBTQ human rights.

Predicting somebody’s sexuality may sound harmless; however, in places that criminalize or police homosexuality and gender non-conformity, the consequences of prediction can be dangerous.

Michal Kosinski, co-author of the study and an assistant professor at Stanford, told the Guardian that he was surprised by the criticism, arguing that machine learning and artificial intelligence technologies already exist and that a driving motivation behind the study was to expose potentially dangerous applications of AI and to push for privacy rights and regulations.

We cannot ignore the consequences of algorithms that predict somebody’s sexuality. In a world where policies punish individuals based on their real or perceived sexual identity, overlooking such tools can have huge negative consequences.
