Machine Learning: Long way to go for AI bias-correction; some hurl abuses, others see abuse where there’s none

13 Mar - by aiuniverse - In Machine Learning

Source – https://www.financialexpress.com/

While developers will need to return to the AI "drawing board" continuously, human oversight of AI and machine learning will be important for setting the context for the machines.

While the focus on keeping human biases from getting coded into artificial intelligence (AI) is desirable, there is also a need to develop AI that is "intelligent" about biases and contexts. The Indian Express reports that the reason YouTube's AI banned Agadmator, a popular chess channel on the platform, last year could be the use of "white", "black" and "attack"—words that mean very different things in chess and in race relations.

As more companies warm up to AI, AI platforms are being taught to screen for specific 'cue' words to detect bias or abuse. In this case, because of those particular words, YouTube's AI read racism where there was none. How poorly human understanding is being translated for machines is evident not just from this case, but also from that of Microsoft's Tay-bot, which all too quickly picked up anti-Semitic and hateful content from the internet when it should have been designed to filter this out contextually.
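The failure mode described above can be sketched in a few lines. The snippet below is a deliberately naive, hypothetical cue-word filter—it is not YouTube's actual system, and the word list and function names are illustrative only—showing how a context-blind screen flags ordinary chess commentary as abusive:

```python
# Hypothetical sketch of context-blind cue-word screening.
# TRIGGER_WORDS and naive_flag are illustrative assumptions,
# not any platform's real moderation API.

TRIGGER_WORDS = {"white", "black", "attack", "attacks"}

def naive_flag(text: str) -> bool:
    """Flag text if it contains any trigger word, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & TRIGGER_WORDS)

chess_commentary = "White attacks the black bishop on c5."
print(naive_flag(chess_commentary))   # flags harmless chess talk
print(naive_flag("The knight moves to f3."))
```

A context-aware screen would need more than a word list—for instance, weighing surrounding vocabulary (here, "bishop", "c5") before flagging—which is exactly the kind of contextual understanding the article argues current systems lack.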


AI ethics is surely a minefield—business interests, as various analyses of the recent episode at Google involving the termination of two senior ethics experts suggest, can sometimes come into conflict with the larger good. But as research translates human understanding for machines more effectively, chances are that both Tay-bot-style failures and, at the other extreme, YouTube's reported AI gaffe will become rarer.
