Opinion almost equally divided on new rules for ‘high-risk’ artificial intelligence
New legislation is needed for “high-risk” applications of artificial intelligence (AI), in the view of a slim majority of companies that responded to a recent European Commission consultation.
The commission on Friday published an overview of the 1,215 responses it received on possible AI legislation, which show a wide mix of opinions on how Europe should regulate the technology.
The dilemma for politicians is deciding whether to draw up brand new rules or modify existing legislation.
Forty-two per cent of respondents backed the introduction of a new regulatory framework for AI, while another 33 per cent said the technology could be covered by amending existing legislation. Only 3 per cent said current legislation is sufficient.
“It is interesting to note that respondents from industry and business were more likely to agree with limiting new compulsory requirements to high-risk applications [by] a percentage of 54.6 per cent,” the commission noted. High-risk applications could include self-driving cars, facial recognition technology and AI used in healthcare.
The consultation, which ran until June, followed publication of an EU AI white paper in February, spelling out options for AI laws. The paper likened the current situation to “the Wild West”, with AI applications like facial recognition technology coming into use without proper oversight.
Companies and researchers are aware the technology they are developing comes with risks.
The overwhelming majority of responses acknowledged the possibility of AI “breaching fundamental rights” and of its use leading to discriminatory outcomes. “Ninety per cent and 87 per cent of respondents [respectively] find these concerns important or very important,” the commission said.
Contributions arrived from all over the world, including the EU’s 27 member states and India, China, Japan, Syria, Iraq, Brazil, Mexico, Canada, the US and the UK.