Former Google CEO Eric Schmidt warns against overregulation of AI

Source: venturebeat.com

Former Google CEO Eric Schmidt urged cooperation with Chinese scientists, warned against the threat of misinformation, and advised against overregulation by governments today in a broad-ranging speech about AI ethics and regulation of big tech companies. He also talked about conflict deterrence between nation-states in the age of AI and pondered how secretaries of state might share information in the coming age of artificial general intelligence (AGI).

“What are the norms of this? This area strikes me as one that’s nascent but will become very important as general intelligence becomes more and more possible some time from now,” he said. “We haven’t had a common regime around how all that works.”

In a speech at Stanford University’s Hoover Institution today, he praised progress made in the field of AI in areas like autonomous driving and medicine, federated learning for privacy-preserving on-device machine learning, and eye scans for detection of cardiovascular issues. He predicted that a combination of generative adversarial networks and reinforcement learning will lead to major advances in science in the years ahead.

He also urged government restraint in regulation of technology as the AI industry continues to grow.

“I would be careful of building any form of additional regulatory structure that’s extralegal,” Schmidt said when a member of the audience proposed the creation of a new federal agency to critique algorithms used by private companies.

Schmidt shared the stage with Marietje Schaake, a Stanford Institute for Human-Centered Artificial Intelligence (HAI) fellow and Dutch former member of European Parliament who played a role in the passage of GDPR regulation. She countered that companies warning regulation may stifle innovation often assume technology is more important than democracy and the rule of law.

A hands-off approach on tech regulation has led to the creation of new monopolies, thrown journalism into turmoil, and allowed the balkanization of the internet, she said. Failure to act now, she added, could allow for AI to accelerate and amplify discrimination. She suggested systematic impact assessments to operate in parallel with AI research so that our understanding of negative impacts can mirror progress.

“I think it’s very clear that tech companies can all stay on the fence in taking a position in relation to values and rights. I personally believe that a rules-based system serves the public interest as well as collective rights and liberties the companies benefit from,” she said. “I see clear momentum now between the EU and U.S. and a significant part of the democratic world, where [we] can catch up to the civil regulatory gaps platforms and other digital services … anticipating the broader use of artificial intelligence.”

She also argued that big tech self-regulation efforts have failed and emphasized the need for empowering regulators in order to defend democracy.

“Because with great power should come great responsibility, or at least modesty,” she said. “Everyone has a role to play to strengthen the resilience of our democracy.”

Schaake and Schmidt spoke for more than an hour this morning at a symposium held by the Stanford University Institute for Human-Centered AI about AI ethics, policy, and governance.

The debate between the two comes at a time when regulators in the United States have increased scrutiny of tech giants. Companies like Google currently face antitrust investigations from state attorneys general, and Democratic presidential candidate Elizabeth Warren has made the breakup of tech giants a central part of her campaign.

Last month, due to Schmidt’s potential role in issues ranging from Google’s project to enter mainland China to its work with the Department of Defense to its $90 million payout to Andy Rubin despite sexual harassment allegations, a number of AI ethicists asked HAI to rescind its invitation to this event. The petition, written by Tech Inquiry founder Jack Poulson, was signed by roughly 50 people, about a dozen of whom currently work as engineers at Google.

In response to the petition, HAI published a tweet warning against the dangers of “damaging intellectual blindness.”

The Pentagon’s Defense Innovation Board AI ethics recommendations and the report from the National Security Commission on Artificial Intelligence — two committees that Schmidt oversees — are due out October 31 and November 5, respectively.

Both initiatives are aimed at helping the United States create a national AI strategy, as roughly 30 other nations around the world have done, he said. Last week, founders of the Stanford center called for $120 billion in government spending over the course of the next decade as part of a national strategy.
