<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google’s Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/googles/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/googles/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Thu, 15 Jul 2021 10:04:12 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>HOW GOOGLE’S AI FUNDAMENTALS &#038; APPLICATIONS FOCUSES ON RESEARCH</title>
		<link>https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/</link>
					<comments>https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 15 Jul 2021 10:04:10 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[FOCUSES]]></category>
		<category><![CDATA[FUNDAMENTALS]]></category>
		<category><![CDATA[Google’s]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14994</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Google’s AI creates solutions to fundamental computational problems Google’s AI team works on exploring solutions to computational problems in theory, algorithms, journalism, machine learning, speech, <a class="read-more-link" href="https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/">HOW GOOGLE’S AI FUNDAMENTALS &#038; APPLICATIONS FOCUSES ON RESEARCH</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">Google’s AI creates solutions to fundamental computational problems</h2>



<p>Google’s AI team works on exploring solutions to computational problems in theory, algorithms, journalism, machine learning, speech, and other data-driven streams, with an impact on Google’s products and on scientific progress. It focuses on two tools: software libraries, which carry research findings into products and services, and publications, which make the work known to the community. Let’s take a look at Google’s AI applications.</p>



<p>Most real-world graph-based learning applications involve varied information about relationships between data items. The team’s main aim is to extend machine learning (ML) approaches to model these relationships better. The resulting methods are used in many Google products.</p>



<p>Google has a long history of building and applying machine learning techniques, having previously developed a core Google API for supervised machine learning. More recently, it has been researching and developing tools for the TensorFlow ecosystem. Google’s AI team actively collaborates with other Google products such as Docs, Search, and Ads to deploy ML-based solutions drawn from cutting-edge research.</p>



<p>The work spans supervised learning and semi-/unsupervised learning, with areas of focus that include personalization, optimization, data-dependent hashing, privacy-preserving learning, and many more. The Google AI team has developed principled approaches and has successfully applied them to Google’s products, powering Search and Display Ads, YouTube, and Google Shopping.</p>



<p>The online clustering team provides clustering of datasets that can extend to billions of data points, streaming output at thousands of points per second. The goal is to provide scalable, nonparametric clustering that makes minimal assumptions about the data. The team has also designed techniques to handle drift in the incoming data.</p>
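

<p>The article doesn’t describe the clustering algorithm itself, so the following is only a minimal sketch of the general idea of online (streaming) clustering in Python: each arriving point is folded into the nearest existing cluster, or starts a new one when it is too far away. The distance threshold and the running-mean centroid update are illustrative assumptions, not the team’s actual method.</p>



<pre class="wp-block-code"><code># Minimal sketch of online (streaming) clustering (illustrative only):
# each point joins the nearest centroid, or opens a new cluster when it
# is farther than a fixed threshold.
import numpy as np

class OnlineClusterer:
    def __init__(self, threshold=0.5):
        self.threshold = threshold   # max distance to join an existing cluster
        self.centroids = []          # running centroids, one per cluster
        self.counts = []             # points absorbed by each cluster

    def add(self, point):
        point = np.asarray(point, dtype=float)
        best = None
        if self.centroids:
            dists = [np.linalg.norm(point - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] > self.threshold:
                best = None          # too far from everything: open a new cluster
        if best is None:
            self.centroids.append(point)
            self.counts.append(1)
            return len(self.centroids) - 1
        # fold the point into the nearest cluster with a running mean
        self.counts[best] += 1
        self.centroids[best] += (point - self.centroids[best]) / self.counts[best]
        return best

clusterer = OnlineClusterer(threshold=0.5)
stream = np.random.rand(10_000, 8)          # stand-in for a real data stream
labels = [clusterer.add(p) for p in stream]
</code></pre>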



<p>Another interesting area of research is cross-lingual, cross-modal access to dynamically organized information, aimed at making writing, watching, and reading an immersive experience. The team’s co-authoring technology powers web content in Google Docs, and new applications are yet to come.</p>



<p>Google’s AI team sifts through data to discover, understand, and model indirect user behaviors, partnering with products such as Ads and YouTube, with more to be added soon. Because structured data is vital to Google products such as Fact Check, Search, and Q&amp;A, the team uses a wide range of techniques, including machine learning and data mining, for information retrieval and extraction. It also develops techniques for fast inference in ML models, improving speed by over 50x while keeping solutions accurate.</p>



<p>It devises automata, grammars, and other models for speech and keyboard input, written-to-spoken transduction, and extraction. These can be combined and optimized to deliver accurate, efficient speech recognition, text normalization, and more. Sensitive content detection involves creating a comprehensive set of classifiers for detecting offensive content of any kind, whether text, images, or video. Google’s AI team has accomplished this using a variety of techniques, such as ML models trained on images and text from the web.</p>



<p>Many teams within Google AI have developed algorithms and machine learning systems for learning user preferences and delivering personalized, targeted experiences. Google’s AI also develops systems for transforming cloud-resident ML models into versions that run on resource-constrained mobile devices. Beyond this, it enriches electronic conversations by understanding media, using multi-modal signals from images, video, text, and the web.</p>



<p>Glassbox Learning conducts research and development into making machine learning more interpretable without compromising accuracy, and into providing end-to-end guarantees on the relationship between inputs and outputs. The team’s AdaNet adaptively learns both the structure of a network and its weights, building on deep boosting with solid theoretical analysis, including data-dependent generalization guarantees. Google’s AI is doing impressive research work with a varied set of tools and applications.</p>
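

<p>AdaNet’s published algorithm weighs candidate subnetworks with a complexity-regularized objective; the snippet below is only a rough sketch of the broader idea of adaptively growing a model, where candidate subnetworks of increasing width are trained on the current residual and kept only if they reduce validation error. The candidate family, training routine, and acceptance rule are assumptions made for illustration.</p>



<pre class="wp-block-code"><code># Rough sketch of adaptive structure learning (illustrative, not AdaNet):
# grow an ensemble of small subnetworks, accepting each candidate only if
# it lowers held-out error.
import numpy as np

rng = np.random.default_rng(0)

def predict(members, X):
    out = np.zeros(X.shape[0])
    for m in members:
        out = out + m(X)
    return out

def fit_random_feature_net(X, y, width):
    """One candidate 'subnetwork': random hidden features, ridge-fit readout."""
    W = rng.normal(size=(X.shape[1], width))
    H = np.tanh(X @ W)
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(width), H.T @ y)
    return lambda Z: np.tanh(Z @ W) @ beta

def grow_ensemble(X_tr, y_tr, X_va, y_va, widths=(4, 8, 16, 32)):
    members = []
    best_err = np.mean(y_va ** 2)            # error of predicting zero
    for width in widths:
        # train the candidate on the residual the current ensemble leaves behind
        resid = y_tr - predict(members, X_tr)
        candidate = fit_random_feature_net(X_tr, resid, width)
        err = np.mean((y_va - predict(members + [candidate], X_va)) ** 2)
        if best_err > err:                   # keep the candidate only if it helps
            members.append(candidate)
            best_err = err
    return members, best_err

# toy usage on synthetic data
X = rng.normal(size=(600, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=600)
model, err = grow_ensemble(X[:400], y[:400], X[400:], y[400:])
</code></pre>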
<p>The post <a href="https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/">HOW GOOGLE’S AI FUNDAMENTALS &#038; APPLICATIONS FOCUSES ON RESEARCH</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-googles-ai-fundamentals-applications-focuses-on-research/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>All You Need To Know About Google’s Visual Inspection AI</title>
		<link>https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/</link>
					<comments>https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 28 Jun 2021 09:04:18 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Google’s]]></category>
		<category><![CDATA[Inspection]]></category>
		<category><![CDATA[Need]]></category>
		<category><![CDATA[Visual]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14611</guid>

					<description><![CDATA[<p>Source &#8211; https://analyticsindiamag.com/ In 2019, Google Cloud identified six sectors as vital components of its growth: public, healthcare, financial services, retail, media, and manufacturing. Within manufacturing, the cost of quality <a class="read-more-link" href="https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/">All You Need To Know About Google’s Visual Inspection AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://analyticsindiamag.com/</p>



<p>In 2019, Google Cloud identified six sectors as vital components of its growth: the public sector, healthcare, financial services, retail, media, and manufacturing. Within manufacturing, the cost of quality control and inspection continues to be among the highest. The American Society for Quality estimates that the cost of quality may be as high as 15 to 20 percent of annual sales revenue for many organisations. For larger manufacturers, this translates into billions of dollars every year. Additionally, the rapid increase in production volumes makes it difficult for humans to manually inspect defects in computer chips and other products. To address this, Google Cloud has recently announced an artificial intelligence (AI)-backed approach to visual inspection.</p>



<p>The newly launched Visual Inspection AI is a purpose-built tool that helps manufacturers, related workers, and businesses inspect products, reduce defects, and decrease quality control costs. Powered by Google Cloud Platform’s computer vision technology, Visual Inspection AI goes beyond the way Google has traditionally supported manufacturing quality control through its general-purpose AI product, AutoML.</p>



<p>According to Kevin Prouty, Group Vice President of Energy and Manufacturing at IDC, “Google Cloud’s approach to visual inspection is the roadmap most manufacturing companies are looking for.”</p>



<p>Visual Inspection AI aims to automate quality assurance workflows, allowing companies to identify and correct defects before products ship. The new tool automates visual inspection using a combination of AI and computer vision to improve production: increasing yields, reducing re-work, and cutting back on return-and-repair costs.</p>



<h3 class="wp-block-heading" id="h-previous-methods"><strong>Previous methods</strong></h3>



<p>COVID-19 has increasingly driven manufacturers to adopt AI into their production processes. According to a Google Cloud survey, 76 percent of executives say they have embraced digital enablers such as AI, data analytics and cloud computing. Additionally, 66 percent of manufacturers who use AI in their daily operations have stated that their reliance on the technology is increasing.</p>



<p>Amid this shift, traditional approaches to quality control inspection fall short. Traditionally, manufacturers include one or more steps in which products are inspected visually for defects. The visual inspection process is typically highly manual, making it vulnerable to human error and highly time-consuming. Moreover, traditional inspection machinery is not flexible enough to adapt to product changes and can only detect a handful of defect types at any given time.</p>



<p>Artificial intelligence, then, is the agent that manufacturers hope will bring a more significant wave of innovation. Google Cloud lists multiple benefits of utilising AI, ranging from reduced cognitive load for operators and fewer missed defects to no programming requirement (making it more flexible than previous machines) and the ability to detect hundreds of areas of interest on a product in seconds.</p>



<h3 class="wp-block-heading" id="h-google-s-new-solution"><strong>Google’s new solution</strong></h3>



<p>According to Kyocera Communications Systems, a major manufacturer of mobile phones for wireless service providers, Visual Inspection AI is an innovative service that non-AI engineers can use. Google Cloud says that Visual Inspection AI meets the needs of quality, testing, manufacturing, and process engineers who might not be well-versed in AI despite being experts in their respective fields. The new tool thus offers substantial benefits compared with general-purpose machine learning (ML) models, such as superior computer vision technology, shorter time-to-value, and high scalability. Customers can deploy solutions within weeks, with an interactive user interface guiding them through the steps.</p>



<p>Visual Inspection AI has also improved accuracy by up to 10 times compared with general ML approaches. Finally, Visual Inspection AI goes well beyond simple anomaly detection: it allows customers to train models that detect, classify, and locate multiple defect types in a single image, which in turn lets follow-up tasks on production lines be automated.</p>



<p>There are many ways in which businesses can use Google Cloud’s Visual Inspection AI in manufacturing. Automotive manufacturers, for one, can use it for paint shop surface inspection or press shop inspection, looking for scratches, dents, cracks, or staining. Electronics manufacturers could employ the tool to find defects in printed circuit board components, and general-purpose manufacturers could improve procedures such as packaging and label inspection, fabric inspection, and metal welding seam inspection, to name a few.</p>



<p>According to the Google Cloud survey on manufacturing trends mentioned above, the most common roadblock to AI integration is a lack of talent to leverage AI properly. Given this, Google Cloud’s new Visual Inspection AI looks like a significant step towards the proper deployment of artificial intelligence in the manufacturing industry.</p>



<p>The post <a href="https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/">All You Need To Know About Google’s Visual Inspection AI</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/all-you-need-to-know-about-googles-visual-inspection-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s AI advertising revolution: More privacy, but problems remain</title>
		<link>https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/</link>
					<comments>https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 16 Mar 2021 07:20:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ADVERTISING]]></category>
		<category><![CDATA[Google’s]]></category>
		<category><![CDATA[privacy]]></category>
		<category><![CDATA[problems]]></category>
		<category><![CDATA[revolution]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13533</guid>

					<description><![CDATA[<p>Source &#8211; https://theconversation.com/ In March 2021, Google announced that it was ending support for third-party cookies, and moving to “a more privacy-first web.” Even though the <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/">Google’s AI advertising revolution: More privacy, but problems remain</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://theconversation.com/</p>



<p>In March 2021, Google announced that it was ending support for third-party cookies and moving to “a more privacy-first web.” Even though the move was expected within the industry and by academics, there is still confusion about the new model, and cynicism about whether it truly constitutes the kind of revolution in online privacy that Google claims.</p>



<p>To assess this, we need to understand the new model and what is changing. The current advertising technology (adtech) approach is one in which platform corporations give us a “free” service in exchange for our data. The data is collected via third-party cookies downloaded to our devices, which allow a browser to record our internet activity. This data is used to create profiles and predict our susceptibility to specific ad campaigns.</p>



<p>Recent advances have allowed digital advertisers to use deep learning, a form of artificial intelligence (AI) wherein humans do not set the parameters. Although more powerful, this is still consistent with the old model, relying on collecting and storing our data to train models and make predictions. Google’s plans go further still.</p>



<h2 class="wp-block-heading">Patents and plans</h2>



<p>All corporations have their secret sauce, and Google is more secretive than most. However, patents can reveal some of what they’re up to. After an exploration of Google patents, we found U.S. patent US10885549B1, “Targeted advertising using temporal analysis of user-specific data”: a patent for a system that predicts the effectiveness of ads based on a user’s “temporal data,” snapshots of what a user is doing at a specific point in time, rather than on indiscriminate mass data collection over a longer period.</p>



<p>We can also make inferences by examining work from other organizations. Research funded by adtech company Bidtellect demonstrated that long-term historical user data is not necessary to generate accurate predictions. They used deep learning to model users’ interests from temporal data.</p>



<p>Alongside contextual advertising — which displays ads based on the content of the website on which they appear — this could lead to more privacy-conscious advertising. And without storing personally identifiable information, this approach would be compliant with progressive laws like the European Union’s General Data Protection Regulation (GDPR).</p>



<p>Google has also released some information through the Google Privacy Sandbox (GPS), a set of public proposals to restructure adtech. At its core is Federated Learning of Cohorts (FLoC), a decentralized AI system deployed in the latest browsers. As the Google AI blog explains, federated learning differs from traditional machine learning techniques that collect and process data centrally. Instead, a deep learning model is downloaded temporarily onto a device, where it trains on our data, before returning to the server as an updated model to be combined with others.</p>



<p>With FLoC, the deep learning model is downloaded to the Google Chrome browser, where it analyzes local browsing data. It then sorts the user into a “cohort,” a group of a few thousand users sharing a set of traits identified by the model. It makes an encrypted copy of itself, deletes the original and sends the encrypted copy back to Google, leaving behind only a cohort number. Because each cohort contains thousands of users, Google maintains that the individual becomes virtually unidentifiable.</p>
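

<p>As a rough illustration of the pattern described above (and not Chrome’s actual FLoC code), the sketch below keeps raw browsing data on the device, uses it only to compute a local model update and a coarse cohort id, and sends nothing else back. The least-squares local training, the hashing-based cohort assignment, and the averaging step are all assumptions made for illustration.</p>



<pre class="wp-block-code"><code># Illustrative sketch of the federated pattern described above, not Chrome's
# actual FLoC implementation: raw browsing data never leaves the device; each
# device reports only (a) a model update and (b) a coarse cohort id that is
# shared with thousands of other users.
import numpy as np

rng = np.random.default_rng(1)
NUM_COHORTS = 4096                          # each cohort should hold many users

def local_update(global_weights, local_history, lr=0.1, steps=10):
    """Train on-device; only the weight delta is returned, never the data."""
    w = global_weights.copy()
    X, y = local_history                    # e.g. features of visited pages
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # plain least-squares gradient
        w = w - lr * grad
    return w - global_weights

def cohort_id(local_history):
    """Coarse, lossy summary of browsing interests (illustrative hashing)."""
    X, _ = local_history
    profile = np.sign(X.mean(axis=0))       # quantize the interest vector
    return hash(profile.tobytes()) % NUM_COHORTS

def federated_round(global_weights, devices):
    """Server step: average the updates that come back from the devices."""
    deltas = [local_update(global_weights, d) for d in devices]
    return global_weights + np.mean(deltas, axis=0)

# toy simulation: 100 devices, each with 50 local examples of 8 features
devices = [(rng.normal(size=(50, 8)), rng.normal(size=50)) for _ in range(100)]
weights = np.zeros(8)
for _ in range(5):
    weights = federated_round(weights, devices)
cohorts = [cohort_id(d) for d in devices]   # the only per-user signal shared
</code></pre>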



<h2 class="wp-block-heading">Cohorts and concerns</h2>



<p>In this new model, advertisers don’t select individual characteristics to target, but instead advertise to a given cohort, as Google’s Github page explains. Although FLoC may sound less effective than collecting our individual data, Google claims it delivers “95 per cent of the conversions per dollar spent when compared with cookie-based advertising.”</p>



<p>The bidding process for ads will also take place in the browser, using another system codenamed “Turtledove.” Soon, Google adtech will all work this way, contained within the web browser, making constant ad predictions based on our most recent actions, without collecting or storing personally identifiable information.</p>



<p>We see three key concerns. First, this is only part of a much larger AI picture Google is building across the internet. Through Google Analytics, for example, Google continues to use data gained from individual website-based first-party cookies to train machine learning models and potentially build individual profiles.</p>



<p>Secondly, does it matter how an organization comes to “know” us, or is it the fact that it knows? Google is giving us back legally acceptable individual data privacy, but it is intensifying its ability to know us and commodify our online activity. Is privacy the right to control our individual data, or the right for the essence of ourselves to remain unknown without consent?</p>



<p>The final issue concerns AI. The limitations, biases and injustices around AI are now a matter of widespread debate. We need to understand how the deep learning tools in FLoC group us into cohorts, what qualities they attribute to cohorts, and what those qualities represent. Otherwise, like every previous marketing system, FLoC could further entrench socio-economic inequalities and divisions.</p>



<p>The post <a href="https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/">Google’s AI advertising revolution: More privacy, but problems remain</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-ai-advertising-revolution-more-privacy-but-problems-remain/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google’s deep learning finds a critical path in AI chips</title>
		<link>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/</link>
					<comments>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 06:53:02 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[CHIPS]]></category>
		<category><![CDATA[Critical]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Google’s]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13136</guid>

					<description><![CDATA[<p>Source &#8211; https://www.zdnet.com/ The work marks a beginning in using machine learning techniques to optimize the architecture of chips. This month, Google unveiled to the world one <a class="read-more-link" href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Google’s deep learning finds a critical path in AI chips</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.zdnet.com/</p>



<p>The work marks a beginning in using machine learning techniques to optimize the architecture of chips.</p>



<p>This month, Google unveiled to the world one of its research projects, called Apollo, in a paper posted on the arXiv pre-print server, &#8220;Apollo: Transferable Architecture Exploration,&#8221; and in a companion blog post by lead author Amir Yazdanbakhsh. </p>



<p>Apollo represents an intriguing development that moves past what Google AI head Jeff Dean hinted at in his formal address a year earlier at the International Solid-State Circuits Conference, and in his remarks to&nbsp;<em>ZDNet</em>.</p>



<p>In the example Dean gave at the time, machine learning could be used for some low-level design decisions, known as &#8220;place and route.&#8221; In place and route, chip designers use software to determine the layout of the circuits that form the chip&#8217;s operations, analogous to designing the floor plan of a building.</p>



<p>In Apollo, by contrast, rather than a floor plan, the program is performing what Yazdanbakhsh and colleagues call &#8220;architecture exploration.&#8221;&nbsp;</p>



<p>The architecture for a chip is the design of the functional elements of a chip, how they interact, and how software programmers should gain access to those functional elements.&nbsp;</p>



<p>For example, a classic Intel x86 processor has a certain amount of on-chip memory, a dedicated arithmetic-logic unit, and a number of registers, among other things. The way those parts are put together gives the so-called Intel architecture its meaning.</p>



<p>Asked about Dean&#8217;s description, Yazdanbakhsh told&nbsp;<em>ZDNet</em>&nbsp;in email, &#8220;I would see our work and place-and-route project orthogonal and complementary.</p>



<p>&#8220;Architecture exploration is much higher-level than place-and-route in the computing stack,&#8221; explained Yazdanbakhsh, referring to a presentation by Cornell University&#8217;s Christopher Batten. </p>



<p>&#8220;I believe it [architecture exploration] is where a higher margin for performance improvement exists,&#8221; said Yazdanbakhsh.</p>



<p>Yazdanbakhsh and colleagues call Apollo the &#8220;first transferable architecture exploration infrastructure,&#8221; the first program that gets better at exploring possible chip architectures the more it works on different chips, thus transferring what is learned to each new task.</p>



<p>The chips that Yazdanbakhsh and the team are developing are themselves chips for AI, known as accelerators. This is the same class of chips as the Nvidia A100 &#8220;Ampere&#8221; GPUs, the Cerebras Systems WSE chip, and many other startup parts currently hitting the market. Hence, a nice symmetry, using AI to design chips to run AI.</p>



<p>Given that the task is to design an AI chip, the architectures that the Apollo program is exploring are architectures suited to running neural networks. And that means lots of linear algebra, lots of simple mathematical units that perform matrix multiplications and sum the results.</p>



<p>The team defines the challenge as one of finding the right mix of those math blocks to suit a given AI task. They chose a fairly simple AI task: a convolutional neural network called MobileNet, a resource-efficient network designed in 2017 by Andrew G. Howard and colleagues at Google. In addition, they tested workloads using several internally designed networks for tasks such as object detection and semantic segmentation.</p>



<p>In this way, the goal becomes,&nbsp;<em>What are the right parameters for the architecture of a chip such that for a given neural network task, the chip meets certain criteria such as speed?</em></p>



<p>The search involved sorting through over 452 million parameters, including how many of the math units, called processor elements, would be used, and how much parameter memory and activation memory would be optimal for a given model.&nbsp;</p>
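

<p>The article doesn’t spell out how Apollo navigates that space (the paper compares several black-box optimizers against a hardware simulator); the snippet below is only a toy sketch of the search problem as described: choose the processor-element count and memory sizes so that a stand-in cost model meets a latency goal within an area budget. The configuration grid, the cost model, and the use of plain random search are illustrative assumptions, not the paper’s method.</p>



<pre class="wp-block-code"><code># Toy sketch of architecture exploration as black-box search (illustrative
# only): sample accelerator configurations, score each with a stand-in cost
# model, and keep the fastest one that fits an area budget.
import itertools
import random

PE_COUNTS = [32, 64, 128, 256, 512]          # processing elements
PARAM_MEM_KB = [256, 512, 1024, 2048, 4096]  # parameter memory
ACT_MEM_KB = [128, 256, 512, 1024]           # activation memory

def cost_model(pe, pmem, amem, workload_macs=300e6):
    """Stand-in for a hardware simulator: rough latency and silicon area."""
    latency_ms = workload_macs / (pe * 1e6) + 2048.0 / pmem + 1024.0 / amem
    area_mm2 = 0.02 * pe + 0.001 * (pmem + amem)
    return latency_ms, area_mm2

def random_search(trials=2000, area_budget_mm2=12.0, seed=0):
    rng = random.Random(seed)
    space = list(itertools.product(PE_COUNTS, PARAM_MEM_KB, ACT_MEM_KB))
    best_cfg, best_latency = None, float("inf")
    for _ in range(trials):
        cfg = rng.choice(space)
        latency, area = cost_model(*cfg)
        if area > area_budget_mm2:           # infeasible: exceeds area budget
            continue
        if best_latency > latency:           # new best feasible configuration
            best_cfg, best_latency = cfg, latency
    return best_cfg, best_latency

print(random_search())
</code></pre>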



<p>The post <a href="https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/">Google’s deep learning finds a critical path in AI chips</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/googles-deep-learning-finds-a-critical-path-in-ai-chips/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
