<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>putting Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/putting/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/putting/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 16 Jul 2019 09:51:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Here&#8217;s how Google is putting AI to work in healthcare, environmental conservation, agriculture and more</title>
		<link>https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/</link>
					<comments>https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 16 Jul 2019 09:51:05 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Agriculture]]></category>
		<category><![CDATA[conservation]]></category>
		<category><![CDATA[environmental]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Healthcare]]></category>
		<category><![CDATA[putting]]></category>
		<category><![CDATA[work]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4043</guid>

					<description><![CDATA[<p>Source:digit.in Earlier this year, Microsoft had invited us to its Bengaluru campus for a two-day briefing on how it&#8217;s incorporating artificial intelligence (AI) in many of its <a class="read-more-link" href="https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/">Here&#8217;s how Google is putting AI to work in healthcare, environmental conservation, agriculture and more</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: digit.in</p>



<p>Earlier this year, Microsoft invited us to its Bengaluru campus for a two-day briefing on how it&#8217;s incorporating artificial intelligence (AI) into many of its business solutions, including Azure, Power BI, Teams, and Office 365. In addition to letting a few of its business partners explain how these AI-enabled services help them, the Redmond-based software giant demonstrated its Garage-developed apps such as Kaizala, Seeing AI, and Soundscape.</p>



<p>In a style quite similar to Microsoft&#8217;s, Google invited us to its Roppongi Hills office in Tokyo earlier this week for a one-day briefing titled “Solve… with AI”. The briefing was headed by Jeff Dean, a Senior Fellow and AI Lead at Google. While Microsoft&#8217;s briefing on AI mostly revolved around solutions that tackle IT business challenges, Google&#8217;s briefing addressed solutions aimed at the “social good”. Product leads from Google AI explained how the company&#8217;s technology is being put to use in areas like healthcare, environmental conservation, and agriculture, and Google invited a few of its business partners to add inputs and examples during the briefing.</p>



<h2 class="wp-block-heading"><strong>Introduction</strong></h2>



<p>The briefing began with Dean delivering the keynote address, in which he explained the basics of machine learning (ML), a subset of AI that involves training a computer to recognise patterns from examples rather than programming it with specific rules. He explained how neural networks built from relatively simple mathematical functions can be trained to identify patterns that are too vast or too complex for humans to spot, and that ML models are developed for exactly this purpose.</p>
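


<p>To make that “patterns by example” idea concrete, here is a minimal, purely illustrative sketch (not tied to any Google model) that uses the open-source TensorFlow library to fit a tiny neural network to example data instead of hand-coding the rule:</p>



<pre class="wp-block-code"><code># Minimal sketch of "learning by example": fit a tiny neural network
# to noisy samples of y = 2x + 1 instead of hard-coding that rule.
# Purely illustrative; not related to any specific Google model.
import numpy as np
import tensorflow as tf

# Example data: inputs and the outputs we want the model to reproduce.
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1).astype("float32")
y = 2.0 * x + 1.0 + np.random.normal(scale=0.05, size=x.shape).astype("float32")

# A small stack of simple mathematical functions (dense layers).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The "training by example" step: weights are adjusted to fit the data.
model.fit(x, y, epochs=200, verbose=0)

print(model.predict(np.array([[0.5]], dtype="float32")))  # roughly 2*0.5 + 1 = 2.0
</code></pre>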



<p>Apart from employing ML models in its own products, including Search, Photos, Translate, Gmail, YouTube, and Chrome, Google offers ML tools, along with some reference implementation material, to researchers and developers building AI-enabled software. Examples of such tools include the open-source TensorFlow software library, the Cloud ML platform, and the Cloud Vision, Cloud Translate, Cloud Speech, and Cloud Natural Language APIs.</p>
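


<p>As a rough sketch of what using one of these tools can look like, the snippet below labels an image with the Cloud Vision API through its Python client library. It assumes the google-cloud-vision package and application credentials are already set up, and exact class names can differ between library versions:</p>



<pre class="wp-block-code"><code># Rough sketch of labelling an image with the Cloud Vision API.
# Assumes the google-cloud-vision client library is installed and
# application credentials are configured; details vary by version.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
</code></pre>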



<p>Dean used the example of an air quality monitoring tool called Air Cognizer to demonstrate how TensorFlow is used in everyday mobile app development. Air Cognizer, an app developed in India as part of Celestini Project India 2018, helps estimate the air quality of the surrounding area from a picture taken through the Android device&#8217;s camera. Dean went on to say that this was only one example of developers and researchers using Google&#8217;s machine learning tools to create AI-enabled apps and services. After Dean&#8217;s introduction, other Google AI team leaders took the stage one by one to talk about other areas in which Google&#8217;s ML efforts are making a difference.</p>
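


<p>Air Cognizer&#8217;s own model isn&#8217;t shown here, but on-device inference with a TensorFlow Lite image model generally follows a pattern like the hypothetical sketch below; the model file and the air-quality categories are placeholders:</p>



<pre class="wp-block-code"><code># Hypothetical sketch of on-device-style inference with TensorFlow Lite:
# feed a photo to a converted image model and read back a predicted
# air-quality category. The model file and categories are placeholders.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="air_quality.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the photo to the model's expected input shape and normalise it.
height, width = input_details[0]["shape"][1:3]
img = Image.open("sky_photo.jpg").resize((int(width), int(height)))
x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]

categories = ["good", "moderate", "unhealthy"]  # placeholder labels
print(categories[int(np.argmax(scores))])
</code></pre>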



<h2 class="wp-block-heading"><strong>Healthcare</strong></h2>



<p>Lily Peng, Product Manager for Google Health, came on stage after Dean&#8217;s introduction to talk about how Google&#8217;s AI ventures help in the field of healthcare. “We believe that technology can have a big impact in medicine, helping democratize access to care, returning attention to patients and helping researchers make scientific discoveries,” she said during her presentation. She supported her statement by citing three specific areas in which Google&#8217;s ML models are seeing success: lung cancer screening, breast cancer metastases detection, and diabetic eye disease detection.</p>



<p>Google&#8217;s ML model can, according to the company, analyse CT scans and predict lung malignancies in cancer screening tests. In tests conducted by Google, the company&#8217;s model detected 5 percent more cancer cases while also reducing false positives by over 11 percent compared to radiologists. According to Google, early diagnosis can go a long way in treating the deadly disease, but over 80 percent of lung cancers are not caught early.</p>



<p>In breast cancer metastases detection, Google says its ML model can find 95 percent of cancer lesions in pathology images, whereas pathologists can generally detect only around 73 percent. The model can also scan medical slides, each of which can be up to 10 gigapixels in size, more thoroughly, and Google says it fares better than doctors when it comes to false positives. Google also says it has found that the combination of pathologists and AI is more accurate than either alone.</p>
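


<p>To give a sense of why 10-gigapixel slides are challenging, pathology models typically score a slide patch by patch rather than all at once. The simplified, hypothetical sketch below shows only that tiling step; the classifier itself is a stand-in:</p>



<pre class="wp-block-code"><code># Simplified, hypothetical sketch of patch-wise scoring for a huge slide:
# slide a fixed-size window over the image and keep a score per patch.
# `model` stands in for a trained classifier; it is a placeholder here.
import numpy as np

def score_slide(slide, model, patch=256, stride=256):
    """Return a grid of per-patch scores for one slide image."""
    h, w = slide.shape[:2]
    scores = []
    for top in range(0, h - patch + 1, stride):
        row = []
        for left in range(0, w - patch + 1, stride):
            tile = slide[top:top + patch, left:left + patch]
            row.append(float(model(tile)))  # score for this patch
        scores.append(row)
    return np.array(scores)

# Usage with a dummy "model" on a small random image:
dummy = lambda tile: tile.mean() / 255.0
heatmap = score_slide(np.random.randint(0, 255, (1024, 1024, 3)), dummy)
print(heatmap.shape)  # (4, 4) grid of patch scores
</code></pre>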



<p>Google says that, with the help of its sister company Verily, it&#8217;s becoming increasingly successful in treating diabetic retinopathy. The company is currently piloting the use of its ML model for detecting cases of diabetic retinopathy in India and Thailand. Google believes that a shortage of doctors and special equipment in many places is one of the reasons the disease isn&#8217;t caught early, leading to lifelong blindness among patients.</p>



<h2 class="wp-block-heading"><strong>Environmental conservation</strong></h2>



<p>Julie Cattiau, a Product Manager at Google AI, explained how wildlife on the planet has declined by 58 percent over the past half-century. According to her, Google&#8217;s AI technology is currently helping conservationists track the sound of humpback whales, an at-risk marine species, in an effort to keep them from being lost to extinction. In one bioacoustics project, Google has apparently partnered with NOAA (the National Oceanic and Atmospheric Administration), which has collected over 19 years&#8217; worth of underwater audio data so far.</p>



<p>Google says that it was able to train its neural network (or “whale classifier”) to identify the call of a humpback whale within that 19-year audio data set. During her presentation, Cattiau said that this was a big challenge for the researchers, partly because the sound of a humpback whale can easily be mistaken for that of another type of whale or of ships passing by. Google believes its AI technology was successful and helpful in the project because listening for the call of a whale in a data set that vast is a task that would take a human being an inordinate amount of time to complete.</p>
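


<p>The general bioacoustics recipe described above, turning chunks of audio into spectrograms and scoring them with a classifier, can be sketched roughly as follows. The shapes and the untrained model here are purely illustrative, not Google&#8217;s actual whale classifier:</p>



<pre class="wp-block-code"><code># Rough, illustrative sketch of the bioacoustics approach: convert a
# chunk of underwater audio into a spectrogram and score it with a
# small, untrained CNN standing in for a "whale classifier".
import tensorflow as tf

def to_spectrogram(waveform):
    """float32 audio samples -> log-magnitude spectrogram with a channel dim."""
    stft = tf.signal.stft(waveform, frame_length=1024, frame_step=512)
    return tf.math.log(tf.abs(stft) + 1e-6)[..., tf.newaxis]

classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 513, 1)),             # time x frequency x channel
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of a whale call
])

audio = tf.random.normal([10 * 16000])                # placeholder: 10 s at 16 kHz
spec = to_spectrogram(audio)[tf.newaxis, ...]         # add a batch dimension
print(classifier(spec).numpy())                       # untrained score, illustrative only
</code></pre>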



<p>Topher White, the CEO of Rainforest Connection, was one of the many partners invited by Google to participate in the briefing. Using proprietary technology, Rainforest Connection helps prevent illegal deforestation by listening for the sounds of chainsaws and logging trucks in rainforests across ten countries and alerting local authorities. Its technology involves refurbished, solar-charged Android smartphones that use Google&#8217;s TensorFlow to analyse the audio data in real time from within a rainforest. According to White, deforestation is a bigger cause of climate change than pollution caused by vehicles.</p>



<p>Febriadi Pratama, the Co-Founder of the Gringgo Indonesia Foundation, was another of the partners invited by Google to the briefing. The foundation, a grantee of the Google AI Impact Challenge, is currently using Google&#8217;s ML models and image recognition to identify types of waste material in the Indonesian city of Denpasar. Pratama said during his speech that the project was effectively helping the foundation clean up plastic waste in a city that has no formal system for waste management.</p>



<h2 class="wp-block-heading"><strong>Agriculture</strong></h2>



<p>Raghu Dharmaraju, Vice President of Products &amp; Programs at the Wadhwani Institute for Artificial Intelligence, was also one of the partners invited by Google to participate in the briefing. The institute uses a proprietary Android app along with pheromone traps to scan samples of crops for signs of pests, which, on a large farm in India, can potentially wreck a farmer&#8217;s harvest. The app uses ML models developed by Google. In his presentation, Dharmaraju said that the solution developed by the institute was notably effective in detecting pink bollworms in cotton crops in India.</p>



<h2 class="wp-block-heading"><strong>Flood forecasting</strong></h2>



<p>Sella Nevo, a Software Engineering Manager at Google AI, took the stage to talk about the company&#8217;s flood forecasting initiative. According to him, dated, low-resolution elevation maps make it hard to predict floods in any given area. SRTM (the Shuttle Radar Topography Mission), a widely used source of elevation maps, provides data that&#8217;s nearly two decades old, he said during his presentation. In a pilot project started last year in Patna, Google was able to produce high-resolution elevation maps using its ML models, with the help of data taken from satellites and other sources, in order to forecast floods. It was then able to alert its users about a flood incident at Gandhi Ghat, with the flood alert sent out as a notification on smartphones.</p>
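


<p>Google&#8217;s actual forecasting pipeline is far more sophisticated, but the role elevation data plays can be illustrated with a deliberately simplified “bathtub” sketch: given an elevation grid and a forecast water level, flag the cells that would sit below the waterline.</p>



<pre class="wp-block-code"><code># Deliberately simplified illustration (not Google's method): with a
# high-resolution elevation grid, a forecast water level can be turned
# into a rough inundation map by flagging cells below that level.
import numpy as np

def inundation_map(elevation_m, water_level_m):
    """Boolean grid: True where the terrain sits below the forecast level."""
    return water_level_m > elevation_m

# Toy elevation grid (metres) and a forecast level of 3.0 m.
dem = np.array([
    [5.0, 4.0, 2.5, 2.0],
    [4.5, 3.5, 2.2, 1.8],
    [4.0, 3.0, 2.0, 1.5],
])
flooded = inundation_map(dem, 3.0)
print(flooded.sum(), "of", flooded.size, "cells flooded")
</code></pre>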



<p>“The number one issue is access to data, and we have tried to tackle that. With different types of data, we find different solutions. So, for the elevation maps, the data just doesn&#8217;t exist. So we worked on different algorithms to produce and create that data for stream gauge measurements. For various satellite data, we purchased and aggregated most of it,” Nevo told us in an interview. According to him, Google is trying to produce elevation maps that can be updated every year, unlike the ones given out by SRTM.</p>



<h2 class="wp-block-heading"><strong>Accessibility</strong></h2>



<p>Sagar Savla, a Product Manager at Google AI, took the stage to talk about Google&#8217;s Live Transcribe app. Currently available in 70 languages, the app helps deaf and hard-of-hearing users communicate with others by transcribing speech in the real world to on-screen text. The app is built using Google&#8217;s ML models to ensure precision in its transcription; for example, it can tell whether the user means “New Jersey” or “a new jersey” depending on the context of the sentence. Talking about the app and its development, Savla said that he had used it with his grandmother, who, despite being hard of hearing, was able to join in on the conversation using Live Transcribe in Gujarati.</p>



<p>Julie Cattiau returned to the stage to talk about Project Euphonia, a Google initiative dedicated to building speech models that are trained to understand people with impaired speech. The initiative could, in the future, combine speech with computer vision, she said during her presentation. For example, people who suffer from speech impairments caused by neurological conditions could use gestures such as blinking to communicate with others. Cattiau said that the company&#8217;s ML models are currently being trained to recognise more gestures.</p>



<h2 class="wp-block-heading"><strong>Cultural Preservation</strong></h2>



<p>Tarin Clanuwat, a Project Researcher at the ROIS-DS Center for Open Data in the Humanities, went on stage to talk about an ancient cursive Japanese script called Kuzushiji. Although there are millions of books and over a billion historical documents recorded in Kuzushiji, less than 0.01 percent of the population can read it fluently today, she said during her presentation. She fears that this cultural heritage is at risk of becoming inaccessible in the future owing to the script&#8217;s disuse in modern texts.</p>



<p>Google says that Clanuwat and her fellow researchers trained an ML model to recognise Kuzushiji characters and transcribe them into modern Japanese. According to Google, the model takes approximately two seconds to transcribe an entire page and roughly an hour to transcribe an entire book. According to test data, the model is currently capable of recognising about 2,300 character types with an average accuracy of 85 percent. Clanuwat and her team are working towards improving the model in order to preserve the cultural heritage captured in Kuzushiji texts.</p>



<h2 class="wp-block-heading"><strong>Summary</strong></h2>



<p>Google seems convinced it’s headed in the right direction when it comes to applying machine learning to social causes. In the future, we can expect Google to take on more such projects, in which neural networks are trained to make sense of data sets that hold the keys and clues to problems that have so far proved insoluble. At the same time, more and more developers and researchers should be able to incorporate Google’s open-source TensorFlow library into their projects, as long as Google continues to provide support and reference material for it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/">Here&#8217;s how Google is putting AI to work in healthcare, environmental conservation, agriculture and more</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/heres-how-google-is-putting-ai-to-work-in-healthcare-environmental-conservation-agriculture-and-more/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
