<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>smartphones Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/smartphones/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/smartphones/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 02 Jul 2021 10:00:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>TOP 7 AI SMARTPHONES FOR AI ADMIRERS AVAILABLE IN THE MARKET</title>
		<link>https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/</link>
					<comments>https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 02 Jul 2021 10:00:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ADMIRERS]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AVAILABLE]]></category>
		<category><![CDATA[Market]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14705</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Analytics Insight presents the Best AI Smartphones with their Unique AI features Smartphones, with an AI combo, are one of the biggest trends in the <a class="read-more-link" href="https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/">TOP 7 AI SMARTPHONES FOR AI ADMIRERS AVAILABLE IN THE MARKET</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">Analytics Insight presents the Best AI Smartphones with their Unique AI features</h2>



<p>Smartphones with an AI combo are one of the biggest trends in the market today. AI has many uses on a handheld device, though its benefits were limited at first. Mobile manufacturers are competing to develop AI features for smartphones according to users’ needs, and brands keep coming up with new AI features to attract customers.</p>



<p>According to a 2018 CyberMedia Research report, as many as three in five smartphones shipped in 2020 would have AI capabilities, a prediction that has turned out to be true. Let’s look at the top seven AI smartphones available in the market.</p>






<h4 class="wp-block-heading">iPhone 12 Pro</h4>



<p>Apple released the new edition of its iPhones last October. The iPhone 12 Pro is built around the A14 Bionic chip, manufactured with 5nm process technology. It has a quad-core GPU, a 6-core CPU, and faster machine learning accelerators. The A14 Bionic features a 16-core Neural Engine that delivers up to 80% faster machine learning performance and can complete 11 trillion operations per second; Apple calls it the fastest chip in a smartphone.</p>



<p>The distinguishing feature of iPhones is their camera, and the 12 Pro did not compromise on camera quality. The three-lens setup remains in this new edition too, now joined by a newly introduced LiDAR sensor and AI-powered image processing on the A14 Bionic that help the user capture quality images even in low-light conditions.</p>



<h4 class="wp-block-heading">Google Pixel 5</h4>



<p>The Pixel 5 has become one of the most talked-about AI smartphones Google offers, and its latest feature, ‘Hold For Me’, is gaining wide popularity. If you dial a toll-free number and are put on hold, Google Assistant waits on the call on your behalf; the moment it hears a human voice on the other side, it alerts you and asks the other person to stay on the line. The feature is powered by Google Duplex, which can recognize when a call is on hold and identify human voices.</p>



<h4 class="wp-block-heading">Samsung Galaxy S20</h4>



<p>Early this year, Samsung released three new AI smartphones: the Galaxy S20, Galaxy S20+, and Galaxy S20 Ultra. In these phones Samsung unveiled a new AI-powered camera system, which the company claims has the biggest image sensor yet in a smartphone. Its Space Zoom technology uses AI-powered zoom, a combination of Hybrid Optic Zoom and Super Resolution Zoom.</p>



<p>With this feature, users can zoom up to 30x on the S20 and S20+. The S20 Ultra uses a folded lens with AI-powered multi-image processing for a clearer view. These phones also have AI motion-analysis features to enhance the user’s experience.</p>



<h4 class="wp-block-heading">OnePlus 8T</h4>



<p>The OnePlus 8T offers a quad-camera system with a 48MP main camera, a 16MP lens, and a monochrome lens for a studio-level photography experience. It also has an advanced video stabilization system for producing vivid clips, and its AI-based Video Portrait mode detects people in the frame and blurs the background for a natural portrait effect.</p>



<h4 class="wp-block-heading">Huawei Mate 40 Pro</h4>



<p>The Huawei Mate 40 Pro is one of the most talked-about AI smartphones thanks to its ‘reimagined’ NPU, which has two big cores and one tiny core. Huawei claims the upgraded NPU pushes the phone’s AI capabilities to new heights, with improved camera quality, gesture controls, and object recognition. The most popular feature is its AI-enabled gesture controls, which let users perform touch-free interactions such as browsing through pictures or flipping pages in an e-book.</p>



<h4 class="wp-block-heading">Vivo Y51</h4>



<p>Vivo’s new offering is its Y series, a line of mid-range AI smartphones whose AI capabilities focus largely on the camera and battery. The Vivo Y51 has an AI-powered triple camera with multiple shooting modes and photo-processing algorithms, and its 48MP resolution, Vivo claims, helps capture clear pictures day and night. On the battery front, the Y51 has an AI power-saving battery that lasts almost 14 hours on a single charge, and according to the company the phone also offers a reverse-charging facility, which it says is unique.</p>



<h4 class="wp-block-heading">Oppo Reno4</h4>



<p>The Oppo Reno4 is one of the AI smartphones that uses a smart sensor to identify the phone’s owner easily. The AI-enhanced sensor also helps decide how much content should be shown in the notification bar and enables touchless operation. Other features include an AI assistant, privacy and security tools, and touchless gesture controls.</p>



<p></p>
<p>The post <a href="https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/">TOP 7 AI SMARTPHONES FOR AI ADMIRERS AVAILABLE IN THE MARKET</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/top-7-ai-smartphones-for-ai-admirers-available-in-the-market/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</title>
		<link>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/</link>
					<comments>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 06:11:13 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[PAPERS]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13575</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ Artificial intelligence research is increasingly influencing the use of technology From our smartphones to cars and homes, artificial intelligence is increasingly touching our every <a class="read-more-link" href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading"><strong>Artificial intelligence research is increasingly influencing the use of technology</strong></h2>



<p>From our smartphones to our cars and homes, artificial intelligence increasingly touches every walk of life. Applications of artificial intelligence have already proved disruptive across diverse industries, including manufacturing, healthcare, and retail. Given this progress, artificial intelligence has clearly evolved impressively in recent years. Research around the technology has also surged and is shaping the way individuals and businesses interact with AI. Analytics Insight has listed 10 artificial intelligence research papers worth looking at.</p>



<h4 class="wp-block-heading"><strong>Adam: A Method for Stochastic Optimization</strong></h4>



<p>Author(s): Diederik P. Kingma, Jimmy Ba</p>



<p>Adam is an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, computationally efficient, invariant to diagonal rescaling of the gradients, and has modest memory requirements. It is well suited to problems that are large in terms of data and parameters, as well as to non-stationary objectives and problems with very noisy and/or sparse gradients. Adam has been adopted as a default optimization algorithm for the millions of neural networks people train today.</p>
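As a rough illustration of the update rule described above (a minimal NumPy sketch, not the paper's reference implementation), each step maintains exponential moving averages of the gradient and its square, corrects their initialization bias, and scales the step accordingly:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on parameter theta given its gradient."""
    m = b1 * m + (1 - b1) * grad       # first moment (moving average of gradients)
    v = b2 * v + (1 - b2) * grad**2    # second moment (moving average of squares)
    m_hat = m / (1 - b1**t)            # bias correction for zero initialization
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2, whose gradient is 2x, starting from x = 3
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
```

After a couple of thousand steps `x` settles close to the minimum at 0, illustrating why the per-parameter adaptive step size makes the method easy to use without hand-tuned schedules.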



<h4 class="wp-block-heading"><strong>Towards a Human-like Open-Domain Chatbot</strong></h4>



<p>Author(s): Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le</p>



<p>This research paper presents Meena, a multi-turn open-domain chatbot that is trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize the perplexity of the next token. The researchers also propose a new human evaluation metric to capture key elements of a human-like multi-turn conversation, dubbed Sensibleness and Specificity Average (SSA).</p>



<h4 class="wp-block-heading"><strong>Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift</strong></h4>



<p>Author(s): Sergey Ioffe, Christian Szegedy</p>



<p>Training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training as the parameters of the previous layers change. The researchers refer to this phenomenon as "internal covariate shift" and address it by normalizing layer inputs. Batch Normalization allows much higher learning rates, makes training less sensitive to initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, it achieves the same accuracy with 14 times fewer training steps and surpasses the original model by a significant margin.</p>
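The normalization step itself is simple; a minimal sketch of the training-time forward pass (the learnable scale gamma and shift beta are as in the paper, and running statistics for inference are omitted):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)                   # per-feature batch mean
    var = x.var(axis=0)                   # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps) # zero mean, unit variance per feature
    return gamma * x_hat + beta           # restore representational power

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(64, 10))   # batch of 64 examples, 10 features
y = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
```

Whatever the incoming mean and variance, each feature of `y` is renormalized, which is what keeps the distribution of a layer's inputs stable as earlier layers change.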



<h4 class="wp-block-heading"><strong>Large-scale Video Classification with Convolutional Neural Networks</strong></h4>



<p>Author(s): Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei</p>



<p>Convolutional Neural Networks (CNNs) are a powerful class of models for image recognition problems. Encouraged by these results, the researchers provide an extensive empirical evaluation of CNNs on large-scale video classification, using a new dataset of 1 million YouTube videos belonging to 487 classes. Published at the IEEE Conference on Computer Vision and Pattern Recognition, this paper has been cited 865 times, with a HIC score of 24 and a CV of 239.</p>



<h4 class="wp-block-heading"><strong>Beyond Accuracy: Behavioral Testing of NLP models with CheckList</strong></h4>



<p>Author(s): Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh</p>



<p>In this paper, the authors point out the inadequacies of existing approaches to evaluating the performance of NLP models. Inspired by the principles of behavioural testing in software engineering, they introduce CheckList, a task-agnostic methodology for testing NLP models. It comprises a matrix of general linguistic capabilities and test types that facilitates comprehensive test ideation, along with a software tool for quickly producing a large and diverse set of test cases.</p>
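To give a flavour of one of CheckList's test types, an invariance (INV) test applies a label-preserving perturbation and expects the prediction to stay the same. The sketch below is purely illustrative: the toy "model" and the perturbation are invented here, not from the paper or its tool.

```python
def invariance_test(model, texts, perturb):
    """CheckList-style invariance test: a label-preserving perturbation
    should not change the model's prediction; return the inputs where it does."""
    failures = []
    for text in texts:
        if model(text) != model(perturb(text)):
            failures.append(text)
    return failures

# toy sentiment "model" and a synonym-style perturbation (illustrative only)
model = lambda s: "pos" if "good" in s else "neg"
perturb = lambda s: s.replace("movie", "film")
failures = invariance_test(model, ["good movie", "bad movie"], perturb)
```

An empty `failures` list means the model passed this behavioural check; accuracy on a held-out set alone would never surface such failures.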



<h4 class="wp-block-heading"><strong>Generative Adversarial Nets</strong></h4>



<p>Author(s): Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio</p>



<p>The authors in this AI research paper propose a new framework for estimating generative models via an adversarial process. They simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.</p>
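This adversarial game can be sketched as two opposing loss functions computed from the discriminator's outputs (a minimal illustration; the generator and discriminator networks themselves are omitted, and the generator loss below is the non-saturating variant the paper also suggests):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """GAN losses from discriminator probabilities in (0, 1).

    D is trained to assign 1 to real samples and 0 to generated ones;
    G is trained to make D assign 1 to its samples (non-saturating form).
    """
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

# a confident discriminator: its own loss is low, the generator's is high
d_loss, g_loss = gan_losses(np.array([0.99]), np.array([0.01]))
```

The two losses pull in opposite directions, which is exactly the minimax dynamic described above: as G improves, `d_fake` rises and the roles reverse.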



<h4 class="wp-block-heading"><strong>Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</strong></h4>



<p>Author(s): Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun</p>



<p>Advances like SPPnet and Fast R-CNN have reduced the running time of state-of-the-art detection networks, exposing region proposal computation as the bottleneck. In this context, the authors introduce the Region Proposal Network (RPN), a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals.</p>



<h4 class="wp-block-heading"><strong>A Review on Multi-Label Learning Algorithms</strong></h4>



<p>Author(s): Min-Ling Zhang, Zhi-Hua Zhou</p>



<p>Multi-label learning studies the problem where each example is represented by a single instance while being associated with a set of labels simultaneously. While significant progress has been made on this machine learning paradigm in the past decade, this paper provides a timely review of the area with an emphasis on state-of-the-art multi-label learning algorithms.</p>



<h4 class="wp-block-heading"><strong>Neural Machine Translation by Jointly Learning to Align and Translate</strong></h4>



<p>Author(s): Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio</p>



<p>Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, it aims to build a single neural network that can be jointly tuned to maximize translation performance. The models proposed for neural machine translation often belong to a family of encoder-decoders: an encoder encodes a source sentence into a fixed-length vector from which a decoder generates a translation.</p>
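The paper's key addition to this encoder-decoder setup is an attention mechanism that lets the decoder look back at all encoder states rather than a single fixed-length vector. A minimal NumPy sketch of additive (Bahdanau-style) attention, with illustrative shapes and randomly initialized weights:

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    """Score each encoder state against the decoder query, softmax the
    scores into alignment weights, and return the context vector."""
    scores = np.tanh(query @ W_q + keys @ W_k) @ v  # one score per source position
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()               # softmax over source positions
    return weights @ keys, weights                  # context vector, alignments

rng = np.random.default_rng(0)
T, d = 5, 8                        # source length, hidden size
keys = rng.normal(size=(T, d))     # encoder hidden states, one per source word
query = rng.normal(size=d)         # current decoder state
context, weights = additive_attention(
    query, keys, rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d))
```

The alignment weights form a distribution over source positions, so each decoding step gets a context vector tailored to the word being produced instead of squeezing the whole sentence into one vector.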



<h4 class="wp-block-heading"><strong>Mastering the game of Go with deep neural networks and tree search</strong></h4>



<p>Author(s): David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, and others</p>



<p>The paper introduces a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. Go has long been viewed as the most challenging of classic games for artificial intelligence. The deep neural networks are trained by a novel combination of supervised learning from human expert games and reinforcement learning from games of self-play.</p>
<p>The post <a href="https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/">10 MUST LOOK ARTIFICIAL INTELLIGENCE RESEARCH PAPERS SO FAR</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/10-must-look-artificial-intelligence-research-papers-so-far/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ARTIFICIAL INTELLIGENCE IS MAKING 3D HOLOGRAMS POSSIBLE ON SMARTPHONES</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 17 Mar 2021 06:19:30 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[3D]]></category>
		<category><![CDATA[Holograms]]></category>
		<category><![CDATA[making]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13556</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ This is another feather in the cap for artificial intelligence. A part of what we see in science fiction movies will soon become a <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/">ARTIFICIAL INTELLIGENCE IS MAKING 3D HOLOGRAMS POSSIBLE ON SMARTPHONES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h2 class="wp-block-heading">This is another feather in the cap for artificial intelligence.</h2>



<p>A part of what we see in science fiction movies will soon become reality, thanks to artificial intelligence. Every time you watched people talking to holograms in a sci-fi movie and thought “that would be awesome to have”, you may have been closer to that future than you realised. Smartphones will soon be able to create photorealistic 3D holograms with an AI model developed by a research team at MIT. The system determines the best way to generate holograms from a sequence of input images, a fascinating technology that could have applications for VR and AR headsets. Unlike conventional 3D and VR displays, which create an illusion of depth that can cause nausea and headaches, a holographic display can be viewed without straining the eyes.</p>



<p>A major challenge in creating holographic media is handling the data needed to create the holograms. Every hologram involves huge amounts of data, which is what creates its “depth”; this is why generating a hologram demands lots of computing power. To simplify the process, researchers at MIT applied deep convolutional neural networks to the problem, creating a network capable of quickly generating holograms from image data.</p>



<h4 class="wp-block-heading"><strong>Past Vs Present</strong></h4>



<p>The traditional method of generating holograms creates many chunks of the hologram and then uses scientific simulations to combine the chunks into a complete picture. This process is power-intensive and time-consuming. According to IEEE Spectrum, the method designed by the MIT team is quite different: it uses deep learning networks to slice images into chunks that can be recompiled into holograms using fewer “slices” than traditional methods, exploiting the convolutional neural network’s ability to analyze images and separate them into discrete chunks. The new method is far less power-intensive.</p>



<p>To build this artificial intelligence hologram generator, the MIT team began by creating a database of approximately 4,000 computer-generated images, each paired with a matching 3D hologram. On this dataset, the convolutional neural network was trained to learn how each image relates to its hologram. When the system was then given unseen data with depth information, it was able to generate new holograms from it. The depth information is supplied by a lidar sensor or a multi-camera setup and rendered as a computer-generated image; some iPhones already have these components, which makes it possible to generate holograms when connected to the right type of display.</p>



<p>The new artificial intelligence hologram system needs less memory than traditional methods. It can create colored 3D holograms at 60 frames per second with a resolution of 1920 x 1080 using approximately 620 KB of memory, all while running on a single graphics processing unit (GPU). The MIT team was also able to run the technology on an iPhone 11, creating one hologram per second, and on a Google Edge TPU, which produced two holograms per second. This suggests the system could find applications in volumetric 3D printing or in the design of holographic microscopes.</p>



<p>This is just the inception of this technology. In the future, with further advancements, this technology might revolutionize our way of communication and perceiving visual data. It surely is an exciting time for the tech world.</p>



<p></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/">ARTIFICIAL INTELLIGENCE IS MAKING 3D HOLOGRAMS POSSIBLE ON SMARTPHONES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-is-making-3d-holograms-possible-on-smartphones/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google AI Team Explains How Its Audio Recorder App Leverages On-Device Machine Learning</title>
		<link>https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Dec 2019 07:49:07 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Pixel]]></category>
		<category><![CDATA[smartphones]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5773</guid>

					<description><![CDATA[<p>Source: digitalinformationworld.com At the beginning of this month, the Recorder app of Pixel 4 was made available for older Google phones as well. The company has now explained the <a class="read-more-link" href="https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/">Google AI Team Explains How Its Audio Recorder App Leverages On-Device Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: digitalinformationworld.com</p>



<p>At the beginning of this month, the Pixel 4’s Recorder app was made available for older Google phones as well. The company has now explained the machine learning behind the on-device transcription tool.</p>



<p>A post on the Google AI blog describes the rationale for creating the Recorder app. Speech is one of the most effective forms of communication, but there are not enough tools for capturing and organizing it. The company wants to make ideas and conversations easy to search and access.</p>



<p>According to Google, over the last two decades it has made it easier to search text, visual content, maps, videos, and even jobs. Still, much important information is recorded and shared as speech, such as conversations, lectures, and interviews, and it is often difficult to extract the required information from hours of recordings.</p>



<p>The Recorder app has three parts. Transcription is powered by an all-neural on-device automatic speech recognition model, first introduced in Gboard in March this year. The same model drives “faster voice typing” in the Android keyboard, works offline once downloaded, and transcribes character by character.</p>



<p>Hours-long sessions can be recorded in the Recorder, and the speech recognition model computes a mapping from each word to its timestamp. This lets users click any word in the transcript and listen from exactly that point.</p>



<p>Though text is a convenient way to present information, visuals and sounds are at times more useful. Each bar of the waveform covers 50 milliseconds and is colored according to the dominant sound in that period.</p>



<p>Audio is presented as a colored waveform in which each color identifies a different sound category. Convolutional Neural Networks (CNNs), trained on published audio datasets, are used to distinguish the sounds and classify each audio frame.</p>



<p>Also, each time a recording ends, Google suggests three tags that can be used to form a title for the recording instead of the day and time. This helps the Recorder recognize the kind of content while transcribing it.</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/">Google AI Team Explains How Its Audio Recorder App Leverages On-Device Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-ai-team-explains-how-its-audio-recorder-app-leverages-on-device-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google AI allows smartphones to interpret sign language</title>
		<link>https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/</link>
					<comments>https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 22 Aug 2019 05:03:23 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4387</guid>

					<description><![CDATA[<p>Source: theinquirer.net GOOGLE HAS DESIGNED&#160;a series of algorithms&#160;that&#160;allow smartphone users to interpret sign language using their phone camera. Unusually for Google, it has opted not to create an <a class="read-more-link" href="https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/">Google AI allows smartphones to interpret sign language</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: theinquirer.net</p>



<p><strong>GOOGLE HAS DESIGNED</strong>&nbsp;a series of algorithms&nbsp;that&nbsp;allow smartphone users to interpret sign language using their phone camera.</p>



<p>Unusually for Google, it has opted not to create an app this time but has released the code as open source for developers to make their own.</p>



<p>This approach will be most recognisable to Londoners: the city has no official transport app, but hundreds of third-party apps use its comprehensive data.</p>



<p>The sign-language system uses a combination of a palm detector model known as BlazePalm, a hand landmark model which works a bit like a fortune teller, only not much, and a gesture recogniser which can tell exactly what the hand is doing.</p>



<p>It works by dividing the hand into 21 grid points which give it just enough data to be able to tell a twist from a bend from a shrug.</p>



<p>The move has been cautiously welcomed by the deaf community, but some are questioning if machine learning can really pick up on the nuances of sign language, during complex conversations.</p>



<p>It may seem very literal to you and me, but sign language has as much subtlety as spoken language, and if there&#8217;s one thing we know about artificial intelligence, it&#8217;s that it&#8217;s totally rubbish at context, which could mean it gets the whole sentence about-face. And let&#8217;s not get started on idioms for now, shall we?</p>



<p>Additionally, there are local variations of sign language, different dialects like BSL and Makaton &#8211; a whole bunch of non-literal aspects which will all need to be incorporated and perfected before this becomes a viable commercial app.</p>



<p>Google is not alone. Microsoft is already working on something similar for its translation service, whilst other private companies continue to experiment in bridging the gap between verbal and non-verbal communications.</p>



<p>Rumours that the technology was originally built to help politicians in the UK tell their a*se from their elbow are as yet unconfirmed. µ</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/">Google AI allows smartphones to interpret sign language</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-ai-allows-smartphones-to-interpret-sign-language/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Nokia 2.2 running Android One launched for Rs 6,999</title>
		<link>https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/</link>
					<comments>https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 07 Jun 2019 06:41:27 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Android One]]></category>
		<category><![CDATA[HMD Global]]></category>
		<category><![CDATA[Nokia 2.2]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=3593</guid>

					<description><![CDATA[<p>Source:- businesstoday.in HMD Global, the official license of Nokia smartphones, has launched an AI (artificial intelligence) focused smartphone in the entry-level segment. The Nokia 2.2, along with <a class="read-more-link" href="https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/">Nokia 2.2 running Android One launched for Rs 6,999</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source:- businesstoday.in</p>
<div id="story-maincontent" class="ad-blocker-ajax tbl-forkorts-article">
<div class="story-right relatedstory">
<p>HMD Global, the official licensee of Nokia smartphones, has launched an AI (artificial intelligence) focused smartphone in the entry-level segment. The Nokia 2.2, part of the Android One programme, will have AI-powered low-light imaging, AI-driven face unlock and a dedicated AI button. Running Android Pie out-of-the-box, this will be one of the first few devices to receive the Android Q upgrade. Available from June 11, the Nokia 2.2 will come in Tungsten Black and Steel at an introductory price of Rs 6,999 for the 2GB RAM and 16GB storage option and Rs 7,999 for the 3GB RAM and 32GB storage variant. After June 30, 2019, the former will be available for Rs 7,699 and the latter for Rs 8,699.</p>
<p>&#8220;Nokia 2.2 is the most accessible smartphone in the India market today that gives you the promise of Android One and brings you the best and most secure Android experience, one that gets better over time. I&#8217;m happy to share that Nokia 2.2 is being announced in India first globally and we&#8217;re not stopping at that,&#8221; said Ajey Mehta, Vice President and Country Head &#8211; India, HMD Global.</p>
<p>The Nokia 2.2 features a 5.71-inch HD+ edge-to-edge display with a selfie notch and 400-nit brightness. It houses a dedicated button for launching the Google Assistant. Running Android One, it is powered by a quad-core MediaTek A22 chipset and will be available in two variants &#8211; 2GB RAM with 16GB storage and 3GB RAM with 32GB storage. The Nokia 2.2 is Android Q ready and will receive two years of OS upgrades and three years of monthly security updates, ensuring access to all the latest innovations from Android.</p>
<p>There is a 13MP autofocus camera at the rear with a single flash and a 5MP selfie camera. HMD Global claims to have incorporated AI-powered low-light image fusion, which captures multiple images simultaneously and, through advanced algorithms, creates a single image with more light, greater detail and less noise. It packs in a 3000mAh battery and is accompanied by a 5W charger.</p>
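<p>HMD&#8217;s fusion algorithm is proprietary, but the core idea behind multi-frame imaging can be sketched in a few lines: random sensor noise differs from frame to frame, so averaging several aligned captures suppresses it while the true scene detail survives. The code below is a minimal illustration of that principle, not the phone&#8217;s implementation.</p>

```python
def fuse_frames(frames):
    """Average several aligned exposures pixel by pixel.

    Random sensor noise varies between frames, so the per-pixel mean is
    cleaner than any single capture. `frames` is a list of equally sized
    2-D grids (rows of pixel values) of the same scene.
    """
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [
        [sum(frame[y][x] for frame in frames) / n for x in range(width)]
        for y in range(height)
    ]
```

<p>Production pipelines also align frames and weight them to avoid ghosting from motion; plain averaging is just the simplest member of the family.</p>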
<p>Other features include face unlock and Google Lens. According to the company, the biometric face unlock uses advanced AI-driven deep learning algorithms and liveliness detection for an accurate and spoof-proof experience.</p>
<p>&#8220;We believe that the latest and greatest innovations in the industry should be available for everyone. With the Nokia 2.2, we&#8217;ve brought the pinnacle of AI experiences to more people than ever before. And including features like biometric face unlock with liveliness detection adding extra security to your phone, AI imaging, Google Lens and Google Assistant at the press of a button, we aim to revolutionise the way our fans interact with the phone. Nokia 2.2 joins our Android One family, and like all Nokia smartphones, offers an experience that stays fresh longer. With two years of OS updates and three years of monthly security updates guaranteed, Nokia 2.2 is Android Q ready and will just keep getting better,&#8221; said Juho Sarvikas, Chief Product Officer, HMD Global.</p>
</div>
</div>
<p>The post <a href="https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/">Nokia 2.2 running Android One launched for Rs 6,999</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/nokia-2-2-running-android-one-launched-for-rs-6999/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Rules to encourage well behaved artificial intelligence</title>
		<link>https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 23 Aug 2018 07:20:24 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[digital technology]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2777</guid>

					<description><![CDATA[<p>Source &#8211; cosmosmagazine.com My spine still shivers when I remember the nuclear stand-off between the Soviet Union and the United States in 1962. As a nine-year-old I felt <a class="read-more-link" href="https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/">Rules to encourage well behaved artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; cosmosmagazine.com</p>
<p>My spine still shivers when I remember the nuclear stand-off between the Soviet Union and the United States in 1962. As a nine-year-old I felt helpless in the face of two leaders poised to push the button.</p>
<p>It was MAD – mutually assured destruction – but sanity prevailed and by the end of the 1960s we had détente.</p>
<p>In the decades since I have felt comfortable with the dazzling march of technology that has reduced global poverty, given us longer lives, delivered the information superhighway and created my zero-emissions Tesla.</p>
<p>Yes, there are disappointments – the internet, for example, has not raised the calibre of conversation but instead has created echo chambers of bigotry and forums for lies and harassment.</p>
<p>But now for the first time since the 1960s something is tickling my worry beads: artificial intelligence. I fear AI’s capacity to undermine our human rights and civil liberties.</p>
<p>While AI has been in backroom development since the 1950s and increasingly implemented by businesses and government in the past few years, I believe 2018 will go down as the year the AI future arrived.</p>
<p>I am well aware of previous impressive developments such as an AI named <i>AlphaGo</i> beating the world Go champion, but I don’t play Go. I do, however, rely on my executive assistant. So this year, when Google publicly demonstrated a digital assistant named <i>Duplex</i> calling a hairdressing salon to make an appointment for its boss, speaking in a mellow female voice filled with human pauses and colloquialisms, I knew AI had arrived.</p>
<p>Shortly afterwards IBM demonstrated <i>Project Debater</i> arguing an unscripted topic against a skilled human. Some in the audience judged <i>Project Debater</i> the winner.</p>
<p>The simplest definition of AI is computer technology that can do tasks that ordinarily require human intelligence. More formally, AI is the combination of machine learning algorithms, big data and a training procedure. This mimics human intelligence: the combination of innate ability, access to knowledge and a teacher.</p>
<p>Also like humans, when it comes to AI there are the good, the bad and the ugly.</p>
<p>The good: digital assistants, medical AIs to diagnose cancer, satellite navigation that figures out the best way home and systems that somehow know that your credit card has been used fraudulently.</p>
<p>The bad: biases such as that discovered in the COMPAS risk-assessment software used to help judges in the US determine a sentence by forecasting the likelihood of a defendant reoffending. After two years of evaluation COMPAS was found to have overestimated re-offence rates for black defendants and underestimated re-offence rates for white defendants. Every human I know is biased, so why worry when an AI is biased? Because there is a good chance it will be replicated and sold by the millions, thus spreading the bias across the planet.</p>
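<p>The COMPAS finding boils down to comparing error rates across groups. As a hedged sketch with entirely made-up records &#8211; not ProPublica&#8217;s data &#8211; the disparity can be measured like this:</p>

```python
def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were flagged high-risk.

    Each record is (group, flagged_high_risk, actually_reoffended).
    A higher rate for one group than another is the kind of disparity
    the COMPAS evaluation reported.
    """
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0
```

<p>Replicated in a product sold by the millions, a gap like that is exactly how a local bias spreads across the planet.</p>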
<p>The ugly: think Orwell’s <i>1984</i>. Now look at the social credit score in China, where citizens are watched in the streets and monitored at home, losing points for littering or paying their bills late, and as a consequence being denied a bank loan or their right to travel.</p>
<p>So how can we utilise the good but avoid the bad and the ugly? We must actively manage the integration of AI into our human society like we have done with electricity, cars and medicines. Australia can lead the way, as we did for IVF by becoming the first country to collate and report on birth outcomes and the first to publish national ethics guidelines. To capture the benefits and avoid the pitfalls requires a public discussion. In July the Australian Human Rights Commission launched a project on human rights and digital technology. In my keynote speech I finished with the question: “What kind of society do we want to be?”</p>
<p>While the debate unfolds, here are a few starting suggestions.</p>
<p>First, adopt a voluntary, consumer-led certification standard for commercial AI akin to the Fairtrade stamp for coffee. I call it the ‘Turing Certificate’, in honour of Alan Turing, the persecuted father of AI. It won’t stop criminals and rogue states but it will help with the smartphones and home assistants we choose to purchase.</p>
<p>Second, adopt the ‘Golden Rule’ proposed by the head of Australia’s Department of Home Affairs, Michael Pezzullo: that no one should be deprived of their fundamental rights, privileges or entitlements by a computer rather than an accountable human.</p>
<p>Third, never forget that AI is not actually human. It is a technology. We made it. We are in charge. Hence I propose the ‘Platinum Rule’: that every AI should have an off switch.</p>
<p>The post <a href="https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/">Rules to encourage well behaved artificial intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/rules-to-encourage-well-behaved-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Will Artificial Intelligence override human intelligence and experiences?</title>
		<link>https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/</link>
					<comments>https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 18 Jul 2018 06:31:37 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[AI algorithms]]></category>
		<category><![CDATA[machine learning algorithms]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2632</guid>

					<description><![CDATA[<p>Source &#8211; yourstory.com Doomsday theorists say Artificial Intelligence and the machines that use it will destroy mankind. While that may be a stretch, there is some merit in the argument <a class="read-more-link" href="https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/">Will Artificial Intelligence override human intelligence and experiences?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; yourstory.com</p>
<p>Doomsday theorists say Artificial Intelligence and the machines that use it will destroy mankind. While that may be a stretch, there is some merit in the argument that AI will surely challenge human thinking and behaviour – to what outcome, that remains to be seen.</p>
<p>Take the example of a ride-hailing app – Uber, Ola or any other. You, the passenger, and the driver may know the route to a destination, but both are compelled to follow the one calculated by the AI engine or else the driver is penalised. <strong>Human intelligence such as experience with traffic patterns, temporary blockages due to repairs and construction, or even shorter routes off the main road are not taken into consideration</strong>.</p>
<p>Seen at a macro level,<strong> would this create a loss of identity where human intelligence and experience of both the driver and the customer is overridden by a machine</strong>? In simple terms, here, a human has been displaced by AI. We may not accept it, but this is, in essence, <strong>a machine planning what is best for a human</strong>.</p>
<p>Man’s inventions have sought to reduce human effort and this is apparent now more than ever. Every time electronic mediums evolve, they tend to make services obsolete, and now, they are making human decision-making obsolete too.</p>
<p><strong>What is the basis on which machines take decisions? The answer here is data &#8211; hundreds of thousands of gigabytes of data collected across the world</strong> &#8211; it is collected when you walk into a store and buy a toothpaste, it is collected when you pay for a ticket online, it is collected when you visit your doctor &#8211; the list is endless.</p>
<p>“Data helps you react fast to consumer needs and helps companies address them faster,” says Partha De Sarkar, CEO of Hinduja Global Solutions. He says statistical modelling, thanks to modern data libraries and computing power, has combined with AI and Machine Learning algorithms to throw up insights about a customer like never before.</p>
<p>AI is the beginning of a human-machine partnership but this partnership should start off with the coming together of many minds &#8211; sociologists, scientists, and engineers – who must deliberate on the effects of AI on communities and individuals.</p>
<blockquote><p>“In the end, it is the treatment of the data where biases creep in,” says Varun Mayya, co-founder of Avalon Labs. He says every founder must be responsible for the AI platforms they set up, even before they get consumers and clients to use them.</p></blockquote>
<p>Experiences make each person different, but that is not exactly how AI works. Its algorithms bucket humans into broad data types, disregarding cultures and preferences.</p>
<h2><strong>The algorithm bias</strong></h2>
<p>The cognitive revolution, touted as the next best thing in AI, thus falls flat when engineers use data to typecast individuals in a data set. “<strong>It is important for those claiming to use AI for consumer services to work with psychologists and sociologists before claiming their systems are representative of all communities and races</strong>,” says Nischith Rastogi, co-founder of Locus, a logistics tech company.</p>
<p>One such example where machine learning models erred with biases is the underwriting of loans. The machine set higher rates for individuals who it thought came from certain communities, income brackets and geography, not taking into account individuals who had the ability to service a loan.</p>
<p>“<strong>Biases creep into AI fast. It is something that startups and corporates should be cognisant of</strong>,” says Nischith.</p>
<p>The question then is, why is data biased? It starts from the collection of this data and medium it is captured from &#8211; the smartphone.</p>
<p>Smartphones create billions of data points about our food habits, fitness regimens, conversations, shopping lists, and payments. Here are a few biases thrown in by AI &#8211;</p>
<p><strong>Entertainment</strong>: When several members of a family together watch an online streaming service, recommendations are based on past selections. Now, these may be of a particular individual and not necessarily what would serve a common interest.</p>
<p>According to a blog by PwC, there’s a need to understand the bias in data, the strengths of the algorithms used, and the “generalisability” of unseen data.</p>
<p>The blog adds that while the governance structure used for standard statistical models can be used for machine learning, there are a number of additional elements of software development that must be considered. PWC continues to warn that the tests machine learning models “go through” need to be significantly more robust, and a machine learning governance quality assurance framework will make developers more aware of statistical and software engineering constructs that the model operates within.</p>
<p>According to IBM, <strong>AI systems are only as good as the data we put into them. Poor data can contain racial, gender, or ideological biases and many AI systems continue to be trained using bad data, making it an on-going problem</strong>. “But we believe that bias can be tamed and that the AI systems that will tackle bias will be the most successful,” says IBM in its blog.</p>
<p><strong>Retail and fashion: </strong>If you shop for fashion or beauty products, there is not only peer pressure to contend with now, but that from AI recommendations as well. AI today tends to attack you with a plethora of choices. And with fashion comes its ugly cousin &#8211; body shaming!</p>
<p>Earlier, the written word, in the form of fashion magazines, carried bias with pictures, and now, the same biases are carried over when building AI recommendations. With younger individuals taking to smartphones, these ‘recommendations’ may lead to unreasonable expectations from oneself.</p>
<p><strong>Food and life:  </strong>Everyone wants to live healthily, and it is widely understood that cultural moorings play a big role in what one can and cannot eat. The world of food and nutrition apps, however, tends to standardise profiles in broad strokes, fitting people into broad data buckets.</p>
<p>“We are training our data models to be as robust as possible when it comes to recommendations. The algorithms learn only if developers ask the right questions,” says Tushar Vashist, co-founder of Healthifyme.</p>
<p>No wonder then that governments are beginning to sit up and take notice, and action. The UK parliament has commissioned a study on AI and the ethics surrounding its applications. The House of Lords-appointed <u>Committee</u> to “consider the economic, ethical and social implications of advances in artificial intelligence” was set up on June 29, 2017, and will seek answers to five key questions:</p>
<ul>
<li>How does AI affect people in their everyday lives, and how is this likely to change?</li>
<li>What are the potential opportunities presented by Artificial Intelligence for the UK? How can these be realised?</li>
<li>What are the possible risks and implications of Artificial Intelligence?  How can these be avoided?</li>
<li>How should the public be engaged with in a responsible manner about AI?</li>
<li>What are the ethical issues presented by the development and use of Artificial Intelligence?</li>
</ul>
<p>There are also strong voices around the world on the need for regulatory bodies for Artificial Intelligence to study the ethics of AI.</p>
<p><strong>Back home, what do Indian policymakers have to say about this? Nothing much, is the simple answer.</strong></p>
<p>The Niti Aayog, which creates broad policy frameworks, is keen on creating opportunities for Indians to invest in AI, but is mum on the moral and ethical frameworks of the technology.</p>
<p>“We absolutely need auditability and explainability,” says K M Madhusudan, CTO of Mindtree. He adds there are two aspects to this &#8211; one is for serious enterprise-level AI adoption, for which technologists must ensure AI can explain why it made a particular decision. The second is to ensure it is not biased.</p>
<p>The list of biases can be endless. But, it’s time for startups using AI to wake up and smell reality. It is in their interest to do so because they will hopefully soon be liable for the instructions or recommendations made by the AI. To avoid this, one must venture into creating reams of data before providing choices to individuals. In the end we are just one big data set.</p>
<p>The post <a href="https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/">Will Artificial Intelligence override human intelligence and experiences?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/will-artificial-intelligence-override-human-intelligence-and-experiences/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial intelligence is learning to see in the dark</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 18 May 2018 05:51:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI algorithm]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[image sensors]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2401</guid>

					<description><![CDATA[<p>Source &#8211; qz.com Cameras—especially phone cameras—are terrible at taking pictures in the dark. The tiny image sensors in most modern cameras can only absorb a small amount of light, which <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/">Artificial intelligence is learning to see in the dark</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; qz.com</p>
<p>Cameras—especially phone cameras—are terrible at taking pictures in the dark. The tiny image sensors in most modern cameras can only absorb a small amount of light, which often results in dark, grainy images.</p>
<p>To try to solve this problem without inventing a new image sensor, researchers at Intel and the University of Illinois Urbana-Champaign taught an artificial intelligence algorithm how to take the data from darker images and reconstruct them so that they’re brighter and clearer, according to research published this month and to be presented in June at an industry conference.</p>
<p>To train the algorithm, the researchers showed it two versions of more than 5,000 images taken in low-light scenarios: one set that was taken to be purposefully too dark, and one set that was taken with a longer exposure time, meaning the sensor is given more time to collect light and better expose the image. (To do that, you need to hold the camera extremely still for a few seconds or more, which is why it’s not practical in most picture-taking scenarios.)</p>
<p>The Intel and UIUC team claims the algorithm can now amplify low-light images the equivalent of up to 300 times the exposure, without the same noise and discoloration that programs like Photoshop might introduce or having to take two separate images.</p>
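<p>To see why a learned model is needed at all, consider the naive alternative: simply multiplying a short exposure up by the exposure gap. The sketch below is an assumption-laden illustration, not the researchers&#8217; code &#8211; a plain multiply brightens noise just as much as signal, and clips anything past saturation.</p>

```python
def amplify(raw_pixels, ratio, white_level=255):
    """Naively brighten a short exposure by multiplying it up.

    `ratio` plays the role of the exposure gap (e.g. 300x). This
    amplifies noise along with signal and clips at saturation, which is
    exactly the discoloration and grain a learned model is meant to avoid.
    """
    return [min(p * ratio, white_level) for p in raw_pixels]
```

<p>A trained network instead learns to reconstruct plausible detail from the dark data, rather than scaling whatever noise happens to be there.</p>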
<p>While the team did build a custom algorithm to do the task, the most innovative aspect of the work is the dataset they created. In the paper, the researchers write that no dataset with low-light images at different exposures publicly exists. Chen Chen, a co-author on the paper who worked on the project as a part of an internship at Intel, says at first, they tried to get around having to take thousands of original images by printing out pictures of objects and then taking pictures of the printouts in low-light and well-lit scenarios. But in the end, that synthetic data didn’t produce good results, Chen says.</p>
<p>So, Chen spent two months collecting images of outdoor low-light scenarios, and a week collecting images in low-light indoor scenarios. He took photos with two kinds of consumer cameras that use different image processing methods to ensure the algorithm wouldn’t just learn to only work on one camera manufacturer’s technology.</p>
<p>But even though the data was generated using high-resolution digital cameras, the team found that the algorithm also improved underexposed images from an iPhone 6S—a sign that the low light capabilities of our smartphones might be only a software update away. To make that process even faster, the team has posted code and the dataset online, which can be found on GitHub.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/">Artificial intelligence is learning to see in the dark</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-is-learning-to-see-in-the-dark/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Smartphones &#8211; Artificial Intelligence and Machine Learning</title>
		<link>https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 23 Apr 2018 06:08:11 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Mining]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[data mining]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[smartphones]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2268</guid>

					<description><![CDATA[<p>Source &#8211; gsmarena.com Among other things, for a smartphone to be bang on trend these days it needs, a tall 18:9 display with minimum bezels (with a notch <a class="read-more-link" href="https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/">Smartphones &#8211; Artificial Intelligence and Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; gsmarena.com</p>
<p>Among other things, for a smartphone to be bang on trend these days it needs a tall 18:9 display with minimum bezels (with a notch thrown in for good measure), a superb camera system, and Artificial Intelligence and/or Machine Learning.</p>
<p>Artificial Intelligence and Machine Learning are buzzwords being adopted and applied throughout our smartphones’ makeup, from the System on a Chip all the way through to the operating system. So, is it just marketing hype and science fiction, or is there fact behind the fiction? Read on, and we’ll provide a straightforward, and where possible jargon-free, overview.</p>
<h3>So what’s the difference between Artificial Intelligence and Machine Learning?</h3>
<p>Artificial Intelligence is best characterized as the ability of a machine to exhibit learning, behavior and communication with no discernible difference from ourselves. Surely this belongs in the realms of science fiction? Well, if we ended here, then we&#8217;d agree &#8211; but let’s dig deeper.</p>
<p>Taking the AI generalization (General AI) described above, let’s narrow it down and pick a specific area that is more relevant to our subject matter &#8211; for example, image recognition &#8211; and we’ll call this Narrow AI.</p>
<p>Now, our smartphones didn’t all of a sudden develop the ability to recognize and differentiate between a car and a plate of food overnight.</p>
<p>It was taught. The ability for a smartphone to ‘truly’ learn something new in its purest form, that is, without intervention, is still a ways off.</p>
<p>I was part of a team that launched a loyalty card scheme for a major UK retailer, which today has circa 16 million cardholders. Now imagine the volume of data that we were collecting. A <i>customer</i> database containing all the information that those 16 million provided during the registration process &#8211; gender, age, children, address &#8211; which we only added to over time. A <i>transaction</i> database recording every item purchased, including the date, time and store, associated with that customer.</p>
<p>What insights and intelligence our systems would give us &#8211; but the reality was somewhat different. We didn’t arrive at the office one day, to discover our insight systems had given a ‘truth’ or trend that we hadn’t contemplated. No, our insights were directly answering questions that we asked &#8211; how many women, matching a particular demographic, hadn’t purchased a specific brand of perfume for example. We could take that insight and attempt to brand switch them. This was data mining, albeit on a massive scale with rules and logic created by us.</p>
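<p>That kind of hand-posed question is, at heart, just a join and a filter over the two databases. A toy sketch &#8211; with entirely hypothetical field names and records &#8211; might look like this:</p>

```python
# Hypothetical, simplified customer and transaction tables.
customers = [
    {"id": 1, "gender": "F", "age": 34},
    {"id": 2, "gender": "F", "age": 41},
    {"id": 3, "gender": "M", "age": 29},
]
transactions = [
    {"customer_id": 1, "item": "perfume_brand_x"},
    {"customer_id": 3, "item": "perfume_brand_x"},
]

def non_buyers(customers, transactions, item, **criteria):
    """Customers matching `criteria` who never bought `item` --
    the kind of hand-written question the rules-and-logic systems answered."""
    buyers = {t["customer_id"] for t in transactions if t["item"] == item}
    return [
        c for c in customers
        if c["id"] not in buyers
        and all(c.get(k) == v for k, v in criteria.items())
    ]
```

<p>The point is that every insight here answers a question a human thought to ask; nothing emerges that wasn’t queried for.</p>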
<p>Now back to our smartphones, this is where Machine Learning enters the frame. Taking a practical example &#8211; Apple Photos People Album and let’s assume for one moment that we’ve never ‘tagged’ anybody previously.</p>
<p>When you first view the People Album it only shows photos where the geometry of a ‘face’ has been identified, no names.</p>
<ul>
<li>Pick a face with no name, select it and give it a name.</li>
<li>It will then attempt to confirm that another face is this person. At this point it’s only got the first face to work with, so the offered-up face the second time may be way off. So you tell it ‘yes’ or ‘no’ and repeat.</li>
</ul>
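<p>The confirm-and-refine loop above can be sketched as a toy nearest-centroid classifier. Everything here is illustrative, not Apple’s actual pipeline: the ‘embeddings’ stand in for whatever face geometry the phone extracts, and each confirmed ‘yes’ simply adds another vector for that name, so later guesses improve.</p>

```python
import math

class FaceAlbum:
    """Toy model of the tag-confirm-repeat loop (illustrative only)."""

    def __init__(self):
        self.embeddings = {}  # name -> list of confirmed face vectors

    def tag(self, name, face):
        """User assigns a name to a face: the first manual tag, or a 'yes'."""
        self.embeddings.setdefault(name, []).append(face)

    def guess(self, face):
        """Offer the closest known person for an untagged face."""
        best_name, best_dist = None, float("inf")
        for name, faces in self.embeddings.items():
            # Average all confirmed examples of this person.
            centroid = [sum(component) / len(faces) for component in zip(*faces)]
            distance = math.dist(face, centroid)
            if distance < best_dist:
                best_name, best_dist = name, distance
        return best_name

album = FaceAlbum()
album.tag("Alice", [0.9, 0.1])          # the first manual tag
print(album.guess([0.8, 0.2]))          # -> Alice (the only person it knows)
album.tag("Alice", [0.8, 0.2])          # user confirms 'yes'; the model improves
```

With more confirmed faces per person, the centroid becomes a better summary of how that person looks across angles, hairstyles, and time, which is the behaviour described above.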
<p>With every iteration, it’s learning more and more about that face from different angles, varying hairstyles, and what happens as it ages and so on. You, therefore, reach a point where you take a picture of that person, and it’s automatically tagged with the right name.</p>
<p>This is a prime example of Machine Learning being used to enable Narrow Artificial Intelligence.</p>
<p>In our above example, we put in the effort to teach our smartphone about the faces of our friends as they’re unique to us. In our camera apps and supporting silicon that effort has already been put in by other companies providing a baseline. They’ve utilized Machine Learning to log and categorize a plethora of photos, so when you frame that perfect plate of food you’re about to eat, your smartphone knows and applies the appropriate filters to take the best possible photo and tag it as a particular food type.</p>
<p>In the future, these Narrow AI areas will be expanded to better work together. Sticking with our face recognition theme, the iPhone X uses AI within Face ID to learn your face to unlock your iPhone in a multitude of different scenarios. Imagine a future where that process is automatically extended to the Photos app to better know about your face to assist in either the initial recognition phase or the addition of further pictures.</p>
<h3>Dedicated Artificial Intelligence chipsets</h3>
<p>At this juncture it’s pertinent to talk about silicon. When manufacturers reference AI elements within their silicon, think of them in a similar vein to the Graphics Processing Unit (GPU). Where the GPU provides developers with a set of efficient, accelerated APIs, for example to display a polygon within a specific coordinate space with a colored texture, the Artificial Intelligence silicon provides an efficient, accelerated set of APIs that support AI-related tasks via neural networks.</p>
<p>Examples of chipsets including AI-related hardware include:</p>
<ul>
<li>Huawei’s HiSilicon Kirin 970 neural processing unit (NPU)</li>
<li>Qualcomm&#8217;s Snapdragon 845 Hexagon 685 DSP AI platform</li>
<li>Apple&#8217;s A11 Bionic Neural Engine</li>
</ul>
<p>Their neural network hardware can perform hundreds of billions of operations per second.</p>
<p>Fret not, though: if your smartphone’s chipset doesn’t contain any dedicated AI silicon, the processing will be undertaken in software. It will be less efficient, as it can’t call on the dedicated accelerated silicon, but it will use the GPU primarily and, in some cases, the CPU.</p>
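<p>Conceptually, this fallback is a preference-ordered dispatch: try the dedicated AI silicon first, then the GPU, then the CPU. A minimal sketch, with made-up backend names rather than any real driver API:</p>

```python
def run_inference(tensor, accelerators):
    """Run a workload on the best available backend (illustrative only)."""
    # Preference order mirrors the article: NPU/DSP first, then GPU, then CPU.
    for name in ("npu", "gpu", "cpu"):
        backend = accelerators.get(name)
        if backend is not None:
            return name, backend(tensor)
    raise RuntimeError("no execution backend available")

# A phone with no dedicated AI silicon simply has no 'npu' entry,
# so the same call transparently lands on the GPU instead.
software_only = {"gpu": lambda t: [x * 2 for x in t],
                 "cpu": lambda t: [x * 2 for x in t]}
print(run_inference([1, 2, 3], software_only))  # -> ('gpu', [2, 4, 6])
```

The app code is identical either way; only the efficiency of the chosen path differs, which is why the lack of AI silicon costs performance rather than functionality.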
<h3>Developers</h3>
<p>Until recently, while AI was accessible to the OS for its built-in apps and processes, it was harder for developers to implement local, on-device AI tasks within their apps. To do so, they had to bring their own AI along for the ride or plug into a third-party framework such as Amazon&#8217;s AWS Machine Learning. The landscape has changed, though, as both Android 8.1 and iOS 11 provide APIs allowing developers to bring Machine Learning easily into their apps.</p>
<p><b>Android 8.1</b></p>
<p>The Android Neural Networks API (NNAPI) is designed for running computationally intensive operations for machine learning. NNAPI is designed to provide a base layer of functionality for higher-level machine learning frameworks (such as TensorFlow Lite, Caffe2, or others) that build and train neural networks.</p>
<p><b>iOS 11</b></p>
<p>Core ML is a foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType.</p>
<h3>Privacy</h3>
<p>Companies live and die by our data privacy and employ different approaches to ensure it. For some, the data never leaves the phone, and if it does, it’s tokenized. For others, encrypted data in the cloud provides additional opportunities to enhance and enrich the experience.</p>
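<p>Tokenization in this sense can be as simple as replacing an identifier with a salted, one-way hash before anything is uploaded, so the cloud only ever sees the token. A hedged stdlib sketch (real vendor implementations differ):</p>

```python
import hashlib
import secrets

def tokenize(identifier: str, device_salt: bytes) -> str:
    """Return a salted, irreversible token for an on-device identifier."""
    return hashlib.sha256(device_salt + identifier.encode()).hexdigest()

salt = secrets.token_bytes(16)          # generated once, kept on the device
token = tokenize("alice@example.com", salt)
print(token != "alice@example.com")     # the raw email never leaves the phone
```

Because the salt never leaves the device, the same token cannot be reversed or correlated across vendors, while the phone itself can still recompute it deterministically.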
<p>Moving forward, a level of permission sharing among friends and family will provide additional time-saving benefits to us all. Sticking with our theme of face recognition, sharing the learning undertaken by family and close friends ensures that if someone has put effort into teaching their photo library about their son’s face, that knowledge is automatically passed on and applied to ALL their libraries.</p>
<p>We’ve focused heavily on photography in this piece, both to keep the article short and to avoid jumping around, but AI and ML also apply to other areas:</p>
<ul>
<li>Natural language understanding, including speech and handwriting recognition</li>
<li>Using smartphone sensors to better understand what’s happening in the user’s environment</li>
<li>Predictive interfaces, user workflows, and content censorship/parental controls</li>
<li>Phone security</li>
<li>Enhanced image processing</li>
<li>Augmented Reality and AI vision</li>
<li>On-device app/system management to further maximize battery life</li>
</ul>
<p>One final point before we sign off: if standards ever emerge in this arena, they could potentially allow your intelligence to be shared between different manufacturers’ devices and services. We know &#8211; wishful thinking.</p>
<p>So don’t worry just yet. Your smartphone isn’t capable of becoming self-aware in a similar vein to Skynet and terminating you when you next lift your phone to your ear. What we are seeing is that, now more than ever before, the ‘smart’ part of our phones is truly starting to earn its moniker. So, for now, let’s enjoy all the benefits that Artificial Intelligence, through Machine Learning, brings to our everyday interactions with our devices.</p>
<p>The post <a href="https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/">Smartphones &#8211; Artificial Intelligence and Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/smartphones-artificial-intelligence-and-machine-learning/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
	</channel>
</rss>
