<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Edge computing Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/edge-computing/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/edge-computing/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 29 May 2020 06:44:27 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Deep Learning Software Accelerator to Speed AI Deployments</title>
		<link>https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 29 May 2020 06:44:22 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Computing]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Edge computing]]></category>
		<category><![CDATA[Processor]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9110</guid>

					<description><![CDATA[<p>Source: eetindia.co.in Today, there is a lot of talk about artificial intelligence (AI). Any technological, electronic, recreational and IT sector is increasingly exploring AI. In particular, the world’s <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/">Deep Learning Software Accelerator to Speed AI Deployments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: eetindia.co.in</p>



<p>Today, there is a lot of talk about artificial intelligence (AI). Sectors from technology and electronics to entertainment and IT are increasingly exploring AI. In particular, the world’s attention is focused on deep learning, a sub-field of AI inspired by the brain and built on artificial neural networks.</p>



<p>DeepCube provides a software-based inference accelerator that enables efficient deployment of deep learning models on intelligent edge devices. DeepCube was co-founded by Eli David, a leading deep learning expert (see figure 1), and Yaron Eitan, a serial technology entrepreneur with over thirty years of experience. David has published over 50 papers in leading AI publications and previously co-founded another deep learning company, Deep Instinct. The DeepCube team is made up of 15 researchers, most of whom are former master’s and doctoral students. They have spent the last two years working to bring to market advanced technology that will have a major impact on deep learning deployment in the real world.</p>



<p>According to these scientists, most automated activities, such as self-driving cars or medical image analysis, are based on deep learning. Such sophisticated deep learning models tend to be quite large, which translates into extremely heavy processing requirements and costs. Running them typically requires dedicated, expensive hardware with large amounts of memory. For these reasons, it is almost impossible to deploy these models directly on edge devices such as mobile phones, drones, cameras or autonomous cars.</p>



<p>One workaround is to offload the computation to third-party services: external servers, clouds, remote PCs, and so on. In practice, however, this approach suffers from high latency and the lack of real-time response (compounded by limited bandwidth), and the costs would be too high.</p>



<p>“Also, you don’t always have continuous connectivity with most edge devices. DeepCube is trying to fill that gap by creating a dedicated artificial intelligence accelerator. It achieves the same goal but 10-20 times faster, through software,” said Eli David, co-founder of DeepCube.</p>



<p>Through a series of innovations, DeepCube has made deep learning models about 10 times leaner. This brings several advantages: the models run roughly 10 times faster and consume less memory, so energy consumption decreases as well.</p>



<p><strong>Our brain as a reference</strong></p>



<p>DeepCube’s technology takes its inspiration directly from how our brain works. The human brain has the most connections between neurons not in adulthood but between roughly two years of age and adolescence; much of learning happens not by adding new connections but by removing unnecessary or redundant ones. In the same way, DeepCube removes more than 90% of a deep learning model’s connections during the training phase while maintaining the same accuracy.</p>
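<p>DeepCube’s actual method is proprietary and not described in the article, but the pruning idea above can be illustrated with a generic magnitude-pruning sketch (the function name and the 90% figure below are only illustrative):</p>

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of a layer's weights.

    A generic illustration of connection pruning; this is not
    DeepCube's (proprietary) algorithm.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)              # connections to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold         # keep only the largest
    return weights * mask

# Example: prune a small 4x4 layer to roughly 90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.9)
print(np.count_nonzero(pruned), "of", w.size, "weights remain")
```

<p>In practice, pruning is usually applied iteratively during training, with fine-tuning between rounds; that is what allows such high sparsity to be reached without losing accuracy.</p>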



<p>DeepCube’s proprietary software can run on any existing hardware platform, slow or fast: Intel or AMD x86, ARM, or Nvidia GPUs. On average, any deep learning model runs about 10 times faster. The founding team pointed out that “accelerating inference is an easy way to measure improvement.”</p>



<p><strong>DeepCube is ready</strong></p>



<p>The technology is now mature and reliable (figure 2). It has been successfully demonstrated and tested with some of the most important semiconductor manufacturers in the world, who evaluated it independently on their own hardware. The next step is to form commercial partnerships in order to distribute the technology more broadly.</p>



<p><strong>Some sample tests</strong></p>



<p>The main tests conducted thus far have focused on artificial vision models, for example object recognition, or facial or voice recognition. Another area of application is natural language processing, which includes BERT, one of the largest deep learning models, requiring several gigabytes of memory. All these categories have been tested on different sub-models chosen by potential partners. The lowest acceleration rate was a 5x factor, while the best was around 20x; the average is about 10x, with varying effects on accuracy.</p>



<p>Today, there are excellent models for autonomous driving, particularly for recognizing objects and pedestrians. But the real problem is deploying such models inside a car. Low-end hardware is not an option, as it risks responses arriving too slowly on the road: when driving, even one second can be too long to wait for the vehicle to react to an object ahead. Some models also require a gigabyte of memory, making them impossible to fit on most edge devices. By making everything more than 10 times smaller, deployment on such constrained systems becomes a reality.</p>



<p>“There are many different ways in which companies are trying to deal with this problem; however, when most companies have tried to reduce the size of the model, they also lose accuracy,” said David. “Another approach that companies like Tesla are trying is to create a dedicated chip that is capable of performing very complex processing inside cars,” he continued. Obviously, the latter solution does not offer much flexibility: whenever the model improves, new chips must be produced.</p>



<p>The world will increasingly demand higher accuracy and precision, and the ability for machines to make important decisions on their own. It is not just a matter of speeding up this process, but of being able to deploy advanced technology within tight hardware constraints. Think, for example, of a security camera or a drone that must decide in real time how to respond to a situation.</p>



<p>In the short term, DeepCube is focused on further maturing the technology that makes deep learning models faster to train. That is just one pillar of deep learning the company hopes to improve on over the next few years as it continues its research and real-world deployments.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/">Deep Learning Software Accelerator to Speed AI Deployments</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-software-accelerator-to-speed-ai-deployments/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Proximity Computing? – Explanation and the 3 Types</title>
		<link>https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/</link>
					<comments>https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 18 Jul 2017 07:23:07 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[CDN]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Edge computing]]></category>
		<category><![CDATA[Proximity Computing]]></category>
		<category><![CDATA[Radio Access Network]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=138</guid>

					<description><![CDATA[<p>Source &#8211; iot-for-all.com Edge computing is a means of processing data physically close to where the data is being produced, i.e. where the things and humans are – <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/">What is Proximity Computing? – Explanation and the 3 Types</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>iot-for-all.com</strong></p>
<p>Edge computing is a means of processing data physically close to where the data is being produced, i.e. where the things and humans are – in the field area, homes, and remote offices.</p>
<p>Since things and people don’t live in the cloud, we need to complement cloud computing with many forms of computing at the edge to architect IoT solutions.</p>
<p>Discussions about edge computing often overlook how many <em>types of “edge” </em>computing there are. We’ll take a look at the <strong>fundamental drivers for edge computing</strong> and the many <strong>types of edges</strong> to consider.</p>
<p>Since we’re referring to computing close to the source of things, data, and action, I will use a more generic term for this type of data processing: <em>proximity computing</em>.</p>
<h3>The Economics of Proximity Computing</h3>
<p>Events in our world need a timely response either for good user experience (“changing TV channels”) or to avoid catastrophes (“gas line leak”).</p>
<p><img fetchpriority="high" decoding="async" class="progressiveMedia-image js-progressiveMedia-image aligncenter td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1600/1*wMzIQywphg0_Dk326bCOMQ.png" alt="proximity computing see process act (SPA)" width="511" height="351" /></p>
<p>As these events occur, we need to choreograph complex systems to Sense, Process, and Act (SPA). The cost of SPA’ing is a function of local vs. remote processing costs, network connectivity costs and remote systems management costs.</p>
<p>Size and power aside, <strong>proximity computing balances the timeliness of responding to an event with the cost</strong> of that response. Legal restrictions on data traversing jurisdictions further drive the need for proximity computing.</p>
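<p>The timeliness-versus-cost balance above can be sketched as a toy decision rule (every name and number here is a hypothetical illustration, not part of any real framework):</p>

```python
def choose_processing_site(deadline_ms, cloud_rtt_ms, local_cost, remote_cost):
    """Pick where to run the 'Process' step of Sense-Process-Act (SPA).

    Toy model: a response that must beat its deadline cannot wait for a
    cloud round trip, so it runs locally regardless of cost; otherwise
    the cheaper site wins. All parameters are hypothetical.
    """
    if cloud_rtt_ms >= deadline_ms:
        return "edge"      # timeliness forces local processing
    return "edge" if local_cost <= remote_cost else "cloud"

# "Gas line leak": 50 ms deadline vs. a 120 ms cloud round trip -> edge.
print(choose_processing_site(50, 120, local_cost=5, remote_cost=2))
# "Changing TV channels" analytics: loose deadline, cheaper cloud wins.
print(choose_processing_site(2000, 120, local_cost=5, remote_cost=2))
```

<p>A real deployment would fold in the connectivity and remote-management costs mentioned above, but the shape of the tradeoff is the same.</p>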
<h3>Many-Edged Landscape – Types of Proximity Computing</h3>
<p>When it comes to optimal proximity computing, there are many edges to consider. I think of them as: <strong><em>the Personal Edge, the Business Edge, and the Cloudy Edge</em></strong>. These three edges apply SPA (Sense-Process-Act) to different sets of problems in different environments to produce an optimal, automated response, such as when your lawn mower is stolen.</p>
<p><img decoding="async" class="progressiveMedia-image js-progressiveMedia-image alignnone td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1600/1*U4vv-JPpdfeQ09GufPUxCA.png" alt="proximity computing - edge, edgier, edgiest" width="1369" height="668" /></p>
<p>We will explore each edge separately from personal to cloud and their drivers.</p>
<h4>1) The Personal Edge</h4>
<p>This Edge surrounds our person and is sometimes inside us. It’s at home. It comprises home robots, smart eyeglasses, smart pills, sensors under your skin, watches, home automation systems, your Amazon Echo and your smartphone. Call it the <strong><em>Alexa edge or Siri edge</em></strong>, or name it after your favorite gadget.</p>
<p>Collectively, this edge is mobile. Devices from the personal edge move in and out of the business edge as we move between home and work.</p>
<p><img decoding="async" class="progressiveMedia-image js-progressiveMedia-image aligncenter td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1200/1*Pu5RVHdXQjNDThwGjiH4Xg.png" alt="proximity computing - personel edge" width="373" height="332" /></p>
<p>We’ll hear more about this edge in the next 5 years as intelligent home devices, digital health, and other personal devices proliferate.</p>
<h4>2) The Business Edge</h4>
<p>This is the most talked about Edge. Connected machines and people galore here. It’s in our carpeted offices, in uncarpeted spaces, out in the open where we work and play.</p>
<p>Many IoT discussions seem to presume that this is the only edge and extol its virtues, so I’ll be brief. Mission-critical SPA (Sense-Process-Act) goes on in this realm, especially as Industrial IoT gathers momentum.</p>
<p><img loading="lazy" decoding="async" class="progressiveMedia-image js-progressiveMedia-image aligncenter td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1600/1*rI5DOn_hTNzycZR1avgf7Q.png" alt="proximity computing - business edge" width="607" height="315" /></p>
<p>Many vendors provide application development environments for building edge applications and analytics. <strong>AWS Greengrass</strong> and <strong>Azure IoT Hub</strong> are examples of such software.</p>
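<p>A framework-agnostic sketch of what such an edge function does (plain Python with hypothetical names; this is not the Greengrass or Azure IoT Hub API) is to process sensor readings locally and forward only the interesting ones to the cloud:</p>

```python
import json
import statistics

def edge_handler(readings, threshold=1.5):
    """Sketch of an edge analytics function, with hypothetical names.

    Process readings on the device and report only anomalies (simple
    z-score rule) to the cloud, instead of streaming every raw value.
    """
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings) or 1.0   # guard against zero spread
    anomalies = [x for x in readings if abs(x - mean) / stdev > threshold]
    # Only this small payload would leave the device.
    return json.dumps({"count": len(readings), "anomalies": anomalies})

# Four normal temperature readings and one outlier.
payload = edge_handler([20.1, 20.3, 19.9, 20.2, 87.5])
print(payload)
```

<p>Only the small anomaly payload leaves the device, which is what saves bandwidth and round-trip latency compared with shipping every raw reading to the cloud.</p>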
<h4>3) The Cloudy Edge</h4>
<p>This is the least talked about Edge, but the oldest one. It is a topological term for the service provider or enterprise <strong>network edge</strong>, where traffic first entered from dial-up modems (yes, those screeching devices from the ’90s!) in homes, or from remote branch offices.</p>
<p>These locations used to be mere network edges without any computing capacity; they were called PoPs (points of presence).</p>
<p><img loading="lazy" decoding="async" class="progressiveMedia-image js-progressiveMedia-image aligncenter td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1600/1*KtwM3i_VaKb-wPtD7m2img.png" alt="proximity computing - cloudy edge" width="569" height="306" /></p>
<p>The demand for application performance and content delivery required the network edge to add applications and data, a need that modern Edge Data Centers fulfill. Content Delivery Networks (CDNs) leverage them so that we get faster page and video loads. This edge is now enhanced with Mobile Edge Computing, born of the need for better mobile app performance, which colocates applications with Radio Access Network (RAN) PoPs.</p>
<p>So the old PoP got edgy with content and computing. The SP edge, mobile edge, and enterprise edge came together to become the Cloud’s Edge. This edge remains relevant for ensuring application performance and smooth content delivery.</p>
<h3>Summary</h3>
<p>There are many ways to describe what IoT is. All of them can be right; some are more comprehensive than others.</p>
<p>One such description is: <strong>IoT = Distributed artificial and human intelligence across a labyrinth of connected devices.</strong></p>
<p><img loading="lazy" decoding="async" class="progressiveMedia-image js-progressiveMedia-image alignnone td-animation-stack-type0-1" src="https://cdn-images-1.medium.com/max/1600/1*mEUaZYYrphkplV1EacQy-A.png" alt="proximity computing the internet of things" width="1251" height="566" /></p>
<p>Technologies like AWS Lambda and its edge cousin Greengrass are helping accelerate the creation of this distributed intelligence. How you create your distributed application intelligence across the personal, business, and cloud edge will depend on your application, the costs, and the regulations involved.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/">What is Proximity Computing? – Explanation and the 3 Types</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-proximity-computing-explanation-and-the-3-types/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
