<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Intel Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/intel/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/intel/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 17 Jul 2020 06:01:01 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch</title>
		<link>https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/</link>
					<comments>https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Jul 2020 06:00:58 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[NEUROMORPHIC]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10243</guid>

					<description><![CDATA[<p>Source: enterpriseai.news Moving neuromorphic technology from the laboratory into practice has proven slow-going. This week, National University of Singapore researchers moved the needle forward demonstrating an event-driven, <a class="read-more-link" href="https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/">Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterpriseai.news</p>



<p>Moving neuromorphic technology from the laboratory into practice has proven slow-going. This week, National University of Singapore researchers moved the needle forward by demonstrating an event-driven, visual-tactile perception system that uses Intel’s Loihi chip to control a robotic arm, combining tactile sensing and vision. Notably, they also ran the exercise on a GPU system and reported that the Loihi-based system performed slightly better and at much lower power.</p>



<p>NUS researchers presented their results today at the virtual Robotics: Science and Systems conference being held this week. The combination of tactile sensing (grip) with vision (location) is expected to significantly enhance robotic arm precision and delicacy of grip when handling objects. The use of neuromorphic technology also promises progress toward reducing the power consumption of robots, a central goal of the field.</p>



<p>“We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step toward building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations,” said Harold Soh, a NUS professor and an author of the paper describing the work (Event-Driven Visual-Tactile Sensing and Learning for Robots).</p>



<p>Intel has long been at the forefront of efforts to commercialize neuromorphic technology, and its Loihi (chip)/Pohoiki (system) platform is among the most developed. Neuromorphic systems mimic natural systems such as the brain in that they process information with spiking neural networks (SNNs) instead of the artificial neural networks (ANNs) more commonly used in machine and deep learning.</p>
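


<p>For readers unfamiliar with the distinction, the following sketch simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit of many SNNs: instead of producing a continuous activation the way an ANN unit does, it accumulates input over time and emits discrete spikes. The threshold, decay, and drive values are illustrative assumptions, not Loihi parameters.</p>



<pre class="wp-block-code"><code>import numpy as np

def lif_neuron(input_current, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    each step, integrates the input, and emits a binary spike when it
    crosses the threshold (then resets)."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = decay * v + i_t          # leak, then integrate
        if v >= threshold:
            spikes.append(1)         # discrete spike event
            v = 0.0                  # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive yields a regular spike train; information lives in
# spike timing and rate rather than in real-valued activations.
drive = np.full(100, 0.25)
print(lif_neuron(drive).sum(), "spikes in 100 steps")</code></pre>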



<p>Mike Davies, director of Intel’s Neuromorphic Computing Lab, said, “This research from National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture.” Intel also posted an account of the work.</p>



<p>This excerpt from the NUS paper nicely describes the challenge and contribution:</p>



<p>“Many everyday tasks require multiple sensory modalities to perform successfully. For example, consider fetching a carton of soymilk from the fridge: humans use vision to locate the carton and can infer from a simple grasp how much liquid the carton contains. They can then use their sense of sight and touch to lift the object without letting it slip. These actions (and inferences) are performed robustly using a power-efficient neural substrate—compared to the multi-modal deep neural networks used in current artificial systems, human brains require far less energy.</p>



<p>“In this work, we take crucial steps towards efficient visual-tactile perception for robotic systems. We gain inspiration from biological systems, which are asynchronous and event-driven. In contrast to resource-hungry deep learning methods, event-driven perception forms an alternative approach that promises power-efficiency and low-latency—features that are ideal for real-time mobile robots. However, event-driven systems remain under-developed relative to standard synchronous perception methods.”</p>



<p>The value of multi-modal sensing has long been recognized as an important component for advancing robotics. However, limitations in the use of spiking neural networks have impeded the use of neuromorphic chips in real-time sensing functions.</p>



<p>“Event-based sensors have been successfully used in conjunction with deep learning techniques. The binary events are first converted into real-valued tensors, which are processed downstream by deep ANNs (artificial neural networks). This approach generally yields good models (e.g., for motion segmentation, optical flow estimation, and car steering prediction), but at high compute cost,” write the researchers.</p>



<p>“Neuromorphic learning, specifically Spiking Neural Networks (SNNs), provides a competing approach for learning with event data. Similar to event-based sensors, SNNs work directly with discrete spikes and hence possess similar characteristics, i.e., low latency, high temporal resolution and low power consumption. Historically, SNNs have been hampered by the lack of a good training procedure. Gradient-based methods such as backpropagation were not available because spikes are non-differentiable. Recent developments in effective SNN training, and the nascent availability of neuromorphic hardware (e.g., IBM TrueNorth and Intel Loihi), have renewed interest in neuromorphic learning for various applications, including robotics. SNNs do not yet consistently outperform their deep ANN cousins on pseudo-event image datasets, and the research community is actively exploring better training methods for real event-data.”</p>
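


<p>The non-differentiability the paper mentions is easy to see: a spike is a step function of the membrane potential, so its true derivative is zero almost everywhere. A common workaround is a surrogate gradient, sketched below in PyTorch. This is a generic illustration under assumed settings, not necessarily the training procedure the NUS team used.</p>



<pre class="wp-block-code"><code>import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    derivative in the backward pass lets backpropagation through."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return torch.gt(v, 0.0).float()          # step function: 0/1 spikes

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + v.abs()) ** 2   # fast-sigmoid derivative
        return grad_output * surrogate

spike = SpikeFn.apply
v = torch.randn(8, requires_grad=True)
spike(v).sum().backward()    # gradients flow despite the step nonlinearity
print(v.grad)</code></pre>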



<p>Another obstacle was simply developing adequate tactile sensing devices. “Although there are numerous applications for tactile sensors (e.g., minimally invasive surgery and smart prosthetics), tactile sensing technology lags behind vision. In particular, current tactile sensors remain difficult to scale and integrate with robot platforms. The reasons are twofold: first, many tactile sensors are interfaced via time-division multiple access (TDMA), where individual taxels are periodically and sequentially sampled. The serial readout nature of TDMA inherently leads to an increase of readout latency as the number of taxels in the sensor is increased. Second, high spatial localization accuracy is typically achieved by adding more taxels in the sensor; this invariably leads to more wiring, which complicates integration of the skin onto robot end-effectors and surfaces,” according to the paper.</p>
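


<p>The scaling problem with serial readout is simple arithmetic. Assuming an illustrative per-taxel sampling time (the figure below is an assumption, not from the paper), a full TDMA scan grows linearly with taxel count, while an event-driven sensor reports only the taxels that changed:</p>



<pre class="wp-block-code"><code># Illustrative per-taxel sampling time; an assumption, not a
# figure from the paper.
SAMPLE_TIME_US = 25.0

for taxels in (39, 100, 500, 1000):
    scan_ms = taxels * SAMPLE_TIME_US / 1000.0
    print(f"{taxels:5d} taxels: {scan_ms:6.2f} ms per full TDMA scan")
# Readout latency grows linearly with taxel count; event-driven
# readout sidesteps this by transmitting only changed taxels.</code></pre>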



<p>The researchers developed their own novel “neuro-inspired” tactile sensor (NeuTouch): “The structure of NeuTouch is akin to a human fingertip: it comprises “skin” and “bone”, and has a physical dimension of 37×21×13 mm. This design facilitates integration with anthropomorphic end-effectors (for prosthetics or humanoid robots) and standard multi-finger grippers; in our experiments, we use NeuTouch with a Robotiq 2F-140 gripper. We focused on a fingertip design in this paper, but alternative structures can be developed to suit different applications,” wrote the researchers.</p>



<p>NeuTouch’s tactile sensing is achieved via a layer of electrodes with 39 taxels and a graphene-based piezoresistive thin film. The taxels are elliptically-shaped to resemble the human fingertip’s fast-adapting (FA) mechano-receptors, and are radially-arranged with density varied from high to low, from the center to the periphery of the sensor.</p>



<p>“During typical grasps, NeuTouch (with its convex surface) tends to make initial contact with objects at its central region where the taxel density is the highest. Correspondingly, rich tactile data can be captured in the earlier phase of tactile sensing, which may help accelerate inference (e.g., for early classification). The graphene-based pressure transducer forms an effective tactile sensor, due to its high Young’s modulus, which helps to reduce the transducer’s hysteresis and response time,” report the researchers.</p>



<p>The primary goal, say the researchers, was to determine if their multi-modal system was effective at detecting differences in objects that were difficult to isolate using a single sensor, and whether the weighted spike-count loss resulted in better early classification performance. “Note that our objective was not to derive the best possible classifier; indeed, we did not include proprioceptive data which would likely have improved results, nor conduct an exhaustive (and computationally expensive) search for the best architecture. Rather, we sought to understand the potential benefits of using both visual and tactile spiking data in a reasonable setup.”</p>



<p>They used four different containers: a coffee can, a Pepsi bottle, a cardboard soy milk carton, and a metal tuna can. The robot grasped and lifted each object 15 times, classifying the object and determining its weight. The multi-modal SNN model achieved the highest score (81 percent), about ten percent better than any of the single-mode tests.</p>



<p>In terms of comparing the Loihi neuromorphic chip with the GPU (an Nvidia GeForce RTX 2080), their overall performance was broadly similar, but the Loihi-based system used far less power. The latest work is a significant step forward.</p>



<p>It’s best to read the full paper, but here is an overview of the experiment taken from it.</p>



<ul class="wp-block-list"><li><strong>Robot Motion.</strong>&nbsp;The robot would grasp and lift each object class fifteen times, yielding 15 samples per class. Trajectories for each part of the motion was computed using the MoveIt Cartesian Pose Controller. Briefly, the robot gripper was initialized 10cm above each object’s designated grasp point. The end-effector was then moved to the grasp position (2 seconds) and the gripper was closed using the Robotiq grasp controller (4 seconds). The gripper then lifted the object by 5cm (2 seconds) and held it for 0.5 seconds.</li><li><strong>Data Pre-processing.</strong>&nbsp;For both modalities, we selected data from the grasping, lifting and holding phases (corresponding to the 2.0s to 8.5s window in Figure 4), and set a bin duration of 0.02s (325 bins) and a binning threshold value Smin = 1. We used stratified K-folds to create 5 splits; each split contained 240 training and 60 test examples with equal class distribution.</li><li><strong>Classification Models.</strong>&nbsp;We compared the SNNs against conventional deep learning, specifically Multi-layer Perceptrons (MLPs) with Gated Recurrent Units (GRUs) [54] and 3D convolutional neural networks (CNN-3D) [55]. We trained each model using (i) the tactile data only, (ii) the visual data only, and (iii) the combined visual-tactile data. Note that the SNN model on the combined data corresponds to the VT-SNN. When training on a single modality, we use Visual or Tactile SNN as appropriate. We implemented all the models using PyTorch.</li></ul>
<p>The post <a href="https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/">Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/get-a-grip-intel-neuromorphic-chip-used-to-give-robotics-arm-a-sense-of-touch/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ForwardX Robotics Partners with Intel to Deliver Automation Solutions</title>
		<link>https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/</link>
					<comments>https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 15 Jul 2020 05:13:53 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[ForwardX]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Matrix platform]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10175</guid>

					<description><![CDATA[<p>Source: dcvelocity.com BEIJING, CHINA – Jul. 14, 2020 – ForwardX Robotics, a key player in the development of Autonomous Mobile Robots, today announced it has joined Intel <a class="read-more-link" href="https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/">ForwardX Robotics Partners with Intel to Deliver Automation Solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dcvelocity.com</p>



<p>BEIJING, CHINA – Jul. 14, 2020 – ForwardX Robotics, a key player in the development of Autonomous Mobile Robots, today announced it has joined the Intel IoT Solutions Alliance as an Intel IoT Market Ready Solutions (IMRS) partner. Joining a vast network of over 6,000 solutions and more than 500 IoT leaders, ForwardX’s Matrix platform will now benefit from Intel’s worldwide reach as the company continues to deliver its FLEX and MAX solutions to supply chain environments across the globe. The partnership reinforces ForwardX’s ability to address customer pain points and deliver robust solutions across increasingly competitive industries.</p>



<p>“A giant in the world of innovation, Intel has the expertise, experience, and means to deliver success far and wide. With our ambitions to reach any and every company that may benefit from our innovative solutions, a partnership with Intel is an exciting opportunity for ForwardX to explore,” said Shuo Zhang, Project Management Officer at ForwardX. “As part of Intel’s IoT Solutions Alliance, we will benefit on both the technological and commercial sides of our business.”</p>



<p>Intel’s IoT Solutions Alliance is one of six partnership programs currently offered and consists of a network of global partners and members working together to accelerate IoT adoption across a wide range of industries. Intel forecasts 55% of all data will be generated by IoT by 2025 and 43% of AI tasks will happen on edge devices by 2023. Given the disruptive potential of IoT, Intel’s IoT Solutions Alliance offers companies an abundance of scalable, interoperable solutions in order to accelerate the deployment of intelligent devices and end-to-end analytics. According to Intel’s website, the IoT Solutions Alliance aims to “enable a more intelligent Internet of Things (IoT), supporting enterprises that are moving to the edge so they can capture more data, analyze it faster, and act on it sooner.” While Intel’s IoT Solutions Alliance contains more than 6,000 solutions, only 135 have been recognized as IoT Market Ready Solutions; solutions granted this status must clearly demonstrate scalability, adaptability, and immediate implementation opportunities across multiple industries.</p>



<p>With the partnership now formalized, ForwardX will work with Intel on innovative initiatives as well as the continued delivery of cutting-edge robotics solutions to logistics and manufacturing functions across the electronics, automotive, general merchandise, and apparel industries. As part of the partnership, ForwardX will also benefit from the network’s collective know-how in innovative technology development and sales enablement, allowing it to accelerate new market opportunities.</p>



<p>ForwardX’s FLEX and MAX solutions utilize 8th Generation Intel Core i7 Processors for onboard processing, as well as Intel’s RealSense technology as part of the perception capabilities of ForwardX’s visual Autonomous Mobile Robots. Employing Intel’s leading technology allows ForwardX robots to gather and process a richer understanding of their environment, resulting in more robust solutions. Intel technology can be found in the following ForwardX products:</p>



<p>&#8211; ForwardX FLEX Standard<br>&#8211; ForwardX FLEX RFID<br>&#8211; ForwardX FLEX Double-Deck<br>&#8211; ForwardX MAX 200 Lift<br>&#8211; ForwardX MAX 500 Standard<br>&#8211; ForwardX MAX 500 Lift</p>



<p>“As an innovative startup, our focus is on developing and delivering disruptive technologies to solve problems in traditional industries. In these long-established, complex industries, the ability to utilize the expertise and experience of a leader like Intel is truly invaluable,” said Nicolas Chee, founder and CEO at ForwardX Robotics. “Moving forward, we are hopeful that this partnership will demonstrate ForwardX’s willingness to create win-win partnerships with likeminded leaders.”</p>



<p>About ForwardX Robotics<br>ForwardX Robotics is a global technology leader in the fields of AI and robotics, with over 160 patents pending and a team of over 180, including 120 engineers, 10 of whom hold PhDs. With team members hailing from top universities and leading companies, ForwardX comprises some of the world’s top computer vision scientists and robotics experts, as shown by its award-winning work, including two first-place prizes at TRECVID and a first-place prize in IEEE’s VOT-RT.</p>
<p>The post <a href="https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/">ForwardX Robotics Partners with Intel to Deliver Automation Solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/forwardx-robotics-partners-with-intel-to-deliver-automation-solutions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel Launches First Artificial Intelligence Associate Degree Program</title>
		<link>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/</link>
					<comments>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 30 Jun 2020 08:44:59 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[automotive]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9860</guid>

					<description><![CDATA[<p>Source: enterpriseai.news TEMPE, Ariz., June 29, 2020 &#8212; Intel is partnering with Maricopa County Community College District (MCCCD) to launch the first Intel-designed artificial intelligence (AI) associate <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/">Intel Launches First Artificial Intelligence Associate Degree Program</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterpriseai.news</p>



<p>TEMPE, Ariz., June 29, 2020 &#8212; Intel is partnering with Maricopa County Community College District (MCCCD) to launch the first Intel-designed artificial intelligence (AI) associate degree program in the United States. The Arizona Commerce Authority will also provide a workforce grant of $100,000 to support the program. It will enable tens of thousands of students to land careers in high-tech, healthcare, automotive, industrial and aerospace fields.</p>



<p>“We strongly believe AI technology should be shaped by many voices representing different experiences and backgrounds. Community colleges offer the opportunity to expand and diversify AI since they attract a diverse array of students with a variety of backgrounds and expertise. Intel is committed to partnering with educational institutions to expand access to technology skills needed for current and future jobs,” said Gregory Bryant, Intel executive vice president and general manager of the Client Computing Group.</p>



<p>Based in Tempe, Arizona, MCCCD is the largest community college district in the U.S. with an estimated enrollment of more than 100,000 students across 10 campuses and 10,000 faculty and staff members.</p>



<p>The AI program consists of courses that have been developed by MCCCD’s faculty and Intel leaders based on Intel software and tools such as the Intel Distribution of OpenVINO Toolkit and Intel Python. Intel will also contribute technical advice, faculty training, summer internships and Intel mentors for both students and faculty members. Students will learn fundamental skills such as data collection, AI model training, coding and exploration of AI technology’s societal impact. The program includes a social impact AI project that is developed with guidance from teachers and Intel mentors. Upon completion, MCCCD will offer an associate degree in artificial intelligence that can be transferred to a four-year college.</p>



<p>AI technology is rapidly accelerating, with new tools, technology and applications requiring workers to learn new skills. Recent studies show the demand for artificial intelligence skills is expected to grow exponentially. A 2020 LinkedIn report notes that AI skills are among the top five most in-demand hard skills. Research by the MCCCD Workforce and Economic Development Office estimates a 22.4 percent increase in these roles by 2029.</p>



<p>As of early June 2020, more than 43 million Americans have filed for unemployment benefits. Furthermore, a recent McKinsey study estimates that over 57 million jobs are vulnerable, meaning they are subject to furloughs, layoffs or being rendered unproductive. It is critical for educational institutions and corporations to collaborate to prepare for future workforce demands.</p>



<p>The program’s first phase will be piloted online at Estrella Mountain Community College and Chandler Gilbert Community College in fall 2020. As physical distancing requirements are lifted and the concerns of the COVID-19 pandemic decrease, classes will begin in-person at both campuses.</p>



<p>This expands on the Intel AI for Youth program, which provides AI curriculum and resources to over 100,000 high school and vocational students in nine countries and will continue to scale globally. (Read, “AI for Youth Uses Intel Technology to Solve Real-World Problems.”) Additionally, Intel recently collaborated with Udacity to create the Intel Edge AI for IoT Developers Nanodegree Program aimed at training 1 million developers. Intel has a commitment to expand digital readiness to reach 30 million people in 30,000 institutions in 30 countries. This builds on the company’s recently announced 2030 goals and Global Impact Challenges that reinforce its commitment to making technology fully inclusive and expand digital readiness.</p>



<p>Intel’s corporate responsibility and positive global impact work is embedded in its purpose to create world-changing technology that enriches the lives of every person on Earth. By leveraging its position in the technology ecosystem, Intel can help customers and partners achieve their own aspirations and accelerate progress on key topics across the technology industry.</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/">Intel Launches First Artificial Intelligence Associate Degree Program</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel Launches First Artificial Intelligence Associate Degree Program</title>
		<link>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/</link>
					<comments>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 29 Jun 2020 07:53:30 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[program]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9836</guid>

					<description><![CDATA[<p>Source: indiaeducationdiary.in Intel is partnering with Maricopa County Community College District (MCCCD) to launch the first Intel-designed artificial intelligence (AI) associate degree program in the United States. <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/">Intel Launches First Artificial Intelligence Associate Degree Program</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: indiaeducationdiary.in</p>



<p>Intel is partnering with Maricopa County Community College District (MCCCD) to launch the first Intel-designed artificial intelligence (AI) associate degree program in the United States. The Arizona Commerce Authority will also provide a workforce grant of $100,000 to support the program. It will enable tens of thousands of students to land careers in high-tech, healthcare, automotive, industrial and aerospace fields.</p>



<p><strong>Whom It Helps:</strong> Based in Tempe, Arizona, MCCCD is the largest community college district in the U.S. with an estimated enrollment of more than 100,000 students across 10 campuses and 10,000 faculty and staff members.</p>



<p><strong>How It Helps: </strong>The AI program consists of courses that have been developed by MCCCD’s faculty and Intel leaders based on Intel software and tools such as the Intel® Distribution of OpenVINO™ Toolkit and Intel Python. Intel will also contribute technical advice, faculty training, summer internships and Intel mentors for both students and faculty members. Students will learn fundamental skills such as data collection, AI model training, coding and exploration of AI technology’s societal impact. The program includes a social impact AI project that is developed with guidance from teachers and Intel mentors. Upon completion, MCCCD will offer an associate degree in artificial intelligence that can be transferred to a four-year college.</p>



<p><strong>Why It’s Important:</strong> AI technology is rapidly accelerating, with new tools, technology and applications requiring workers to learn new skills. Recent studies show the demand for artificial intelligence skills is expected to grow exponentially. A 2020 LinkedIn report notes that AI skills are among the top five most in-demand hard skills. Research by the MCCCD Workforce and Economic Development Office estimates a 22.4 percent increase in these roles by 2029.</p>



<p>As of early June 2020, more than 43 million Americans have filed for unemployment benefits. Furthermore, a recent McKinsey study estimates that over 57 million jobs are vulnerable, meaning they are subject to furloughs, layoffs or being rendered unproductive. It is critical for educational institutions and corporations to collaborate to prepare for future workforce demands.</p>



<p><strong>About AI Program Launch Details:</strong>&nbsp;The program’s first phase will be piloted online at Estrella Mountain Community College and Chandler Gilbert Community College in fall 2020.&nbsp;As physical distancing requirements are lifted and the concerns of the COVID-19 pandemic decrease, classes will begin in-person at both campuses.</p>



<p><strong>More Context: </strong>This expands on the Intel® AI for Youth program, which provides AI curriculum and resources to over 100,000 high school and vocational students in nine countries and will continue to scale globally. (Read, “AI for Youth Uses Intel Technology to Solve Real-World Problems.”) Additionally, Intel recently collaborated with Udacity to create the Intel Edge AI for IoT Developers Nanodegree Program aimed at training 1 million developers. Intel has a commitment to expand digital readiness to reach 30 million people in 30,000 institutions in 30 countries. This builds on the company’s recently announced 2030 goals and Global Impact Challenges that reinforce its commitment to making technology fully inclusive and expand digital readiness.</p>



<p>Intel’s corporate responsibility and positive global impact work is embedded in its purpose to create world-changing technology that enriches the lives of every person on Earth. By leveraging its position in the technology ecosystem, Intel can help customers and partners achieve their own aspirations and accelerate progress on key topics across the technology industry.</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/">Intel Launches First Artificial Intelligence Associate Degree Program</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-launches-first-artificial-intelligence-associate-degree-program/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel and National Science Foundation Invest in Wireless-Specific Machine Learning Edge Research</title>
		<link>https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/</link>
					<comments>https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 29 Jun 2020 06:12:56 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Foundation Invest]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[National]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Wireless-Specific]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9818</guid>

					<description><![CDATA[<p>Source: indiaeducationdiary.in Today, Intel and the National Science Foundation (NSF) announced award recipients of joint funding for research into the development of future wireless systems. The Machine <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/">Intel and National Science Foundation Invest in Wireless-Specific Machine Learning Edge Research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: indiaeducationdiary.in</p>



<p>Today, Intel and the National Science Foundation (NSF) announced award recipients of joint funding for research into the development of future wireless systems. The Machine Learning for Wireless Networking Systems (MLWiNS) program is the latest in a series of joint efforts between the two partners to support research that accelerates innovation, with a focus on enabling ultra-dense wireless systems and architectures that meet the throughput, latency and reliability requirements of future applications. In parallel, the program will target research on distributed machine learning computations over wireless edge networks to enable a broad range of new applications.</p>



<p>“Since 2015, Intel and NSF have collectively contributed more than $30 million to support science and engineering research in emerging areas of technology. MLWiNS is the next step in this collaboration and has the promise to enable future wireless systems that serve the world’s rising demand for pervasive, intelligent devices.”<br>– Gabriela Cruz Thompson, director of university research and collaborations at Intel Labs</p>



<p>Why It’s Important: As demand for advanced connected services and devices grows, future wireless networks will need to meet the challenging density, latency, throughput and security requirements these applications will impose. Machine learning shows great potential to manage the size and complexity of such networks – addressing the demand for capacity and coverage while maintaining the stringent and diverse quality of service expected by network users. At the same time, sophisticated networks and devices create an opportunity for machine learning services and computation to be deployed closer to where the data is generated, which alleviates the bandwidth, privacy, latency and scalability concerns of moving data to the cloud.</p>



<p>“5G and Beyond networks need to support throughput, density and latency requirements that are orders of magnitude higher than what current wireless networks can support, and they also need to be secure and energy-efficient,” said Margaret Martonosi, assistant director for computer and information science and engineering at NSF. “The MLWiNS program was designed to stimulate novel machine learning research that can help meet these requirements – the awards announced today seek to apply innovative machine learning techniques to future wireless network designs to enable such advances and capabilities.”</p>



<p>What Will Be Researched: Through MLWiNS, Intel and NSF will fund research with the goal of driving new wireless system and architecture design, increasing the utilization of sparse spectrum resources and enhancing distributed machine learning computation over wireless edge networks. Grant winners will conduct research across multiple areas of machine learning and wireless networking. Key focus areas and project examples include:</p>



<p>Reinforcement learning for wireless networks: Research teams from the University of Virginia and Penn State University will study reinforcement learning for optimizing wireless network operation, focusing on tackling convergence issues, leveraging knowledge-transfer methods to reduce the amount of training data necessary, and bridging the gap between model-based and model-free reinforcement learning through an episodic approach.</p>



<p>Federated learning for edge computing:</p>



<p>Researchers from the University of North Carolina at Charlotte will explore methods to speed up multi-hop federated learning over wireless communications, allowing multiple groups of devices to collaboratively train a shared global model while keeping their data local and private. Unlike classical federated learning systems that utilize single-hop wireless communications, multi-hop system updates need to go through multiple noisy and interference-rich wireless links, which can result in slower updates. Researchers aim to overcome this challenge by developing a novel wireless multi-hop federated learning system with guaranteed stability, high accuracy and a fast convergence speed by systematically addressing the challenges of communication latency, and system and data heterogeneity.</p>
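


<p>For context, the classical single-hop federated learning loop that this project generalizes can be written in a few lines. The sketch below is a minimal FedAvg-style illustration under assumed model and data-loader objects; it is not the multi-hop system the researchers are building.</p>



<pre class="wp-block-code"><code>import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, lr=0.01):
    """One FedAvg-style round: each client trains a private copy on
    local data, then the server averages parameters. Raw data never
    leaves the client; only model weights travel."""
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:                       # one local epoch
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    # Server step: average each parameter tensor across clients.
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model</code></pre>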



<p>Researchers from the Georgia Institute of Technology will analyze and design federated and collaborative machine-learning training and inference schemes for edge computing, with the goal of increasing efficiency over wireless networks. The team will address challenges with real-time deep learning at the edge, including limited and dynamic wireless channel bandwidth, unevenly distributed data across edge devices and on-device resource constraints.</p>



<p>Research from the University of Southern California and the University of California, Berkeley will focus on a coding-centric approach to enhance federated learning over wireless communications. Specifically, researchers will work to tackle the challenges of dealing with non-independent and identically distributed data, and heterogeneous resources at the wireless edge, and minimizing upload bandwidth costs from users, while emphasizing issues of privacy and security when learning from distributed data.</p>



<p>Distributed training across multiple edge devices: Rice University researchers will work to train large-scale centralized neural networks by separating them into a set of independent sub-networks that can be trained on different devices at the edge. This can reduce training time and complexity, while limiting the impact on model accuracy.</p>



<p>Leveraging information theory and machine learning to improve wireless network performance: Research teams from the Massachusetts Institute of Technology and Virginia Polytechnic Institute and State University will collaborate to explore the use of deep neural networks to address physical layer problems of a wireless network. They will exploit information theoretic tools in order to develop new algorithms that can better address non-linear distortions and relax simplifying assumptions on the noise and impairments encountered in wireless networks.</p>



<p>Deep learning from radio frequency signatures: Researchers at Oregon State University will investigate cross-layer techniques that leverage the combined capabilities of transceiver hardware, wireless radio frequency (RF) domain knowledge and deep learning to enable efficient wireless device classification. Specifically, the focus will be on exploiting RF signal knowledge and transceiver hardware impairments to develop efficient deep learning-based device classification techniques that are scalable with the massive and diverse numbers of emerging wireless devices, robust against device signature cloning and replication, and agnostic to environment and system distortions.</p>



<p>About Award Winners and Project Descriptions: A full list of award winners and project descriptions can be found in “Intel and National Science Foundation Announce Future Wireless Systems Research Award Recipients.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/">Intel and National Science Foundation Invest in Wireless-Specific Machine Learning Edge Research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-and-national-science-foundation-invest-in-wireless-specific-machine-learning-edge-research/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel’s Sample Factory speeds up reinforcement learning training on a single PC</title>
		<link>https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/</link>
					<comments>https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Jun 2020 07:52:09 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[researchers]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9781</guid>

					<description><![CDATA[<p>Source: venturebeat.com In a preprint paper this week published on Arxiv.org, researchers at Intel describe Sample Factory, a system that achieves high throughput — higher than 105 environment frames per <a class="read-more-link" href="https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/">Intel’s Sample Factory speeds up reinforcement learning training on a single PC</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: venturebeat.com</p>



<p>In a preprint paper published on Arxiv.org this week, researchers at Intel describe Sample Factory, a system that achieves high throughput — higher than 10<sup>5</sup> environment frames per second — in reinforcement learning experiments. In contrast to the distributed servers and hardware setups those experiments typically require, Sample Factory is optimized for single-machine settings, enabling researchers to achieve what the coauthors claim are “unprecedented” results in AI training for video games, robotics, and other domains.</p>



<p>Training AI software agents in simulation is the cornerstone of contemporary reinforcement learning research. But despite improvements in the sample efficiency of leading methods, most remain notoriously data- and computation-hungry. Performance gains have come in large part from the increased scale of experiments. Billion-scale experiments with complex environments are now relatively commonplace, and the most advanced efforts have agents take trillions of actions in a single session.</p>



<p>Sample Factory targets efficiency with an algorithm called asynchronous proximal policy optimization, which aggressively parallelizes agent training and achieves throughput as high as 130,000 FPS (which here indicates environment frames per second) on a single-GPU commodity PC. It minimizes the idle time for all computations by associating each workload with one of three types of components: rollout workers, policy workers, and learners. These components communicate with each other using a fast queuing protocol and shared hardware memory. The queuing provides the basis for continuous and asynchronous execution, where the next computation step can be started immediately as long as there is something in the queue to process.</p>
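


<p>The decoupling described above can be pictured with a toy version of the three component types exchanging messages over queues. The sketch below uses Python multiprocessing with stub environments and a stub policy; only the component names and the queue-driven structure mirror the paper.</p>



<pre class="wp-block-code"><code>import multiprocessing as mp
import time

def rollout_worker(obs_q, act_q, traj_q):
    """Steps a stub environment; never blocks on learning."""
    obs = 0
    while True:
        obs_q.put(obs)                  # request an action
        action = act_q.get()            # reply from the policy worker
        obs = obs + action              # stub environment transition
        traj_q.put((obs, action))       # ship experience to the learner

def policy_worker(obs_q, act_q):
    """Would batch observations and run GPU inference; stubbed here."""
    while True:
        _ = obs_q.get()
        act_q.put(1)                    # constant stub action

def learner(traj_q):
    """Consumes experience as it arrives; updates would happen here."""
    while True:
        _ = traj_q.get()                # asynchronous, queue-driven

if __name__ == "__main__":
    obs_q, act_q, traj_q = mp.Queue(), mp.Queue(), mp.Queue()
    for target, args in ((rollout_worker, (obs_q, act_q, traj_q)),
                         (policy_worker, (obs_q, act_q)),
                         (learner, (traj_q,))):
        mp.Process(target=target, args=args, daemon=True).start()
    time.sleep(1.0)                     # let the pipeline run briefly</code></pre>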



<p>To be clear, Sample Factory doesn’t enable experiments that couldn’t be performed before. But it accelerates them so that they’re more practical on single-PC setups than before. At full throttle, even with multi-agent environments and large populations of agents, Sample Factory can generate and consume more than 1GB of data per second. A typical update to a model takes less than 1 millisecond.</p>



<p>In experiments on two PCs — one with a 10-core CPU and a GTX 1080 Ti GPU and a second with a server-class 36-core CPU and a single RTX 2080 Ti — the researchers evaluated Sample Factory’s performance on three simulators: Atari, VizDoom (a Doom-like game used for AI research), and DeepMind Lab (a Quake III-like environment). They report that the system outperformed the baseline methods in most of the training scenarios after between 700 and 2,000 environments, reaching at least 10,000 frames per second.</p>



<p>In one test, the researchers used Sample Factory to train an agent to solve a set of 30 environments simultaneously. In another, they trained eight agents in “duel” and “deathmatch” scenarios within VizDoom, after which the agents beat the in-game bots on the highest difficulty in 100% of matches. And in a third, they had eight agents battle against each other to accumulate 18 years of simulated experience, which enabled those agents to defeat scripted bots 78 times out of 100.</p>



<p>“We aim to democratize deep [reinforcement learning] and make it possible to train whole populations of agents on billions of environment transitions using widely available commodity hardware,” the coauthors wrote. “We believe this is an important area of research, as it can benefit any project that leverages model-free [reinforcement learning]. With our system architecture, researchers can iterate on their ideas faster, thus accelerating progress in the field.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/">Intel’s Sample Factory speeds up reinforcement learning training on a single PC</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intels-sample-factory-speeds-up-reinforcement-learning-training-on-a-single-pc/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel Powers Memory-Optimized Azure Virtual Machines (VMs) Featuring Deep Learning Boost Technology</title>
		<link>https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/</link>
					<comments>https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 23 Jun 2020 07:13:22 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Virtual Machines]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9710</guid>

					<description><![CDATA[<p>Source: aithority.com Intel’s Deep Learning Boost Technology (Intel DL Boost) is now the central feature of new general purpose and memory-optimized Azure Virtual Machines (VM). Microsoft has <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/">Intel Powers Memory-Optimized Azure Virtual Machines (VMs) Featuring Deep Learning Boost Technology</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: aithority.com</p>



<p>Intel’s Deep Learning Boost Technology (Intel DL Boost) is now the central feature of new general purpose and memory-optimized Azure Virtual Machines (VMs). Microsoft has announced that the new Azure VMs are based on the second-generation Intel Xeon Platinum 8272CL (Cascade Lake) processor, capable of running at 2.5 GHz with an all-core turbo frequency of 3.4 GHz. Powered by Intel DL Boost, the Azure VMs also feature other top-end technologies, including Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® Turbo Boost Technology 2.0, and Intel® Hyper-Threading Technology.</p>



<p>For users working with the v3 VMs, switching to v4 sizes will deliver a better price-per-core performance option.</p>



<p><strong>What is Intel Deep Learning Boost (DL Boost) Technology?</strong></p>



<p>Intel DL Boost Technology is a scalable embedded AI performance enhancer for complex IT workloads. It is built into Intel Xeon Scalable processors to extend inference performance for deep learning workloads via the Vector Neural Network Instructions (VNNI) extension to AVX-512. These AVX-512 VNNI instructions are used in new-age AI applications such as voice/speech recognition, object classification, language translation, image processing, and much more.</p>
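


<p>The key VNNI instruction, vpdpbusd, fuses the multiply-widen-accumulate sequence of an INT8 dot product into a single operation. The following scalar model shows the per-lane arithmetic of one such fused step; it is an illustration only (the real instruction operates on many lanes at once, and a separate saturating variant exists).</p>



<pre class="wp-block-code"><code>import numpy as np

def vpdpbusd_lane(acc, a_u8, b_s8):
    """Scalar model of one 32-bit lane of AVX-512 VNNI's vpdpbusd:
    four unsigned 8-bit activations times four signed 8-bit weights,
    summed and added into a 32-bit accumulator in one instruction."""
    total = sum(int(a) * int(b) for a, b in zip(a_u8, b_s8))
    return np.int32(acc + total)

acts = np.array([200, 17, 3, 250], dtype=np.uint8)    # activations
wts = np.array([-5, 9, -1, 2], dtype=np.int8)         # weights
print(vpdpbusd_lane(0, acts, wts))                    # fused INT8 dot step</code></pre>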



<p>With Intel’s DL Boost Technology at its core, Azure is rolling out a totally new line of VM families –</p>



<ul class="wp-block-list"><li>Azure Ddv4 and Ddsv4 and Edv4 and Edsv4; (general availability)</li><li>Azure Dv4 and Dsv4 and Ev4 and Esv4 (only preview).</li></ul>



<p>These VMs rely on remote disks rather than temporary local storage, and deliver up to 20% better CPU performance compared to previous Azure VM families.</p>



<p><strong>Specific Features of Azure VMs Running on Intel DL Boost</strong></p>



<p>The new Ddv4 and Ddsv4, and Edv4 and Edsv4 series have much larger local SSD storage, designed to amplify the benefits of low-latency temporary storage. High-speed local storage enables IT teams to manage caches and temporary files better and faster.</p>



<p>The previous generation of Azure VMs includes the AV-series, B-series, DCv2-series and so on. These were deployed to support various operations in CPU and GPU configuration, data management and development, in addition to enhancing the value proposition of various general-usage workloads.</p>



<p>Apart from greater local storage, these VMs are also capable of offering better local disk IOPS for both read and write operations.</p>



<p>New VMs from Azure powered by the Intel DL Boost technology can balance memory-to-CPU performance, scaling up to 64 vCPUs and 2,400 GiB of local storage. According to Microsoft Azure, these VMs are ideal for development and testing, small to medium databases, and low-to-medium traffic web servers.</p>



<p>On the other hand, the Edv4 series includes up to 504 GiB of RAM and local SSD storage (up to 2,400 GiB), meant for the operational management of relational databases and in-memory analytics.</p>



<p>At the time of this announcement, Intel’s Jason Grebe, CVP of Cloud and Enterprise, explained the deeper nuances of working with Azure VMs. Grebe said, “The launch of Azure D-v4 and E-v4-series virtual machines further extends the Microsoft IaaS portfolio to meet the diverse needs of our customers. Powered by 2nd Generation Intel® Xeon Scalable Processors, these virtual machines offer optimized application performance for web and data services, desktop virtualization and business applications moving to Azure.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/">Intel Powers Memory-Optimized Azure Virtual Machines (VMs) Featuring Deep Learning Boost Technology</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-powers-memory-optimized-azure-virtual-machines-vms-featuring-deep-learning-boost-technology/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Millions of IoT devices at hacking risk globally: Report</title>
		<link>https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/</link>
					<comments>https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Jun 2020 07:21:03 +0000</pubDate>
				<category><![CDATA[Internet of things]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Internet of Things]]></category>
		<category><![CDATA[IoT devices]]></category>
		<category><![CDATA[JSOF]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9622</guid>

					<description><![CDATA[<p>Source: telecom.economictimes.indiatimes.com San Francisco: Security researchers have discovered serious vulnerabilities that could expose millions of Internet of Things (IoT) devices worldwide to hackers. The list of affected vendors includes HP, Schneider <a class="read-more-link" href="https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/">Millions of IoT devices at hacking risk globally: Report</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: telecom.economictimes.indiatimes.com</p>



<p>San Francisco: Security researchers have discovered serious vulnerabilities that could expose millions of Internet of Things (IoT) devices worldwide to hackers.</p>



<p>The list of affected vendors includes HP, Schneider Electric, Intel, Rockwell Automation, Caterpillar and Baxter.</p>



<p>According to JSOF, a boutique cybersecurity organization, the vulnerabilities dubbed &#8216;Ripple20&#8217; relate to the Treck TCP/IP stack, a TCP/IP protocol suite designed for embedded systems.</p>



<p>The vulnerabilities affect hundreds of millions of IoT devices and could potentially allow nefarious actors, including nation-states, to remotely take over these devices, the organization said in a statement late Tuesday.</p>



<p>JSOF said it discovered the Treck vulnerability while doing a security analysis of a single device last fall and found that its TCP-IP stack contained hackable vulnerabilities.</p>



<p>The firm soon realised that the code wasn&#8217;t written by the device&#8217;s manufacturer, but rather came from Treck; that meant the bugs weren&#8217;t in a single device but everywhere, underscoring how widely IoT flaws can propagate.</p>



<p>The risks inherent in this situation are high.</p>



<p>&#8220;Data could be stolen off of a printer, an infusion pump&#8217;s behaviour changed, or industrial control devices could be made to malfunction.</p>



<p>&#8220;An attacker could hide malicious code within embedded devices for years. One of the vulnerabilities could enable entry from outside into the network boundaries; and this is only a small taste of the potential risks,&#8221; the researchers explained.</p>



<p>JSOF said it has contacted every vendor of affected devices, and many of the companies have released software updates.</p>



<p>JSOF has been working with several organizations to coordinate the disclosure of the flaws.</p>
<p>The post <a href="https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/">Millions of IoT devices at hacking risk globally: Report</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/millions-of-iot-devices-at-hacking-risk-globally-report/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Intel, Microsoft Use Deep Learning To Detect Malware</title>
		<link>https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/</link>
					<comments>https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 02 Jun 2020 06:53:12 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[malware detection]]></category>
		<category><![CDATA[Microsoft]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9204</guid>

					<description><![CDATA[<p>Source: rtinsights.com Microsoft and Intel are collaborating on a research project that aims to detect malware threats through the application of deep learning techniques. The project, which <a class="read-more-link" href="https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/">Intel, Microsoft Use Deep Learning To Detect Malware</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: rtinsights.com</p>



<p>Microsoft and Intel are collaborating on a research project that aims to detect malware threats through the application of deep learning techniques.</p>



<p>The project, which has been ongoing for several months, published its first paper earlier this month. In it, the researchers demonstrated a technique that converts malware binaries into grayscale images, which are then scanned by an image pattern-recognition algorithm.</p>



<p>That algorithm, called STAMINA (STAtic Malware-as-Image Network Analysis), classifies whether a file is clean or infected. In tests, STAMINA achieved an accuracy of 99.07 percent with a 2.58 percent false-positive rate.</p>
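


<p>The conversion step is simple enough to sketch. The snippet below is a minimal Python reading of the technique, assuming one byte per pixel and a fixed image width of 256; the helper name and both parameters are illustrative, not details taken from the paper.</p>



<pre class="wp-block-code"><code># Minimal sketch: render a binary's bytes as a grayscale image.
# Assumed details: one byte per pixel, fixed width of 256 pixels.
import math
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = math.ceil(len(data) / width)   # rows needed to hold all bytes
    pixels = np.zeros(width * height, dtype=np.uint8)
    pixels[:len(data)] = data               # each byte becomes one pixel
    return Image.fromarray(pixels.reshape(height, width), mode="L")</code></pre>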



<p>“The results certainly encourage the use of deep transfer learning for the purpose of malware classification,” said Microsoft researchers Jugal Parikh and Marc Marino.</p>
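


<p>In practice, deep transfer learning here means fine-tuning a network pretrained on ordinary images to the malware-image task. The sketch below uses PyTorch with ResNet-18 as an assumed stand-in; the paper&#8217;s actual architecture and preprocessing may differ.</p>



<pre class="wp-block-code"><code># Sketch: adapt a pretrained CNN for two-class malware images.
# ResNet-18 is an assumed stand-in, not the published STAMINA network.
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # clean vs. infected

# Replicate the single grayscale channel to three channels so the
# RGB-pretrained weights can be reused unchanged.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])</code></pre>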



<p>STAMINA&#8217;s one major drawback is its inefficiency with larger files. To save time and avoid overloading the algorithm, files are compressed into JPEG format, which can be ineffective for larger and more detailed images.</p>



<p>“STAMINA becomes less effective due to limitations in converting billions of pixels into JPEG images and then resizing them,” said Microsoft in a blog post.</p>



<p>That does not make it useless, however, as most malware files are not large. If a file is large, the system can hand it off to a metadata-based model, which the researchers say is a better fit for such files; a rough sketch of that routing follows.</p>
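


<p>As an illustration of that hybrid routing, the sketch below sends large files to a metadata-based model and everything else through the image pipeline; the size threshold and both predict functions are placeholders, not figures from the article.</p>



<pre class="wp-block-code"><code># Hypothetical routing for the hybrid approach: large files skip the
# costly pixel conversion. Threshold and both models are placeholders.
import os

LARGE_FILE_BYTES = 5 * 1024 * 1024      # assumed cutoff, not from the article

def predict_from_metadata(path):        # placeholder metadata-based model
    return "clean"

def predict_from_image(image):          # placeholder image classifier
    return "clean"

def classify_file(path):
    if os.path.getsize(path) > LARGE_FILE_BYTES:
        return predict_from_metadata(path)
    # binary_to_grayscale is the helper from the earlier sketch
    return predict_from_image(binary_to_grayscale(path))</code></pre>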



<p>Intel and Microsoft said they will continue to evaluate different deep learning models for malware detection, starting with a hybrid model trained on larger datasets.</p>
<p>The post <a href="https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/">Intel, Microsoft Use Deep Learning To Detect Malware</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/intel-microsoft-use-deep-learning-to-detect-malware/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft Azure, Intel Keep Cloud Data Confidential</title>
		<link>https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/</link>
					<comments>https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 28 Apr 2020 09:08:52 +0000</pubDate>
				<category><![CDATA[Microsoft Azure Machine Learning]]></category>
		<category><![CDATA[cloud data]]></category>
		<category><![CDATA[Intel]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8392</guid>

					<description><![CDATA[<p>Source: sdxcentral.com Microsoft today made available Azure confidential computing built on Intel hardware for enterprise cloud customers. It follows a similar IBM Cloud move last week. The <a class="read-more-link" href="https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/">Microsoft Azure, Intel Keep Cloud Data Confidential</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: sdxcentral.com</p>



<p>Microsoft today made available Azure confidential computing built on Intel hardware for enterprise cloud customers.</p>



<p>It follows a similar IBM Cloud move last week.</p>



<p>The new Azure DCsv2-Series allows companies to process data in the cloud in hardware-based secure enclaves called trusted execution environments (TEEs). Intel calls its TEE technology Software Guard Extensions (SGX). This hardware-based technology isolates specific application code and data in private regions of memory, protecting them from disclosure or modification even at the OS and hypervisor level.</p>



<p>Encrypting data while it’s being processed in memory “helps to isolate the data from other applications or tenants, the service provider, rogue administrators, and even from malicious code with root privileges,” wrote Jason Grebe, VP and GM of Intel’s Cloud and Enterprise Solutions Group, in a blog post.</p>



<h3 class="wp-block-heading">Confidential Computing Heats Up</h3>



<p>Both Intel and Microsoft are also founding members of the Confidential Computing Consortium. The Linux Foundation formed the open source group last August, and at its launch Intel contributed its SGX software development kit (SDK) to the project. Meanwhile, Microsoft contributed Open Enclave SDK, which is an open source framework that allows developers to build TEE applications using a single enclaving abstraction.</p>



<p>The two companies have been working on Azure confidential computing for several years, and a little over two years ago they rolled out the first public preview of the service. Microsoft claims Azure was the first public cloud to encrypt data while in use, and its engineers helped design the SGX technology used in Intel’s Xeon chips.</p>



<p>At Intel’s Security Day event in February, Senior Director of Microsoft Azure Security Scott Woodgate joined Intel executives on stage to discuss new use cases that confidential computing enables. These include multi-party or federated machine learning. In a later interview at the RSA Conference, Woodgate said several Microsoft customers use multi-party machine learning to detect banking fraud and money laundering.</p>
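


<p>Multi-party machine learning of this kind keeps each party&#8217;s raw data local and shares only model updates, with confidential computing available to protect the aggregation step. Below is a minimal numpy sketch of federated averaging as a generic illustration of the pattern; every name and number in it is assumed rather than taken from Microsoft&#8217;s systems.</p>



<pre class="wp-block-code"><code># Generic federated-averaging sketch (assumed names, synthetic data):
# each party trains locally; only weight vectors are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(X @ w)))   # logistic regression
        w -= lr * X.T @ (preds - y) / len(y)     # gradient step
    return w

def federated_average(global_w, parties):
    updates = [local_update(global_w, X, y) for X, y in parties]
    return np.mean(updates, axis=0)              # aggregate weights only

# Demo with three synthetic "banks" that never pool their raw data.
rng = np.random.default_rng(0)
parties = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_average(w, parties)</code></pre>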



<p>IBM is also working on confidential computing use cases with its banking and health care customers, said Nataraj Nagaratnam, CTO and director of cloud security for IBM’s Cloud and Cognitive Software business unit.</p>



<p>That cloud provider last week announced that IBM Cloud Data Shield now supports containerized applications on IBM Cloud Kubernetes and Red Hat OpenShift using Intel SGX hardware and Fortanix encryption technology.</p>
<p>The post <a href="https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/">Microsoft Azure, Intel Keep Cloud Data Confidential</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microsoft-azure-intel-keep-cloud-data-confidential/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
