<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Laboratory Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/laboratory/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/laboratory/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 22 Oct 2019 07:59:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Deep learning with point clouds</title>
		<link>https://www.aiuniverse.xyz/deep-learning-with-point-clouds/</link>
					<comments>https://www.aiuniverse.xyz/deep-learning-with-point-clouds/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 22 Oct 2019 07:59:40 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Clouds]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Laboratory]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Robotics]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4790</guid>

					<description><![CDATA[<p>Source: news.mit.edu If you’ve ever seen a self-driving car in the wild, you might wonder about that spinning cylinder on top of it.&#160; It’s a “lidar sensor,” <a class="read-more-link" href="https://www.aiuniverse.xyz/deep-learning-with-point-clouds/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-with-point-clouds/">Deep learning with point clouds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: news.mit.edu</p>



<p>If you’ve ever seen a self-driving car in the wild, you might wonder about that spinning cylinder on top of it.&nbsp;</p>



<p>It’s a “lidar sensor,” and it’s what allows the car to navigate the world. By sending out pulses of infrared light and measuring the time it takes for them to bounce off objects, the sensor creates a “point cloud” that builds a 3D snapshot of the car’s surroundings.&nbsp;</p>
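<p>The time-of-flight principle described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's firmware: the helper name and beam-angle parameters are made up for the example, and the range simply follows from halving the round-trip distance of light.</p>

```python
# Hypothetical sketch: turning one lidar echo into a 3D point.
# A pulse's round-trip time gives range; the beam's azimuth and
# elevation angles place the point in the sensor's coordinate frame.
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_rad, elevation_rad):
    """Convert a time-of-flight echo into an (x, y, z) point."""
    r = C * round_trip_s / 2.0  # halve it: the pulse travels out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# An echo arriving after roughly 66.7 nanoseconds corresponds to a
# surface about 10 meters away along the beam.
x, y, z = lidar_point(66.7e-9, azimuth_rad=0.0, elevation_rad=0.0)
```

<p>Sweeping such a beam across thousands of angles per rotation is what produces the dense "point cloud" the article describes.</p>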



<p>Making sense of raw point-cloud data is difficult; before the age of machine learning, it required highly trained engineers to painstakingly specify, by hand, which qualities they wanted to capture. But in a new series of papers out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers show that they can use deep learning to automatically process point clouds for a wide range of 3D-imaging applications.</p>



<p>“In computer vision and machine learning today, 90 percent of the advances deal only with two-dimensional images,” says MIT Professor Justin Solomon, who was senior author of the new series of papers spearheaded by PhD student Yue Wang. “Our work aims to address a fundamental need to better represent the 3D world, with application not just in autonomous driving, but any field that requires understanding 3D shapes.”&nbsp;</p>



<p>Most previous approaches haven’t been especially successful at capturing the patterns from data that are needed to get meaningful information out of a bunch of 3D points in space. But in one of the team’s papers, they showed that their “EdgeConv” method of analyzing point clouds using a type of neural network called a dynamic graph convolutional neural network allowed them to classify and segment individual objects.&nbsp;</p>
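<p>The core EdgeConv idea can be sketched with plain numpy. This is not the authors' code, and it makes simplifying assumptions: neighbors are found by Euclidean distance, and a single random linear map stands in for the learned MLP. What it does show is the operation the paper names: build a graph of each point's neighbors, form edge features from the point and its neighbor offsets, apply a shared map, and max-pool over each point's edges.</p>

```python
# Minimal sketch of one EdgeConv step on a small point cloud.
# Assumptions (not from the paper's code): k-NN by Euclidean distance,
# one random linear layer in place of a trained MLP.
import numpy as np

rng = np.random.default_rng(0)

def edge_conv(points, k, weight):
    """points: (n, d) cloud; weight: (2d, m) shared linear map."""
    # Pairwise squared distances -> k nearest neighbors per point
    # (the diagonal is set to infinity to exclude each point itself).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]                  # (n, k)
    # Edge features: each point concatenated with its neighbor offsets.
    center = np.repeat(points[:, None, :], k, axis=1)     # (n, k, d)
    offset = points[nbrs] - center                        # (n, k, d)
    edges = np.concatenate([center, offset], axis=-1)     # (n, k, 2d)
    # Shared map with ReLU, then max-aggregation over the k edges.
    return np.maximum(edges @ weight, 0).max(axis=1)      # (n, m)

cloud = rng.normal(size=(32, 3))
w = rng.normal(size=(6, 16))
features = edge_conv(cloud, k=5, weight=w)  # one 16-dim feature per point
```

<p>Stacking such layers, with the graph rebuilt in feature space at each stage, is what makes the network "dynamic" in the paper's sense.</p>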



<p>“By building ‘graphs’ of neighboring points, the algorithm can capture hierarchical patterns and therefore infer multiple types of generic information that can be used by a myriad of downstream tasks,” says Wadim Kehl, a machine learning scientist at Toyota Research Institute who was not involved in the work.&nbsp;</p>



<p>In addition to developing EdgeConv, the team also explored other specific aspects of point-cloud processing. For example, one challenge is that most sensors change perspectives as they move around the 3D world; every time we take a new scan of the same object, its position may be different from the last time we saw it. To merge multiple point clouds into a single detailed view of the world, you need to align their 3D points in a process called “registration.”&nbsp;</p>



<p>Registration is vital for many forms of imaging, from satellite data to medical procedures. For example, when a doctor has to take multiple magnetic resonance imaging scans of a patient over time, registration is what makes it possible to align the scans to see what’s changed.&nbsp;</p>



<p>“Registration is what allows us to integrate 3D data from different sources into a common coordinate system,” says Wang. “Without it, we wouldn’t actually be able to get as meaningful information from all these methods that have been developed.”</p>
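<p>To make the common-coordinate-system idea concrete, here is the classical closed-form rigid alignment (the Kabsch/Procrustes solution) that registration methods build on. This is a sketch, not DCP itself: it assumes point correspondences are already known, which is precisely the hard part that learned methods like DCP estimate.</p>

```python
# Classical closed-form rigid alignment via SVD, assuming known
# correspondences between the two clouds (row i of src matches row i
# of dst). Learned registration methods estimate those matches first.
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t so that R @ src_i + t ~= dst_i."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation and translation from two copies of a cloud.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

<p>With noiseless data and exact correspondences the transform is recovered exactly; real scans have neither, which is why finding good correspondences is the crux of the problem.</p>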



<p>Solomon and Wang’s second paper demonstrates a new registration algorithm called “Deep Closest Point” (DCP) that was shown to better find a point cloud’s distinguishing patterns, points, and edges (known as “local features”) in order to align it with other point clouds. This is especially important for such tasks as enabling self-driving cars to situate themselves in a scene (“localization”), as well as for robotic hands to locate and grasp individual objects.</p>



<p>One limitation of DCP is that it assumes we can see an entire shape instead of just one side. This means it can’t handle the more difficult task of aligning partial views of shapes (known as “partial-to-partial registration”). As a result, in a third paper the researchers presented an improved algorithm for this task that they call the Partial Registration Network (PRNet).&nbsp;</p>



<p>Solomon says that existing 3D data tends to be “quite messy and unstructured compared to 2D images and photographs.” His team sought to figure out how to get meaningful information out of all that disorganized 3D data without the controlled environment that a lot of machine learning technologies now require.</p>



<p>A key observation behind the success of DCP and PRNet is the idea that a critical aspect of point-cloud processing is context. The geometric features on point cloud A that suggest the best ways to align it to point cloud B may be different from the features needed to align it to point cloud C. For example, in partial registration, an interesting part of a shape in one point cloud may not be visible in the other — making it useless for registration.</p>



<p>Wang says that the team’s tools have already been deployed by many researchers in the computer vision community and beyond. Even physicists are using them for an application the CSAIL team had never considered: particle physics. </p>



<p>Moving forward, the researchers hope to use the algorithms on real-world data, including data gathered from self-driving cars. Wang says they also plan to explore the potential of training their systems using self-supervised learning, to minimize the amount of human annotation needed.</p>



<p>Solomon and Wang were the two sole authors of the DCP and PRNet papers. Their co-authors on the EdgeConv paper were research assistant Yongbin Sun and Professor Sanjay Sarma of MIT, alongside postdoc Ziwei Liu of the University of California at Berkeley and Professor Michael M. Bronstein of Imperial College London.&nbsp;</p>



<p>The projects were supported, in part, by the U.S. Air Force, the U.S. Army Research Office, Amazon, Google Research, IBM, the National Science Foundation, the Skoltech-MIT Next Generation Program, and the Toyota Research Institute.</p>
<p>The post <a href="https://www.aiuniverse.xyz/deep-learning-with-point-clouds/">Deep learning with point clouds</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/deep-learning-with-point-clouds/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>KAFB: Air Force Research Laboratory To Rendezvous And Inspect Malfunctioning S5 Satellite</title>
		<link>https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/</link>
					<comments>https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 21 Oct 2019 09:22:21 +0000</pubDate>
				<category><![CDATA[Mycroft]]></category>
		<category><![CDATA[Air Force]]></category>
		<category><![CDATA[Environment]]></category>
		<category><![CDATA[Laboratory]]></category>
		<category><![CDATA[Mycroft satellite]]></category>
		<category><![CDATA[National]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[satellite]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4774</guid>

					<description><![CDATA[<p>Source: ladailypost.com KIRTLAND AIR FORCE BASE&#160;―&#160;The Air Force Research Laboratory will begin maneuvers today, Oct. 20, as the first-ever inspection mission to support real-time on-orbit spacecraft anomaly <a class="read-more-link" href="https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/">KAFB: Air Force Research Laboratory To Rendezvous And Inspect Malfunctioning S5 Satellite</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ladailypost.com</p>



<p>KIRTLAND AIR FORCE BASE&nbsp;―&nbsp;The Air Force Research Laboratory will begin maneuvers today, Oct. 20, in the first-ever inspection mission to support real-time on-orbit spacecraft anomaly resolution operations.&nbsp;</p>



<p>This effort will be a rendezvous between the experimental Mycroft satellite and a second experimental AFRL satellite called the Small Satellite Space Surveillance System, or S5. The S5, launched Feb. 22, 2019, is a small satellite designed to test affordable SmallSat space situational awareness constellation technologies.&nbsp;</p>



<p>AFRL has experienced communication challenges with the S5 satellite and has had no communication with S5 since March 2019. Operators confirm that the spacecraft is alive and maintaining solar power by tracking the sun, but without communications S5 cannot perform its experiments.&nbsp;Mycroft is an AFRL-developed SmallSat launched with the EAGLE satellite April 14, 2018.</p>



<p>Mycroft separated from EAGLE and drifted about 35 kilometers away before transiting carefully back to within a few kilometers of EAGLE. It has performed space situational awareness, or SSA, and satellite inspection experiments over the past 18 months. The Mycroft experiment is aimed at improving autonomous rendezvous and proximity operations, or RPO, SSA, satellite inspection and characterization, and autonomous navigation technologies.&nbsp;</p>



<p style="text-align:left">Mycroft satellite operators will initiate a series of maneuvers to rendezvous with S5 near 6 degrees East longitude at Geosynchronous Orbit to support anomaly resolution efforts. EAGLE will also maneuver into the vicinity of the RPO to observe the inspection from a safe distance. Mycroft will inspect the S5 satellite and provide operators with verification of the fully-deployed solar array and of the sun pointing orientation. Mycroft will then examine the exterior of the S5 spacecraft to search for damaged components such as the solar array and antennas.&nbsp;</p>



<p>The Mycroft-S5 RPO will occur in stages over a period of several weeks, demonstrating the utility of inspection and characterization capabilities in a real-world satellite recovery. AFRL is planning to transition operations to Air Force Space Command later this year. </p>
<p>The post <a href="https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/">KAFB: Air Force Research Laboratory To Rendezvous And Inspect Malfunctioning S5 Satellite</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/kafb-air-force-research-laboratory-to-rendezvous-and-inspect-malfunctioning-s5-satellite/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
