<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Building Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/building/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/building/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 09 Jun 2021 05:37:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Building A Foundational Map Of Humanity Using Machine Learning</title>
		<link>https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/</link>
					<comments>https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 09 Jun 2021 05:37:14 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[Foundational]]></category>
		<category><![CDATA[humanity]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Map]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14113</guid>

					<description><![CDATA[<p>Source &#8211; https://martechseries.com/ Geospatial data and analytics company&#160;Fraym&#160;announced a Series B financing to further scale their AI/ML software for mapping humanity. Fraym is the preeminent global provider <a class="read-more-link" href="https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/">Building A Foundational Map Of Humanity Using Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://martechseries.com/</p>



<p>Geospatial data and analytics company&nbsp;Fraym&nbsp;announced a Series B financing to further scale their AI/ML software for mapping humanity.</p>



<p>Fraym is the preeminent global provider of geospatial data for understanding population dynamics. Dozens of data-driven organizations like Mastercard, the World Bank, the Department of Defense, and USAID rely on Fraym’s foundational data to drive impact and mission success. Over the past 5 years, the company has:</p>



<ul class="wp-block-list"><li>Mapped hundreds of distinct population characteristics covering over 3.2 billion people and 2 billion square kilometers — enough to cover the entire globe nearly 5 times over.</li><li>Informed over&nbsp;$35 billion&nbsp;in programmatic and operational missions, spanning the design, implementation, and monitoring of strategic activities.</li></ul>



<p>“Fraym has executed on ambitious product and customer success goals year after year. This raise will further accelerate our mission of mapping humanity and deliver a future where solutions around the world are built on hyper-local, spatially standardized data,”&nbsp; said Fraym CEO and Co-Founder,&nbsp;Ben Leo.</p>



<p>This $7 million in additional funding, largely from Fraym’s existing investors, will support further capital-efficient development of cutting-edge product and delivery solutions, all of which blaze a new trail in location-based data about people, one that protects individual privacy.</p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/">Building A Foundational Map Of Humanity Using Machine Learning</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-a-foundational-map-of-humanity-using-machine-learning/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building a sonar sensor array with Arduino and Python</title>
		<link>https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/</link>
					<comments>https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 20 Feb 2021 06:10:42 +0000</pubDate>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[Arduino]]></category>
		<category><![CDATA[array]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[sensor]]></category>
		<category><![CDATA[sonar]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12975</guid>

					<description><![CDATA[<p>Source &#8211; https://towardsdatascience.com/ Estimate distance and position of solid objects using multiple low-cost ultrasound sensors. In this article we are going to build from scratch a sonar <a class="read-more-link" href="https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/">Building a sonar sensor array with Arduino and Python</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://towardsdatascience.com/</p>



<p>Estimate distance and position of solid objects using multiple low-cost ultrasound sensors.</p>



<p id="1abc">In this article we are going to build a sonar array from scratch, based on the cheap and popular HC-SR04 sensor. We will use an Arduino microcontroller to drive and read the sensors and to communicate with a host computer over a serial connection. The full working code for the project is available, but I recommend following the steps in the article to understand how it works and to customize it for your needs.</p>



<p id="8dfb">The HC-SR04 is a very popular ultrasound sensor, commonly used in hobby electronics to build cheap distance sensors for obstacle avoidance or object detection. It has an ultrasound transmitter and receiver used to measure the time of flight of an ultrasonic wave bouncing off a solid object.</p>



<p id="af16">The speed of sound is roughly 343 m/s at a room temperature of 20 degrees Celsius, so the distance to an object is the speed of sound multiplied by half the time it takes the ultrasound wave to travel from the transmitter to the receiver:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>distance = 343 × (time / 2)</p></blockquote>
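<p>The formula can be sketched in Python; this is an illustrative host-side helper (the function name is invented for the example), assuming the round-trip time is measured in seconds:</p>

```python
# Convert an HC-SR04 round-trip echo time into a distance estimate.
# Assumes a speed of sound of ~343 m/s, i.e. air at about 20 degrees Celsius.
SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance_m(round_trip_s: float) -> float:
    """Distance is the speed of sound times half the round-trip time."""
    return SPEED_OF_SOUND_M_S * (round_trip_s / 2.0)

# A 2 ms round trip corresponds to an object roughly 0.343 m away.
print(echo_time_to_distance_m(0.002))
```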



<p id="9d4a">However, the HC-SR04 sensor is very inaccurate and will give you a rough, noisy distance estimate. Environmental factors such as temperature and humidity affect the speed of the ultrasonic wave, and the material and angle of incidence of the solid object further degrade the estimate. There are ways to improve the raw readings, as we will see later, but in general ultrasound sensors should be used only as a last resort to avoid a close collision or to detect a solid object at low distance resolution. They are not good navigation or distance-estimation sensors; for those tasks, more expensive sensors such as LiDAR or a laser rangefinder are better suited.</p>



<p id="6e09">I want to use this sonar array to detect nearby obstacles in front of my Raspberry Pi robot Rover4WD (that project will be covered in another article). The sensor’s effective angle of detection is around 15 degrees, so to cover a bigger area in front of the robot I will use 5 sensors in total, arranged in an arc:</p>



<p id="8083">The benefit of this setup is that we can estimate not only the distance of the obstacle in front of the robot but also its (rough) position relative to the robot.</p>



<p id="c6e1">The HC-SR04 sensor has only four pins: two for ground and +5 V, plus the&nbsp;<strong><em>Echo</em></strong>&nbsp;and&nbsp;<strong><em>Trigger</em></strong>&nbsp;pins. To use the sensor we trigger the signal using the&nbsp;<strong><em>Trigger</em></strong>&nbsp;pin and measure the time until it is received via the&nbsp;<strong><em>Echo</em></strong>&nbsp;pin. Since we don’t use the Echo and Trigger pins at the same time, they can share the same cable to connect to an Arduino digital pin.</p>



<p id="d56d">For this project we are going to use an Arduino Nano, which is small and broadly available. There are tons of unofficial compatible clones for under $3 per unit as well.</p>



<p id="5ea9">For this breadboard setup we have connected both&nbsp;<strong><em>Trig</em></strong>&nbsp;and&nbsp;<strong><em>Echo&nbsp;</em></strong>pins to a single digital pin on the Arduino. We are going to use the D12, D11, D10, D9 and D8 pins for sending and receiving signals. This hardware setup is limited only by the microcontroller’s available digital pins, but it can be expanded further using multiplexing, where one pin is shared by multiple sensors with only one sensor active at a time.</p>



<p id="3a81">Traditionally, this would be the sequential workflow for polling the sensors one by one:</p>



<ol class="wp-block-list"><li>Trigger one sensor</li><li>Receive the echo</li><li>Calculate the distance from the time elapsed between the previous steps</li><li>Communicate the measurement over the serial port</li><li>Process the next sensor</li></ol>



<p id="70a7">However, we are going to use a readily available Arduino library called NewPing, which lets you ping multiple sensors while minimizing the delay between them. This will help us measure the distance from all 5 sensors several times per second, almost simultaneously. The resulting workflow looks like this:</p>



<ol class="wp-block-list"><li>Trigger and echo all sensors asynchronously (but sequentially)</li><li>When a sensor is done, calculate its distance</li><li>When all sensors are done for the current cycle, communicate the readings from all sensors over the serial port</li><li>Start a new sensor reading cycle</li></ol>



<p id="1ba4">The implementation is very straightforward and heavily commented in the code. Feel free to take a look at the full code.</p>






<p>I want to put special focus on the serial communication part when all the sensors are done with the distance measurement:</p>



<p id="5aa9">Originally I wanted to send the sensor readings over serial as strings, but I realized the messages would be large and harder to parse on the host side. To improve speed and reduce latency in the readings, I switched to a simple format using a 5-byte message:</p>



<p id="caa6"><strong>Byte 1:</strong>&nbsp;Character ‘Y’<br><strong>Byte 2:</strong>&nbsp;Character ‘Y’<br><strong>Byte 3:</strong>&nbsp;Sensor index [0–255]<br><strong>Byte 4:</strong>&nbsp;High-order byte of the measured distance (as an unsigned integer)<br><strong>Byte 5:</strong>&nbsp;Low-order byte of the measured distance (as an unsigned integer)</p>



<p id="e421">Bytes 1 and 2 are just the message header, used to determine where a new message starts when reading the incoming serial bytes. This approach is very similar to what the TF-Luna LiDAR sensor does to communicate with a host computer.</p>
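<p>As a quick illustration (not the project's actual code), the 5-byte message can be packed and unpacked with Python's struct module, using big-endian order so the high-order distance byte comes first:</p>

```python
import struct

HEADER = b"YY"  # bytes 1-2: the message header

def pack_reading(sensor_index: int, distance: int) -> bytes:
    """Encode a reading as 'Y', 'Y', index, distance high byte, distance low byte."""
    return HEADER + struct.pack(">BH", sensor_index, distance)

def unpack_reading(msg: bytes):
    """Decode a 5-byte message back into (sensor_index, distance)."""
    if msg[:2] != HEADER:
        raise ValueError("missing 'YY' header")
    return struct.unpack(">BH", msg[2:])

msg = pack_reading(3, 517)   # distance 517 = 0x02 (high), 0x05 (low)
print(unpack_reading(msg))   # (3, 517)
```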



<p id="eef6">On the host side we will use Python 3 to connect to the Arduino microcontroller via the serial port and read the incoming bytes as fast as we can. The ideal setup would be a UART port on the host computer, but serial over USB will do the job. The full code of the Python script is here.</p>



<p id="bfe5">There are several interesting things to note. First, we need to read the serial port on a separate thread so we won’t miss any incoming messages while we are processing a sensor reading or doing other work:</p>
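<p>A minimal sketch of the reader-thread idea using only the standard library; an in-memory byte stream stands in here for the serial port object (with pyserial, a Serial instance would be passed in the same way):</p>

```python
import io
import queue
import threading

def reader(stream, out):
    """Continuously pull 5-byte messages off the stream so none are missed
    while the main thread is busy processing readings."""
    while True:
        msg = stream.read(5)
        if len(msg) < 5:   # stream exhausted / port closed
            break
        out.put(msg)

# Demo: two fake sensor messages in an in-memory stream.
stream = io.BytesIO(b"YY\x00\x01\x2cYY\x01\x00\x64")
messages = queue.Queue()
t = threading.Thread(target=reader, args=(stream, messages), daemon=True)
t.start()
t.join()
print(messages.qsize())  # 2
```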



<p id="bef5">Second, we need to find the start of a message by looking for the ‘YY’ header before reading the sensor values. Because the Arduino doesn’t wait for a host to connect to the serial port, we may connect mid-stream and read a partial message, which is discarded. It may take an additional second or two to get in sync with the microcontroller’s messages.</p>



<p id="ee21">Third, we smooth the measurements with a simple moving average to reduce noise. Here we use a window of just two measurements because the distance must update very quickly to keep the robot Rover4WD from hitting a close obstacle, but you can adjust the window to your requirements: a bigger window is cleaner but slower to change, a smaller window is noisier but faster to react.</p>
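<p>The smoothing step can be sketched like this (the class is illustrative, not the project's exact code):</p>

```python
from collections import deque

class MovingAverage:
    """Average the last `window` readings: a bigger window is cleaner but
    slower to react, a smaller window is noisier but faster."""
    def __init__(self, window=2):
        self.values = deque(maxlen=window)

    def update(self, reading):
        self.values.append(reading)
        return sum(self.values) / len(self.values)

avg = MovingAverage(window=2)
print(avg.update(100))  # 100.0
print(avg.update(110))  # 105.0
print(avg.update(90))   # 100.0  (only the last two readings count)
```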



<p id="d91a">What are the next steps? This project is ready to be integrated into a robotics/electronics project. In my case I’m using Ubuntu 20.10 with ROS 2 on a Raspberry Pi 4 to control my robot Rover4WD. The next step for me will be to build a ROS package that processes the measurements into detected obstacles and publishes&nbsp;<em>transform</em>&nbsp;messages to be incorporated into the larger navigation framework using sensor fusion.</p>



<p id="04fb">As always, let me know if you have any questions or comments to improve the quality of this article. Thank you, and keep enjoying your projects!</p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/">Building a sonar sensor array with Arduino and Python</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-a-sonar-sensor-array-with-arduino-and-python/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine Learning Simplified: Building an Understanding</title>
		<link>https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Feb 2021 05:55:59 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[Self-learning]]></category>
		<category><![CDATA[Simplified]]></category>
		<category><![CDATA[Understanding]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12913</guid>

					<description><![CDATA[<p>Source &#8211; https://www.cmswire.com/ Artificial intelligence (AI) and machine learning (ML) are positioned to disrupt the way we live and work, even the way we interact and think. <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/">Machine Learning Simplified: Building an Understanding</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.cmswire.com/</p>



<p>Artificial intelligence (AI) and machine learning (ML) are positioned to disrupt the way we live and work, even the way we interact and think. Machine learning is a core sub-area of AI: it enables computers to learn on their own, without explicit programming.</p>



<p>At this point, most organizations still approach ML as a technology in the realm of research and exploration. In this first article of a series, we delve deeper into the world of machine learning and its applications; the following articles will focus on building an ML implementation plan. In doing so we come to understand not only the concepts behind the technology, but also why it can make the difference between keeping up with the competition and falling further behind.</p>



<h2 class="wp-block-heading">What Is Machine Learning?</h2>



<p>Gartner defines machine learning as:&nbsp;“Advanced learning algorithms composed of many technologies (such as deep learning, neural networks and natural language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information.”</p>



<p>Machine learning is the process of teaching computers to develop intuitive knowledge and understanding through the use of repetitive algorithms and patterns. In layman&#8217;s terms, it means training a system, through repetition, until it develops some innate intelligence. The goal is to feed the system large amounts of data so that it learns from each pattern and its variations and can eventually identify the pattern and its variants on its own. The advantage a machine has over the human mind here is its ability to ingest and process large amounts of data at once; the human brain, although vast in its capacity to store information, can only process and recall a limited set at one time.</p>



<p>There are three key types of machine learning: supervised, unsupervised and reinforcement.</p>



<ul class="wp-block-list"><li><strong>Supervised Learning:</strong> The most prevalent form of machine learning today. Here the data is labeled to tell the machine exactly what patterns it should look for. This is the kind of learning Netflix or Amazon use when they suggest similar shows to watch or similar products to buy.</li><li><strong>Unsupervised Learning:</strong> Requires no labels on the input data; the machine just looks for whatever patterns it can find. The goal is to expose the algorithm to multiple groups/types of information and then establish labels based on what the algorithm has &#8220;learned.&#8221; Unsupervised algorithms aren&#8217;t designed to single out specific types of data; they simply look for data that can be grouped by similarity, or for anomalies that stand out. It is akin to letting a child look at different objects and then classify them by color, function, entertainment value, etc. Unsupervised algorithms are not as popular as supervised ones, but with the increasing use of ML in cybersecurity, operational improvement, automation and so on, their applicability has grown. Unsupervised learning can in fact also be used to create and label data for supervised learning.</li><li><strong>Reinforcement Learning:</strong> The latest frontier of machine learning and the least explored in terms of applicability and usage. Expectations are that we&#8217;ll see a tremendous increase in reinforcement learning as computing power grows and the volume of data available to feed existing algorithms increases. A reinforcement algorithm learns by measuring the feedback on its actions and then reinforcing the behaviors that are rewarded, similar to rewarding or punishing a child for its behavior. This is the kind of learning used in game playing, such as Google’s AlphaGo, the program that famously beat the best human players at the complex game of Go.</li></ul>
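<p>As a toy illustration of the supervised case (the data and labels below are invented for the example), a nearest-neighbour rule in plain Python learns from labeled examples and classifies a new point by the closest one:</p>

```python
# Labeled training data: (features, label) pairs supplied up front,
# as in supervised learning.
labelled = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def predict(point):
    """Return the label of the closest labeled example (1-nearest-neighbour)."""
    def dist2(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(labelled, key=dist2)[1]

print(predict((1.1, 0.9)))  # cat
print(predict((5.1, 4.9)))  # dog
```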



<p>Other aspects of machine learning include neural networks and deep learning.</p>



<p><strong>Neural networks</strong>&nbsp;have been studied for a long time. These algorithms endeavor to recognize the underlying relationships in data, much the way the human brain does.</p>



<p><strong>Deep learning</strong>&nbsp;is a class of machine learning algorithms involving neural networks with multiple layers, where the output of one layer becomes the input to the next.</p>
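<p>The layered idea can be shown with a minimal forward pass in plain Python (the weights and sizes are invented for illustration): each layer computes weighted sums of its inputs, applies a nonlinearity, and feeds its output to the next layer:</p>

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, passed through a sigmoid."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -0.2]                                   # input features
hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])
output = layer(hidden, [[1.0, 1.0]], [-1.0])      # consumes the hidden layer's output
print(output)  # a single value between 0 and 1
```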



<p>The key to understanding machine learning is to understand the power of data. These algorithms work by finding patterns in massive amounts of data, which can encompass many things: numbers, words, images, videos, sound files and more. Any data or metadata that can be stored digitally can be fed into a machine-learning algorithm.</p>



<h2 class="wp-block-heading">Applications of Machine Learning</h2>



<p>Machine learning, in conjunction with deep learning, has a wide variety of applications in our homes and businesses today. It is currently used in everyday services such as recommendation systems like those of Netflix and Amazon, voice assistants like Siri and Alexa, and car technology for parking assist and accident prevention. Deep learning is already heavily used in autonomous vehicles and facial recognition systems. As the technology matures and gains widespread acceptance, we expect to see its applicability grow in these areas:</p>



<ul class="wp-block-list"><li>Medical diagnosis and personalized medicine.</li><li>Education and training, especially in the use of educational software for people with disabilities.</li><li>Weather and storm prediction systems.</li><li>Sensor technology.</li><li>Building efficiencies into our agricultural, supply chain and maintenance systems.</li><li>Fraud detection and market predictions.</li><li>Speech and image recognition.</li></ul>



<p>And many more ….</p>



<h2 class="wp-block-heading">Machine Learning Is Here to Stay</h2>



<p>The availability of widespread computing power through the use of cloud technologies, along with an increasing volume of readily available data, has driven a number of advancements in the fields of AI and ML. Organizations need to first build an understanding of the technology itself, collaborate on a vision for using it internally, and then build an implementation plan jointly between business and IT. In part two of this ML series we will focus on building a vision and implementation plan.</p>



<h2 class="wp-block-heading">About the Author</h2>



<p>Geetika Tandon is a senior director at Booz Allen Hamilton, a management and technology consulting firm. She was born in Delhi, India, and holds a bachelor&#8217;s degree in architecture from Delhi University, a master&#8217;s in architecture from the University of Southern California and a master&#8217;s in computer science from the University of California, Santa Barbara.</p>



<p><em>The views and opinions expressed in these articles are those of the author and do not necessarily reflect the official policy or position of her employer.</em></p>



<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/">Machine Learning Simplified: Building an Understanding</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-simplified-building-an-understanding/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>BUILDING CONSCIOUS ARTIFICIAL INTELLIGENCE: HOW FAR ARE WE AND WHY?</title>
		<link>https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/</link>
					<comments>https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 27 Jan 2021 08:51:22 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[CONSCIOUS]]></category>
		<category><![CDATA[Far]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12544</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ People are paranoid about Artificial Intelligence becoming self-conscious and posing a threat to humankind, but will it happen soon? The Internet has been replete <a class="read-more-link" href="https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/">BUILDING CONSCIOUS ARTIFICIAL INTELLIGENCE: HOW FAR ARE WE AND WHY?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<p><em>People are paranoid about Artificial Intelligence becoming self-conscious and posing a threat to humankind, but will it happen soon?</em></p>



<p>The Internet has been replete with headlines about GPT-3 writing articles, Google’s neural networks creating eerie artwork, artificial intelligence (AI) models composing music and more. While these feats may seem intriguing to a tech enthusiast, to an average person they may be overwhelming: not only do the ever-increasing capabilities of artificial intelligence cause worry, they also feed the fear of AI and robots dominating humans, as portrayed in dystopian movies. Hence, all these milestones achieved by AI beg the question: will artificial intelligence be conscious someday?</p>



<p>Artificial intelligence tries to solve real-world problems by simulating human brain intelligence to perform an assigned task. Generally, it can be categorized into two distinct types: Weak AI and Strong AI. Weak AI (or Artificial Narrow Intelligence) is designed to solve only one particular, specified problem, like recognizing text in a photo or processing data. In contrast, Strong AI (or Artificial General Intelligence) is a cohort of artificial intelligence algorithms endowed with intelligence and self-awareness; this form of artificial intelligence has the capacity to understand or learn any intellectual task that a human being can. Currently, AI purists are convinced that the endgame of Strong AI is self-consciousness.</p>



<p>Typically, the artificial intelligence models and algorithms characterized by repetitive learning and limited memory belong to weak AI. Even the most complex applications of AI, which leverage machine learning and deep learning to teach themselves, fall under this category. So the news we often come across about the latest artificial intelligence innovations also concerns weak AI.</p>



<p>However, here’s the thing: while building 100% conscious artificial intelligence may prove to be a deadly gamble, experts have voiced differing opinions on the matter. Some argue that scientists need an entirely new calculus to create truly conscious artificial intelligence, as until now we have been building AI models that are good at ‘memorizing’ rather than thinking rationally the way we do. Others say that conscious artificial intelligence is either probable within the next few decades or totally impossible. Nevertheless, one thing is certain: we currently lack an understanding of how to truly define a self-conscious artificial intelligence system.</p>



<p>Alan Turing, famous for breaking the Enigma cipher, was one of the first to ask whether artificial intelligence would ever be conscious. He devised a test in which a person converses with a machine without being told it is a machine; if the person cannot tell after the conversation, the machine has passed the test. Unfortunately, this test is also full of loopholes. Meanwhile, consciousness is not a measurable phenomenon: even if we perfectly crack the black-box problem of artificial intelligence, we are far from understanding how to quantify and calibrate ‘self-consciousness’. Nor is it a given that computational capability and consciousness go hand in hand.</p>



<p>Currently, consciousness is limited to carbon substrates, the ‘living beings’ who themselves attained it through evolution. Meanwhile, scientists are wondering whether silicon-based artificial intelligence is even capable of consciousness, because a lack of carbon substrates need not imply a lack of consciousness. To illustrate this, consider the following argument:</p>



<ol class="wp-block-list"><li>Humans can breathe using their lungs</li><li>Fish don’t have lungs</li></ol>



<p>Therefore, fish don’t breathe!</p>



<p>In reality, we know that while fish may not have lungs, they do breathe, using ‘gills’. Hence, consciousness in artificial intelligence is still an open but vague question. And since consciousness is not a measurable quality in AI, the definition will remain ambiguous.</p>



<p>At the same time, experts believe that consciousness would enable artificial intelligence to accept new information, store and retrieve old information, and carry out cognitive processing of it all into perceptions and actions. Yet we cannot ignore the possibility of misjudgment and bias arising from conscious thinking.</p>



<p>Most of today’s artificial intelligence models function after being trained on a continuous loop of data, where they learn by following the same set of commands (similar to rote learning in humans); presented with new situations and simulations, they fail. For instance, if self-driving cars were trialed in India, we would hear more about ‘accidents’ involving these autonomous vehicles. So first we must focus on enabling AI to perceive how to make cognitive decisions; then we can move ahead with conscious artificial intelligence.</p>
<p>The post <a href="https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/">BUILDING CONSCIOUS ARTIFICIAL INTELLIGENCE: HOW FAR ARE WE AND WHY?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-conscious-artificial-intelligence-how-far-are-we-and-why/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</title>
		<link>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/</link>
					<comments>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 24 Jan 2020 08:11:29 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Image Resolution]]></category>
		<category><![CDATA[Medical Imaging]]></category>
		<category><![CDATA[Models]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6356</guid>

					<description><![CDATA[<p>Source: pubs.rsna.org Deep learning with convolutional neural networks (CNNs) has shown tremendous success in classifying images, as we have seen with the ImageNet competition (1), which consists <a class="read-more-link" href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: pubs.rsna.org</p>



<p>Deep learning with convolutional neural networks (CNNs) has shown tremendous success in classifying images, as we have seen with the ImageNet competition (1), which consists of millions of everyday color images, such as animals, vehicles, and natural objects. For example, recent artificial intelligence (AI) systems have achieved a top-five accuracy (correct answer within the top five predictions) of greater than 96% on the ImageNet competition (2). To achieve this, computer vision scientists have generally found that deeper networks perform better, and as a result, modern AI architectures frequently have greater than 100 layers (2).</p>



<p>Because of the sheer size of such networks, which contain millions of parameters, most AI solutions use significantly downsampled images. For example, the famous AlexNet CNN that won ImageNet in 2012 used an input size of 227 × 227 pixels (1), which is a fraction of the native resolution of images taken by cameras and smartphones (usually greater than 2000 pixels in each dimension). Lower-resolution images are used for a variety of reasons. First, smaller images are easier to distribute across the Web, as ImageNet alone is approximately 150 GB of data. Second, common objects such as planes or cars can be readily discerned at lower resolutions. Third, downsampled images make it easier and much faster to train deep neural networks. Finally, using lower-resolution images may lead to increased generalizability or less overfitting of deep learning models that focus on important high-level features.</p>
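<p>The cost of downsampling is easy to see numerically. The sketch below block-averages a synthetic 2048 × 2048 "radiograph" down to 256 × 256 (a crude stand-in for the interpolation real pipelines use) and shows how a small high-contrast feature is nearly averaged away; the image and its 2 × 2 "nodule" are invented for illustration:</p>

```python
import numpy as np

def downsample(img, factor):
    """Downsample a square grayscale image by block-averaging
    (a crude stand-in for the interpolation real pipelines use)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A synthetic 2048x2048 "radiograph" with a single 2x2-pixel "nodule".
img = np.zeros((2048, 2048))
img[1000:1002, 1000:1002] = 1.0

small = downsample(img, 8)   # 256x256, a common input size for chest x-ray models
print(small.shape)           # (256, 256)
print(small.max())           # nodule contrast drops from 1.0 to 4/64 = 0.0625
```

<p>At the lower resolution the nodule's contrast falls from 1.0 to about 6% of its original value, which is why subtle findings can vanish from downsampled inputs.</p>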



<p>Given the success of deep learning in general image classification, many researchers have applied the same techniques used in the ImageNet competitions to medical imaging (3). With chest radiographs, for example, researchers have downsampled the input images to about 256 pixels in each dimension from original images with more than 2000 pixels in each dimension. Nevertheless, relatively high accuracy has been reported for the detection of some conditions on chest radiographs, including tuberculosis, pleural effusion, atelectasis, and pneumonia (4,5).</p>



<p>However, subtle radiologic findings, such as pulmonary nodules, hairline fractures, or small pneumothoraces, are less likely to be visible at lower resolutions. As such, the optimal resolution for detecting such abnormalities using CNNs is an important research question. For example, in the 2017 Radiological Society of North America competition for determining bone age on skeletal radiographs (6), many competitors used an input size of 512 pixels or greater. For the DREAM (Dialogue for Reverse Engineering Assessments and Methods) challenge of classifying screening mammograms, resolutions of up to 1700 × 2100 pixels were used in top solutions (7). Recently, for the Society of Imaging Informatics in Medicine and American College of Radiology Pneumothorax Challenge (8), many top entries used an input size of up to 1024 × 1024 pixels.</p>



<p>In their article, “The Effect of Image Resolution on Deep Learning in Radiography,” Sabottke and Spieler (9) address that important question using the public ChestX-ray14 dataset from the National Institutes of Health, which consists of more than 100 000 chest radiographs stored as 8-bit gray-scale images at a resolution of 1024 × 1024 pixels (10). These radiographs have been labeled with 14 conditions including normal, lung nodule, pneumothorax, emphysema, and cardiomegaly (10). The authors used two popular deep CNNs, ResNet-34 and DenseNet-121, and analyzed how effectively the models classified radiographs at image resolutions ranging from 32 × 32 pixels to 600 × 600 pixels.</p>
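<p>One reason a single CNN can even be evaluated across such a wide range of input sizes is that convolution produces a feature map whose size tracks the input, and a global (adaptive) average pool then collapses it to a fixed-length vector regardless of resolution. The toy sketch below uses a naive single-channel convolution, not the authors' actual networks, and smaller sizes than the paper's 32–600 range to keep the Python loop fast:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))

def conv2d_valid(img, k):
    """Naive valid-mode 2-D convolution (illustration only)."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

for size in (32, 64, 128):                # stand-ins for the paper's 32-600 range
    img = rng.standard_normal((size, size))
    features = conv2d_valid(img, kernel)  # feature-map size varies with input size
    pooled = features.mean()              # global average pool -> fixed-size output
    print(size, features.shape, float(pooled))
```

<p>The feature map grows with the input (30 × 30, 62 × 62, 126 × 126 here), but the pooled output is always a single number per channel, so the classifier head never has to change.</p>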



<p>The authors found that the performance of most models tended to plateau at resolutions of around 256 × 256 pixels and 320 × 320 pixels. However, classification of emphysema and lung nodules performed better at 512 × 512 pixels and 448 × 448 pixels, respectively, than at lower resolutions. Emphysema findings can be subtle in mild cases, manifested by faint lucencies, which probably explains the need for higher resolution. Similarly, small lung nodules may be “blurred out” and not visible at lower resolution, which can explain the improvement in classification performance at higher resolutions.</p>



<p>The authors’ work is important. As we move further in the application of AI in medical imaging, we should be more cognizant of the potential impact of image resolution on the performance of AI models, whether for segmentation, classification, or another task. Moreover, groups who create public datasets to advance machine learning in medical imaging should consider releasing the images at full or near-full resolution. This would allow researchers to further understand the impact of image resolution and could lead to more robust models that better translate into clinical practice.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/">The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-importance-of-image-resolution-in-building-deep-learning-models-for-medical-imaging/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building Better Deep Learning Requires New Approaches Not Just Bigger Data</title>
		<link>https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/</link>
					<comments>https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 08 Jul 2019 12:51:12 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Bigger Data]]></category>
		<category><![CDATA[Building]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Requires]]></category>
		<category><![CDATA[universal solver]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=4001</guid>

					<description><![CDATA[<p>Source: forbes.com In its rush to solve all the world’s problems through deep learning, Silicon Valley is increasingly embracing the idea of AI as a universal solver <a class="read-more-link" href="https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/">Building Better Deep Learning Requires New Approaches Not Just Bigger Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: forbes.com</p>



<p>In its rush to solve all the world’s problems through deep learning, Silicon Valley is increasingly embracing the idea of AI as a universal solver that can be rapidly adapted to any problem in any domain simply by taking a stock algorithm and feeding it relevant training data. The problem with this assumption is that today’s deep learning systems are little more than correlative pattern extractors that search large datasets for basic patterns and encode them into software. While impressive compared to the standards of previous eras, these systems are still extraordinarily limited, capable only of identifying simplistic correlations rather than actually semantically understanding their problem domain. In turn, the hand-coded era’s focus on domain expertise, ethnographic codification and deeply understanding a problem domain has given way to parachute programming, in which deep learning specialists take an off-the-shelf algorithm, shove in a pile of training data, dump out the resulting model and move on to the next problem. Truly advancing the state of deep learning and the way in which companies make use of it will require a return to the previous era’s focus on understanding problems rather than merely churning canned models off assembly lines.</p>



<p>In the era of hand-coded content understanding systems and hand-tuned classical statistical machine learning algorithms, building solutions required deeply understanding the problem domain. Programmers would work hand-in-hand with subject matter experts, deeply immersing themselves in the field, studying human practitioners with the precision and detail of an ethnographic study and even performing the task themselves to learn its complexities and nuances.</p>



<p>Building a solution required deeply understanding the problem domain.</p>



<p>In contrast, today’s deep learning practitioners adhere to the utopian dream of galleries of canned models that can simply be plucked from a shelf, shoved full of raw training data from watching humans perform the task and then dropped in to take over, without their programmers needing to know a single thing about the problems the models are designed to solve.</p>
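<p>The "stock algorithm plus data" workflow the article criticizes can be made concrete with a deliberately minimal sketch: a generic logistic-regression training loop that encodes exactly the correlations present in its (synthetic, invented-for-illustration) training data and nothing else about the domain:</p>

```python
import numpy as np

# A minimal "stock algorithm": logistic regression trained by gradient
# descent. It learns whatever correlations the training data contains --
# nothing more -- which is the pattern the article describes.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the hidden rule is a linear correlation

w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))            # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step

acc = ((X @ w > 0) == (y == 1)).mean()
print(round(float(acc), 2))
```

<p>The model recovers the linear correlation almost perfectly, yet it "knows" nothing about why the rule holds: change the data-generating process and the same canned loop will happily encode the new correlations instead.</p>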



<p>While the idea of HAL 9000-like general intelligences capable of taking on any task represents the holy grail of AI research, we are very far from such systems even being on the horizon. Instead, today’s systems are more akin to glorified correlation engines that can perform a single task reasonably well, provided they are given properly curated training data.</p>



<p>Today’s deep learning algorithms are entirely dependent on the quality of their training data, since it represents the totality of their worldview and understanding of the problem domain.</p>



<p>This means that training data must be exquisitely curated, balanced to provide sufficient examples and counterexamples at the boundary points where the underlying learning algorithm struggles. The problem is that these boundary points are rarely well understood.</p>
<p>The post <a href="https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/">Building Better Deep Learning Requires New Approaches Not Just Bigger Data</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-better-deep-learning-requires-new-approaches-not-just-bigger-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
