<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Human intelligent Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/human-intelligent/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/human-intelligent/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Fri, 05 Jun 2020 06:44:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Cyber Everywhere: Expand security capabilities with AI tools</title>
		<link>https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/</link>
					<comments>https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Jun 2020 06:41:21 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI tools]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Human intelligent]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9285</guid>

					<description><![CDATA[<p>Source: cyberscoop.com Public and private sector enterprises need to consider expanding their use of AI-augmented cybersecurity tools to better defend their networks and assets, say experts in <a class="read-more-link" href="https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/">Cyber Everywhere: Expand security capabilities with AI tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: cyberscoop.com</p>



<p>Public and private sector enterprises need to consider expanding their use of AI-augmented cybersecurity tools to better defend their networks and assets, say experts in a new podcast.</p>



<p>As the range of cyberthreats continues to expand, and organizations remain hard-pressed to hire enough talent to keep up, cyber experts recommend that executives explore AI tools to help assess and automate their security posture.</p>



<p>Security veteran Irfan Saif says that AI represents a range of concepts, such as intelligent automation, analytics and conversational AI, as well as more sophisticated capabilities that start to approach what may be considered human intelligence.</p>



<p>Saif, a principal and board member at Deloitte, also urges enterprise executives to think about AI in the context of machines helping humans, which he sees as “a much more viable, sustainable and scalable approach rather than thinking about AI in the context of human replacement.”</p>



<p>Adding to the discussion, Deborah Golden, lead for Deloitte’s U.S. Cyber Risk Services Practice, says that this idea of partnership between people and AI-enabled technology will help organizations address the shortage of cybersecurity talent.</p>



<p>Golden and Saif share recommendations for executives leading public and private sector organizations on ways AI can combat new cyberthreats in the latest episode of the “Cyber Everywhere” podcast series produced by CyberScoop and underwritten by Deloitte:</p>



<h3 class="wp-block-heading"><strong>Changes occurring in the cyberthreat landscape</strong></h3>



<p>“Bad actors — particularly those on the more sophisticated end of the spectrum — tend to adopt and adapt to changes in the technology landscape a bit faster than those that they are trying to attack,” says Saif.</p>



<p>He cautions that AI is also being used against enterprises, noting instances where it has mimicked the activity of legitimate users to bypass various detection measures.</p>



<h3 class="wp-block-heading"><strong>Business case for AI-enabled tools</strong></h3>



<p>Golden says CIOs need to consider adopting AI-enabled tools to help available cyber talent achieve greater efficiencies at scale.</p>



<p>As the cyberthreat landscape continues to grow exponentially, enterprises will need to keep investing in “structured and unstructured machine learning in a way that perhaps we’ve never looked at before,” just to keep pace, she says.</p>



<h3 class="wp-block-heading"><strong>Developing strategies and governance for AI</strong></h3>



<p>Saif says that the notion of “trustworthy AI” is gaining currency among security experts. The goal is to build a common language and framework to govern AI as a strategy and as a program “from the boardroom down to the server room.”</p>



<p>“That is effectively taking critical principles of trust — whether that’s ethics, whether that’s explainability — all the sorts of things that people really want to understand when it comes to how to apply AI to business problems, how to manage and govern the data, and the inputs, the outputs and the use of that information,” Saif says.</p>



<p>Irfan Saif currently co-leads Deloitte’s U.S. artificial intelligence and cognitive advisory offering. He has more than 20 years of IT consulting experience, specializing in cybersecurity and risk management.</p>



<p>Deborah Golden has more than 25 years of IT experience spanning numerous industries, including government, life sciences, health care and financial services. She specializes in cybersecurity, technology transformation and privacy and governance initiatives.</p>



<p>Listen to the podcast for the full conversation on AI-augmented cybersecurity. You can hear more coverage of “Cyber Everywhere” on our CyberScoop radio channels on Apple Podcasts, Spotify, Google Play, Stitcher and TuneIn.</p>



<p>This podcast was produced by CyberScoop and underwritten by Deloitte.</p>
<p>The post <a href="https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/">Cyber Everywhere: Expand security capabilities with AI tools</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/cyber-everywhere-expand-security-capabilities-with-ai-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Are AI machines really intelligent?</title>
		<link>https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/</link>
					<comments>https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 14 Nov 2017 06:37:39 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[AI machines]]></category>
		<category><![CDATA[digital computers]]></category>
		<category><![CDATA[Human intelligent]]></category>
		<category><![CDATA[IT]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1703</guid>

					<description><![CDATA[<p>Source &#8211; desertsun.com Day after day we read in our news media, and hear on the radio or TV, these two letters: &#8220;AI&#8221; meaning ARTIFICIAL INTELLIGENCE. What do we <a class="read-more-link" href="https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/">Are AI machines really intelligent?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>desertsun.com</strong></p>
<p class="speakable-p-1 p-text">Day after day we read in our news media, and hear on the radio or TV, these two letters: &#8220;AI&#8221; meaning ARTIFICIAL INTELLIGENCE. What do we really know about AI?</p>
<p class="speakable-p-2 p-text">Let’s begin with a dictionary definition:</p>
<p class="p-text">AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. It is exhibited by machines which mimic “cognitive” functions that humans associate with other human minds, i.e. making  machines behave in ways that would be called intelligent if a human were so behaving.</p>
<p class="p-text">This leads to asking what is Human Intelligence (HI), the definition of which is very controversial. An op-ed statement in the Wall Street Journal appeared in a 1995 with 52 researchers agreeing to define HI as a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — &#8220;catching on,&#8221; &#8220;making sense&#8221; of things, or &#8220;figuring out&#8221; what to do.</p>
<p class="p-text">We all have been given IQ tests, purported to measure the level of our Intelligence; however, if we are to accept the above definition of HI it must be questioned if a machine with AI will ever actually comprehend its surroundings — “catch on,&#8221; &#8220;make sense&#8221; of things, or &#8220;figure out&#8221; what to do.</p>
<p class="p-text">Today, much of routine technology such as optical character or face recognition is excluded from AI. Companies such as Google’s Deepmind are attempting to develop programs which can learn to solve any complex problem without needing to be taught how, but their engineers admit they are suffering from painfully slow progress. In contacting academia, I have been told the primary goal of IT is to produce a machine which is able to perform any intellectual task that a human can do.</p>
<p class="p-text">This may be achieved when our present binary digital computers are replaced by super-efficient, super-fast computers using quantum bits instead of transistors. I do not truly understand the technology, so I must take this on faith! But if this goal is achieved, it would mean that a robot-soldier would react to a life-threatening situation in combat as a human soldier would react.</p>
<p class="p-text">Members of academia have presented tests machines would need to pass in order to be classified as having human-level AI. Scientist Alan Turing’s test is one: A machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine. If the human expressed strong anger or sadness, the machine would need to respond in kind. More recently in Tokyo, a research team is trying to create an AI program that has enough smarts to pass Japan’s most rigorous university entrance exams.</p>
<p class="p-text">Computers already show superhuman performance at many tasks, however they are not called Intelligent. Fool’s gold seems to be gold, but isn’t; AI seems to be intelligent, but isn’t yet.</p>
<p class="p-text">Yes, people should be concerned about the future of AI, however ARTIFICIAL INTELLlGENCE (AI) should not be used to describe self-driving vehicles and the many super-high-speed computing unintelligent machines of today!</p>
<p>The post <a href="https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/">Are AI machines really intelligent?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/are-ai-machines-really-intelligent/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>The Devil Is in the Detail of Deep Learning Hardware</title>
		<link>https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/</link>
					<comments>https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 02 Nov 2017 06:52:03 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[computer architecture]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[geological data]]></category>
		<category><![CDATA[Human intelligent]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1618</guid>

					<description><![CDATA[<p>Source &#8211; electronicdesign.com To identify skin cancer, perceive human speech, and run other deep learning tasks, chipmakers are editing processors to work with lower precision numbers. These numbers <a class="read-more-link" href="https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/">The Devil Is in the Detail of Deep Learning Hardware</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>electronicdesign.com</strong></p>
<p>To identify skin cancer, perceive human speech, and run other deep learning tasks, chipmakers are tailoring processors to work with lower-precision numbers. These numbers contain fewer bits than higher-precision formats, which require heavier lifting from computers.</p>
<p>Intel’s Nervana unit plans to release a special processor before the end of the year that trains neural networks faster than other architectures. But in addition to improving memory and interconnects, Intel created a new way of formatting numbers for lower-precision math. The numbers occupy fewer bits, so the hardware can use less silicon, less computing power, and less electricity.</p>
<p>Intel’s numerology is an example of the dull and yet strangely elegant ways that chip companies are coming to grips with deep learning. It is still unclear whether ASICs, FPGAs, CPUs, GPUs, or other chips will be best at handling calculations like the human brain does. But every chip appears to be using lower precision math to get the job done.</p>
<p>Still, companies pay a surcharge for using numbers with less detail. “You are giving up something, but the question is whether it’s significant or not,” said Paulius Micikevicius, principal engineer in Nvidia’s computer architecture and deep learning research group. “At some point you start losing accuracy, and people start playing games to recover it.”</p>
<p>Shedding precision is nothing new, he said. For over five years, oil and gas companies have stored drilling and geological data in half-precision numbers (16-bit floating point) and run calculations in single precision (32-bit floating point) on Nvidia’s graphics chips, which are the current gold standard for training and running deep learning.</p>
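<p>That storage-versus-compute split is easy to demonstrate with NumPy (a minimal sketch of the general idea, not the oil-and-gas pipelines themselves): keep bulk data in float16 to halve memory, and upcast to float32 before doing arithmetic.</p>
<pre><code>import numpy as np

# Store a large dataset at half precision: 2 bytes per value instead of 4.
samples = np.random.default_rng(0).normal(size=1_000_000).astype(np.float16)
print(samples.nbytes)              # 2000000 bytes

# Upcast to single precision before arithmetic so rounding error
# does not accumulate inside the reduction.
mean = samples.astype(np.float32).mean()
print(mean.dtype)                  # float32
</code></pre>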
<p>In recent years, Nvidia has tuned its graphics chips to reduce the computing power wasted in training deep learning programs. Its older Pascal architecture performs 16-bit math twice as efficiently as 32-bit operations. Its latest Volta architecture runs 16-bit operations inside custom tensor cores, which speedily move data through the layers of a neural network.</p>
<p>Intel’s new format maximizes the precision that can be stored in 16 bits. FlexPoint can represent a slightly wider range of numbers than traditional fixed-point formats while still being handled with less computing power and memory, but it seems to provide less flexibility than the floating-point numbers commonly used with neural networks.</p>
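<p>The article gives only the outline of FlexPoint, but the shared-exponent idea it describes can be sketched generically: store every value in a tensor as a small integer mantissa and keep a single exponent for the whole tensor. The helper below is an illustrative block floating-point encoder, not Intel’s actual format.</p>
<pre><code>import numpy as np

def to_block_fp(x, mantissa_bits=16):
    """Encode a tensor as int16 mantissas plus ONE shared exponent.

    Generic block floating-point sketch in the spirit of FlexPoint;
    the real format's exponent management is more sophisticated.
    Assumes x contains at least one nonzero value.
    """
    limit = 2 ** (mantissa_bits - 1) - 1                 # 32767 for 16 bits
    exponent = int(np.ceil(np.log2(np.abs(x).max() / limit)))
    mantissas = np.round(x / 2.0 ** exponent).astype(np.int16)
    return mantissas, exponent

def from_block_fp(mantissas, exponent):
    return mantissas.astype(np.float32) * 2.0 ** exponent

x = np.array([0.75, -1.5, 3.0, 0.001], dtype=np.float32)
m, e = to_block_fp(x)
print(from_block_fp(m, e))   # near x; the tiny entry loses relative precision
</code></pre>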
<p>Different parts of deep learning need different levels of precision. Training entails going through, for example, thousands of photographs without explicit programming. An algorithm automatically adjusts millions of connections between the layers of the neural network, and over time it creates a model for interpreting new data. This requires high-precision math, typically 32-bit floating point.</p>
<p>The inferencing phase is actually running the algorithm. This can take advantage of lower precision, which means lower power and cooling costs in data centers. In May, Google said that its tensor processing unit (TPU) runs 8-bit integer operations for inferencing. The company claims this provides six times the efficiency of 16-bit floating-point numbers.</p>
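<p>Google’s exact scheme is not described here, but a common approach to 8-bit inferencing is symmetric linear quantization. The sketch below (generic, not the TPU’s implementation) maps float weights onto the int8 range and performs the multiply-accumulate in integers, with one float rescale at the end.</p>
<pre><code>import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization: map floats onto the int8 range."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def int8_matmul(qa, sa, qb, sb):
    """Multiply in 8-bit integers, accumulate in 32-bit integers,
    then apply one floating-point rescale at the end."""
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

rng = np.random.default_rng(1)
a, b = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
print(np.abs(int8_matmul(qa, sa, qb, sb) - a @ b).max())   # small error
</code></pre>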
<p>Fujitsu is working on a proprietary processor that also takes advantage of 8-bit integers for inferencing, while Nvidia’s tensor cores settle for 16-bit floating-point for the same chores. Microsoft invented an 8-bit floating-point format to work with the company’s Brainwave FPGAs installed in its data centers.</p>
<p>Training with anything lower than 32-bit numbers is hard. Some numbers underflow to zero when represented in 16 bits. That can make the model less accurate and potentially prone to misidentify, for example, a skin blemish as cancer. But lower precision could pay big dividends for training, which uses more processing power than inferencing.</p>
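<p>The underflow problem is easy to see in NumPy: the smallest positive float16 value is roughly 6e-8, so anything smaller vanishes when cast down from float32.</p>
<pre><code>import numpy as np

print(np.finfo(np.float16).smallest_subnormal)   # about 6e-08

grads = np.array([1e-3, 1e-5, 1e-7, 1e-9], dtype=np.float32)
print(grads.astype(np.float16))
# The 1e-9 entry underflows to exactly 0.0; 1e-7 survives only as a
# coarse subnormal with most of its relative precision gone.
</code></pre>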
<p>Micikevicius and researchers from Baidu recently published a paper on programming tips to train models with half-precision numbers without losing accuracy. This mixed-precision training means that computers can use half the memory without changing parameters like the number of neural network layers. That can mean faster training.</p>
<p>Micikevicius said that one tactic is to keep a master copy of weights. These weights are updated with tiny numbers called gradients to strengthen the link between neurons in the layers of the network. The master copy stores information in 32-bit floating point and it constantly checks that the 16-bit gradients are not taking the model off course.</p>
<p>The paper also provides a way to preserve the gradients, which can be so small that they turn into zeros when trimmed down to the half-precision format. To prevent these errant zeros from throwing off the model, Micikevicius said that the computer needs to multiply the gradients so that they all sit above the point where they can be safely cast down to half precision.</p>
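<p>Putting the two tactics together, a toy SGD update might look like the following sketch (illustrative only; the paper’s recipe has more detail and frameworks automate it): gradients are computed against a scaled loss so they survive the float16 cast, then unscaled in float32 and applied to the 32-bit master weights.</p>
<pre><code>import numpy as np

LOSS_SCALE = 1024.0   # keeps small gradients above the float16 floor
LR = 0.01

def sgd_step(master_w, grads_fp16):
    """One mixed-precision update: fp16 gradients, fp32 master weights."""
    # Un-scale in float32; doing this in fp16 could underflow again.
    g = grads_fp16.astype(np.float32) / LOSS_SCALE
    master_w = master_w - LR * g                  # update lives entirely in fp32
    return master_w, master_w.astype(np.float16)  # fp16 copy for next pass

master_w = np.zeros(4, dtype=np.float32)
# A raw gradient of 1e-7 would vanish in fp16, but 1e-7 * 1024 survives.
scaled = np.full(4, 1e-7 * LOSS_SCALE, dtype=np.float32).astype(np.float16)
master_w, w16 = sgd_step(master_w, scaled)
print(master_w)   # updated by about -0.01 * 1e-7, preserved in the fp32 copy
</code></pre>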
<p>These programming ploys take advantage of Nvidia’s tensor cores. The silicon can multiply 16-bit numbers and accumulate the results into 32-bits, ideal for the matrix multiplications used in training deep learning models. Nvidia claims that its Volta graphics chips can run more than 10 times more training operations per second than those based on Pascal.</p>
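<p>NumPy has no tensor cores, but the accumulate-wider idea can be emulated by hand. The sketch below multiplies in half precision, then compares a half-precision running sum against a single-precision one; only the accumulator width differs.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=4096).astype(np.float16)
b = rng.normal(size=4096).astype(np.float16)
products = a * b                      # elementwise multiply in fp16

acc16 = np.float16(0.0)               # half-precision accumulator:
for p in products:                    # every partial sum is rounded
    acc16 = np.float16(acc16 + p)     # back to 16 bits

acc32 = np.float32(0.0)               # tensor-core style: fp16 products,
for p in products:                    # float32 running sums
    acc32 += np.float32(p)

ref = np.dot(a.astype(np.float64), b.astype(np.float64))
print(abs(float(acc16) - ref))        # error grows with each rounded add
print(abs(float(acc32) - ref))        # stays near the fp16 input noise
</code></pre>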
<p>Nvidia’s tensor cores, while unique, are part of the zeitgeist. Wave Computing and Graphcore have both built server chips that accumulate 16-bit multiplies into 32 bits for training operations, while Intel’s Nervana hardware accumulates them into a 48-bit format. Other companies like Groq and Cerebras Systems could take similar tacks.</p>
<p>The industrywide shift could also sow confusion about performance. Last year, Baidu created an open-source benchmark called DeepBench to clock processors for neural network training. It lays out minimum precision requirements of 16-bit floating point for multiplication and 32-bit floating point for addition. It recently extended the benchmark to inferencing.</p>
<p>“Deep learning developers and researchers want to train neural networks as fast as possible. Right now, we are limited by computing performance,” said Greg Diamos, senior researcher at Baidu’s Silicon Valley research lab, in a statement. “The first step in improving performance is to measure it.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/">The Devil Is in the Detail of Deep Learning Hardware</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-devil-is-in-the-detail-of-deep-learning-hardware/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
