<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/artificial/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/artificial/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Mon, 12 Jul 2021 09:25:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Artificial Life: A cross disciplinary field of research</title>
		<link>https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/</link>
					<comments>https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 12 Jul 2021 09:25:46 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[disciplinary]]></category>
		<category><![CDATA[Life]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14900</guid>

					<description><![CDATA[<p>Source &#8211; https://www.risingkashmir.com/ Artificial life may be labeled software, hardware, or wetware, depending on the type of media researchers work with Artificial life is devoted to the <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/">Artificial Life: A cross disciplinary field of research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.risingkashmir.com/</p>



<p>Artificial life may be labeled software, hardware, or wetware, depending on the type of media researchers work with</p>



<p>Artificial life is devoted to the study and creation of life-like structures in various media (computational, biochemical, mechanical, or combinations of these). A central aim is to model and even realize emergent properties of life, such as self-reproduction, growth, development, evolution, learning, and adaptive behavior. Researchers of artificial life also hope to gain general insights into self-organizing systems, and to use the approaches and principles in technology development.</p>

<h2 class="wp-block-heading">Evolution of research</h2>

<p>The historical and theoretical roots of the field are manifold. These roots include:</p>

<ol>
<li>Early attempts to imitate the behavior of humans and animals through the invention of mechanical automata in the sixteenth century.</li>
<li>Cybernetics as the study of general principles of informational control in machines and animals.</li>
<li>Computer science as theory, and the idea of abstract equivalence between various ways to express the notion of computation, including physical instantiations of systems performing computations.</li>
<li>Computer science as a set of technical practices and computational architectures.</li>
<li>Artificial intelligence (AI) and robotics.</li>
</ol>

<p>Despite the field&#8217;s long history, the first international conference for artificial life was not held until 1987. Computer scientist C. G. Langton, who sketched a future synthesis of the field&#8217;s various roots and formulated important elements of a research program, organized the conference. As in artificial intelligence research, some areas of artificial life research are mainly motivated by the attempt to develop more efficient technological applications using biologically inspired principles. Examples of such applications include modeling architectures to simulate complex adaptive systems, as in traffic planning, and biologically inspired immune systems for computers. Other areas of research are driven by theoretical questions about the nature of emergence, the origin of life, and forms of self-organization, growth, and complexity.</p>

<h2 class="wp-block-heading">The media of artificial life</h2>

<p>Artificial life may be labeled software, hardware, or wetware, depending on the type of media researchers work with. Software artificial life is rooted in computer science and rests on the idea that life is characterized by its form, or forms of organization, rather than by its constituent material. Thus, &#8220;life&#8221; may be realized in some form (or medium) other than carbon chemistry, such as in a computer&#8217;s central processing unit, in a network of computers, or as computer viruses spreading through the Internet. One can build a virtual ecosystem and let small component programs represent species of prey and predator organisms competing or cooperating for resources like food.</p>

<p>The difference between this type of artificial life and the ordinary scientific use of computer simulations is that, with the latter, the researcher attempts to create a model of a real biological system (e.g., fish populations of the Atlantic Ocean) and to base the description upon real data and established biological principles. The researcher tries to validate the model to make sure that it represents aspects of the real world. Conversely, an artificial life model represents biology in a more abstract sense; it is not a real system, but a virtual one, constructed for a specific purpose, such as investigating the efficiency of an evolutionary process of a Lamarckian type (based upon the inheritance of acquired characters) as opposed to Darwinian evolution (based upon natural selection among randomly produced variants). Such a biological system may not exist anywhere in the real universe.</p>

<p>Artificial life investigates &#8220;the biology of the possible&#8221; to remedy one of the inadequacies of traditional biology, which is bound to investigate how life actually evolved on Earth but cannot describe the borders between possible and impossible forms of biological processes. For example, an artificial life system might be used to determine whether it is only by historical accident that organisms on Earth have the universal genetic code that they have, or whether the code could have been different.</p>

<p>It has been much debated whether virtual life in computers is nothing but a model at a higher level of abstraction, or whether it is a form of genuine life, as some artificial life researchers maintain. In its computational version, this claim implies a form of Platonism whereby life is regarded as a radically medium-independent form of existence, similar to futuristic scenarios of disembodied forms of cognition and AI that may be downloaded to robots. In this debate, classical philosophical issues about dualism, monism, materialism, and the nature of information are at stake, and there is no clear-cut demarcation between science, metaphysics, and issues of religion and ethics. If it really is possible to create genuine life &#8220;from scratch&#8221; in other media, the ethical concerns related to this research are intensified: in what sense can the human community be said to be in charge of creating life de novo by non-natural means?</p>

<p>Hardware artificial life refers to small animal-like robots, usually called animats, that researchers build and use to study the design principles of autonomous systems or agents. The functionality of an agent (a collection of modules, each with its own domain of interaction or competence) is an emergent property of the intensive interaction of the system with its dynamic environment. The modules operate quasi-autonomously and are solely responsible for the sensing, modeling, computing or reasoning, and motor control necessary to achieve their specific competence. Direct coupling of perception to action is facilitated by reasoning methods that operate on representations close to the information of the sensors.</p>

<p>This approach holds that to build an intelligent system, its representations must be grounded in the physical world. Representations need not be explicit and stable, but they must be situated and &#8220;embodied.&#8221; The robots are thus situated in a world; they do not deal with abstract descriptions, but with the environment that directly influences the behavior of the system. In addition, the robots have &#8220;bodies&#8221; and experience the world directly, so that their actions have immediate feedback upon the robot&#8217;s own sensations. Computer-simulated robots, on the other hand, may be &#8220;situated&#8221; in a virtual environment, but they are not embodied. Hardware artificial life has many industrial and military technological applications.</p>

<p>Wetware artificial life comes closest to real biology. The scientific approach involves conducting experiments with populations of real organic macromolecules (combined in a liquid medium) in order to study their emergent self-organizing properties. An example is the artificial evolution of ribonucleic acid (RNA) molecules with specific catalytic properties. (This research may be useful in a medical context or may help shed light on the origin of life on Earth.) Research into RNA and similar scientific programs, however, often takes place in molecular biology, biochemistry, combinatorial chemistry, and other carbon-based chemistries. Such wetware research does not necessarily share the commitment, often assumed by researchers in software artificial life, to the idea that life is a medium-independent form of existence.</p>

<p>Thus wetware artificial life is concerned with the study of self-organizing principles in &#8220;real chemistries.&#8221; In theoretical biology, &#8220;autopoiesis&#8221; is a term for the specific kind of self-maintenance produced by networks of components that produce their own components and the boundaries of the network, in processes that resemble organizationally closed loops. Such systems have been created artificially from chemical components not known in living organisms.</p>

<h2 class="wp-block-heading">Conclusion</h2>

<p>Questions of theology are rarely discussed in artificial life research, but the very idea of a human researcher &#8220;playing God&#8221; by creating a virtual universe for doing experiments (in the computer or the test tube) with the laws of growth, development, and evolution shows that some motivation for scientific research may still be implicitly connected to religious metaphors and modes of thought.</p>
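The virtual prey-predator ecosystem mentioned above can be sketched in a few lines of code. This is only an illustration of the idea: the update rule is a simple discrete predator-prey (Lotka-Volterra-style) step, and every parameter value below is an invented assumption, not taken from any published model.

```python
def step(prey, pred, growth=0.02, predation=0.0004,
         efficiency=0.00002, death=0.01):
    """Advance the virtual ecosystem by one time step."""
    new_prey = prey + growth * prey - predation * prey * pred
    new_pred = pred + efficiency * prey * pred - death * pred
    # Populations cannot go negative.
    return max(new_prey, 0.0), max(new_pred, 0.0)

# Start slightly away from the equilibrium point so the populations move.
prey, pred = 550.0, 45.0
history = []
for t in range(200):
    history.append((prey, pred))
    prey, pred = step(prey, pred)

print(round(prey, 1), round(pred, 1))
```

Running the loop shows the two coupled populations rising and falling in response to each other, a simple instance of the emergent dynamics such systems are built to study.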
<p>The post <a href="https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/">Artificial Life: A cross disciplinary field of research</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-life-a-cross-disciplinary-field-of-research/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Rethinking The Artificial Intelligence Race – Analysis</title>
		<link>https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/</link>
					<comments>https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 01 Mar 2021 06:41:06 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[analysis]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[Minds]]></category>
		<category><![CDATA[Race]]></category>
		<category><![CDATA[Rethinking]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13133</guid>

					<description><![CDATA[<p>Source &#8211; https://www.eurasiareview.com/ Artificial intelligence (AI) has become a buzzword in technology in both civilian and military contexts. With interest comes a radical increase in extravagant promises, wild speculation, <a class="read-more-link" href="https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/">Rethinking The Artificial Intelligence Race – Analysis</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.eurasiareview.com/</p>



<p>Artificial intelligence (AI) has become a buzzword in technology in both civilian and military contexts. With interest comes a radical increase in extravagant promises, wild speculation, and over-the-top fantasies, coupled with funding to attempt to make them all possible. In spite of this fervor, AI technology must overcome several hurdles: it is costly, susceptible to data poisoning and bad design, difficult for humans to understand, and tailored for specific problems. No amount of money has eradicated these challenges, yet companies and governments have plunged headlong into developing and adopting AI wherever possible. This has bred a desire to determine who is &#8220;ahead&#8221; in the AI &#8220;race,&#8221; often by examining who is deploying or planning to deploy an AI system. But given the many problems AI faces as a technology, its deployment is less of a clue about its quality and more of a snapshot of the culture and worldview of the deployer. Instead, the AI race is best measured not by looking at AI deployment but by taking a broader view of the underlying scientific capacity to produce it in the future.</p>



<h2 class="wp-block-heading" id="h-ai-basics-the-minds-we-create"><strong>AI Basics: The Minds We Create</strong></h2>



<p>AI is both a futuristic fantasy and an omnipresent aspect of modern life. Artificial intelligence is a wide term that broadly encompasses anything that simulates human intelligence. It ranges from the narrow AI already present in our day-to-day lives that focuses on one specific problem (chess-playing programs, email spam filters, and Roombas) to the general artificial intelligence that is the subject of science fiction (Rachel from <em>Blade Runner</em>, R2-D2 in <em>Star Wars</em>, and HAL 9000 in <em>2001: A Space Odyssey</em>). Even the narrow form that we currently have, and continually improve, can have significant consequences for the world by compressing time scales for decisions, automating repetitive menial tasks, sorting through large masses of data, and optimizing human behavior. The dream of general artificial intelligence has been long deferred and is likely to remain elusive if not impossible, and most progress remains with narrow AI. As early as the 1950s, researchers were conceptualizing thinking machines and developed rudimentary versions of them that evolved into &#8220;simple&#8221; everyday programs, like computer opponents in video games.</p>



<p>Machine learning followed quickly, but underwent a renaissance in the early 21st century, when it became the most common method of developing AI programs, to the extent that it is now nearly synonymous with AI. Machine learning creates algorithms that allow computers to improve by consuming large amounts of data and using past &#8220;experience&#8221; to guide current and future actions. This can be done through supervised learning, where humans provide correct answers to teach the computer; unsupervised learning, where the machine is given unlabeled data to find its own patterns; and reinforcement learning, where the program uses trial and error to solve problems and is rewarded or penalized based on its decisions. Machine learning has produced many of the startling advances in AI over the last decade, such as drastic improvements to facial recognition and self-driving cars, and has given birth to a method that seeks to use the lessons of biology to create systems that process data in a manner loosely analogous to brains: deep learning. Deep learning is characterized by artificial neural networks, in which data is broken down and examined by &#8220;neurons&#8221; that each handle a specific question (e.g., whether an object in a picture is red) and report how confident they are in their assessments; the network compiles these answers into a final assessment.</p>
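The "neuron" described above can be sketched very compactly: it takes feature inputs, applies weights, and reports a confidence between 0 and 1 via a sigmoid. The weights and the toy "is this red?" question below are hand-picked for illustration, not learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid.

    The sigmoid squashes any real number into (0, 1), which is why the
    output can be read as a confidence in a yes/no question.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Toy question: "is this image region red?" given (redness, brightness).
confidence = neuron([0.9, 0.4], weights=[4.0, -1.0], bias=-1.5)
print(round(confidence, 3))  # well above 0.5: the neuron "thinks" it is red
```

A real network stacks many such units in layers and learns the weights from data; this single unit only shows the mechanism.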



<p>But despite the advances that AI has undergone since the machine learning renaissance and its nearly limitless theoretical applications, it remains opaque, fragile, and difficult to develop.</p>



<h2 class="wp-block-heading" id="h-challenges-the-human-element"><strong>Challenges: The Human Element</strong></h2>



<p>The way that AI systems are developed naturally creates doubts about their ability to function in untested environments, namely the requirement for large amounts of input data, the necessity that the data be nearly perfect, and the effects of the preconceived notions of the systems&#8217; creators. First, lack of, or erroneous, data is one of the largest challenges, especially when relying on machine learning techniques. To teach a computer to recognize a bird, it must be fed thousands of pictures to &#8220;learn&#8221; a bird&#8217;s distinguishing features, which naturally limits use in fields with few examples. Additionally, if even a tiny portion of the data is incorrect (as little as 3%), the system may develop incorrect assumptions or suffer drastic decreases in performance. Finally, the system may also recreate assumptions and prejudices&#8212;racist, sexist, elitist, or otherwise&#8212;from extant data that already contains inherent biases, such as resume archives or police records. These could also be coded in as programmers inadvertently impart their own cognitive biases into the machine learning algorithms they design.</p>
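The data-quality point can be made concrete with a toy example: a nearest-centroid classifier trained on a handful of 1-D examples. Mislabeling a single training point, a small fraction of the data, shifts a class centroid enough to flip a prediction. All numbers here are invented purely for illustration.

```python
def centroid(points):
    """Mean of a list of 1-D training examples."""
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    """Assign x to whichever class centroid is nearer."""
    if abs(x - centroid(class_a)) < abs(x - centroid(class_b)):
        return "a"
    return "b"

# Clean training data: class "a" clusters near 1.0, class "b" near 5.0.
clean_a, clean_b = [1.0, 1.2, 0.8], [5.0, 5.2, 4.8]

# Poisoned copy: one "b" example (5.0) has been mislabeled as "a",
# dragging the "a" centroid from 1.0 up to 2.0.
poisoned_a, poisoned_b = [1.0, 1.2, 0.8, 5.0], [5.2, 4.8]

x = 3.3
print(classify(x, clean_a, clean_b))        # -> "b"
print(classify(x, poisoned_a, poisoned_b))  # -> "a": one bad label flipped it
```

One flipped label out of six examples changed the model's answer for the same input, which is the mechanism behind the data-poisoning concern.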



<p>This propensity for deep-seated decision-making problems, which may only become evident well after development, will prove problematic to those that want to rely heavily on AI, especially concerning issues of national security. Because of the inherent danger of ceding critical functions to untested machines, plans to deploy AI programs should not be seen primarily as a reflection of their own quality, but of an organization’s culture, risk tolerance, and goals.</p>



<p>The acceptability of some degree of uncertainty also exacerbates the difficulties in integrating AI with human overseers. One option is a human-in-the-loop system, where human overseers are integrated throughout the decision process. Another is a human-on-the-loop system, where the AI remains nearly autonomous with only minor human oversight. In other words, organizations must decide whether to give humans the ability to override a machine&#8217;s possibly better decision that they cannot understand. The alternative is to cede the human oversight that might prevent disasters obvious to organic minds. Naturally, the choice will depend on the stakes: militaries may be much more willing to let a machine control leave schedules without human guidance than anti-missile defenses.</p>
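The structural difference between the two oversight modes can be sketched as a small control function. The function and parameter names here are invented for illustration; `approve` and `veto` stand in for whatever human-review interface an organization actually builds.

```python
def run(action, mode, approve, veto):
    """Execute an AI-proposed action under a given oversight mode."""
    if mode == "in-the-loop":
        # A human must approve before anything happens at all.
        return action() if approve(action) else "blocked"
    if mode == "on-the-loop":
        # The action executes autonomously; a human may only veto after.
        result = action()
        return "rolled back" if veto(action) else result
    raise ValueError("unknown oversight mode: " + mode)

action = lambda: "battery re-aimed"
never = lambda a: False  # a human reviewer who would say no / not intervene

in_loop = run(action, "in-the-loop", approve=never, veto=never)
on_loop = run(action, "on-the-loop", approve=never, veto=never)
print(in_loop)  # "blocked": nothing executed without approval
print(on_loop)  # "battery re-aimed": it already happened before review
```

The contrast in the last two lines is the whole risk-tolerance argument in miniature: under on-the-loop oversight the action executes even when a reviewer would have refused it.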



<p>Again, as with doubt about decision integrity, the manner in which an organization integrates AI into the decision-making process can tell us a great deal. Having a human-in-the-loop system signals that an organization would like to improve the efficiency of a system considered mostly acceptable as is. A human-on-the-loop system signals greater risk tolerance, but also betrays a desire to exert more effort to catch up to, or surpass, the state of the art in the field.</p>



<h2 class="wp-block-heading" id="h-the-global-ai-race-measuring-the-unmeasurable"><strong>The Global AI Race: Measuring the Unmeasurable</strong></h2>



<p>Research and development funding is a key component of scientific advances in the modern world, and is often relied on as a metric to chart progress in AI. The connection is often specious, however; the scientific process is filled with dead ends, ruined hypotheses, and specific research questions with no broader significance. This last point is particularly salient to artificial intelligence because AI applications are tailored, requiring a different design for each problem they tackle. AI that directs traffic, for example, is completely worthless at driving cars. For especially challenging questions (e.g., planning nuclear strategy), development is an open-ended financial commitment with no promise of results.</p>



<p>It becomes difficult, therefore, to accurately assess achievement by simply using the amount spent on a project as a proxy for progress. Perhaps money is being spent on dead ends, an incorrect hypothesis, or even to fool others into thinking that progress is being made. Instead, we should see money as a reflection of what the spender values. Project spending then is not an effective metric of the progress of AI development, but of how important a research question is to the one asking it.</p>



<p>But that importance provides a value for analysis, regardless of its inapplicability to measuring the AI race: the decision-making process can speak volumes about the deployer&#8217;s priorities, culture, risk tolerance, and vision. Ironically, the manner in which AI is deployed says far more about the political, economic, and social nature of the group deploying it than it does about technological capability or maturity. In that way, deployment plans offer useful information for others. This is particularly valid in examinations of government plans. Examination of plans has produced insights such as using Chinese AI documents to deduce where China sees weakness in its own IT economy, finding that banks overstate the use of chatbots to appear convenient to their customers, or noting that European documents attempt to create a distinctively European approach to the development of AI in both style and substance. It is here that examinations of AI deployment plans offer their real value.</p>



<p>There are instead much better ways to measure progress in AI. While technology rapidly changes, traditional metrics of scientific capacity provide a more nuanced base from which to measure AI and are harder to manipulate, which makes them more effective than measuring the outputs of AI projects. The most relevant include: scientists as a proportion of population, papers produced and number of citations, research and development spending generally (as opposed to the focus on specific projects), and number of universities and STEM students. Measuring any scientific process is naturally fraught with peril due to the potential for dead-end research, but taken broadly these metrics give a far better picture of the ability of a state or organization to innovate in AI technology. Multiple metrics should always be used, however; any focus on a single metric (e.g., research spending) makes the system just as easy to game as relying on AI deployment does. Such a narrow focus also distorts the view of the AI landscape. Consider, for example, the intense insecurity over the position of the United States despite its continuing leadership in terms of talent, number of papers cited, and quality of universities.</p>
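The multi-metric approach suggested above can be sketched as a simple composite index: normalize each capacity indicator to a common scale across countries, then average, so no single figure dominates or can be gamed alone. The country names and every number below are entirely invented for illustration.

```python
metrics = {
    "Avalonia": {"researchers_per_m": 4500, "cited_papers": 90000, "rd_spend_bn": 120},
    "Borduria": {"researchers_per_m": 2100, "cited_papers": 60000, "rd_spend_bn": 300},
    "Cascadia": {"researchers_per_m": 5200, "cited_papers": 20000, "rd_spend_bn": 40},
}

def composite_scores(metrics):
    """Min-max normalize each indicator to [0, 1], then average equally."""
    names = list(next(iter(metrics.values())))
    scores = {country: 0.0 for country in metrics}
    for m in names:
        values = [metrics[c][m] for c in metrics]
        lo, hi = min(values), max(values)
        for c in metrics:
            scores[c] += (metrics[c][m] - lo) / (hi - lo) / len(names)
    return scores

scores = composite_scores(metrics)
# Borduria leads on raw R&D spending alone, but the broad-based
# Avalonia comes out ahead once talent and citations count too.
print(max(scores, key=scores.get))
```

Note how the ranking by the composite differs from the ranking by spending alone, which is exactly the distortion the paragraph warns against.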



<h2 class="wp-block-heading" id="h-recharging-the-scientific-base"><strong>Recharging the Scientific Base</strong></h2>



<p>The U.S. National Security Commission on AI draft report notes, “The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership.” This statement encapsulates the nature of the AI race, and naturally, measuring it. If a government or a company wishes to take a leadership position in the race, the goal should be to stimulate the base that will produce it, not actively promote a specific project, division, or objective. This involves tried and true (but oft neglected) policies like promoting STEM education, training new researchers internally, attracting foreign talent with incentives, providing funding for research and development (especially if it forms a baseline for future work such as computer security or resilience), and ensuring that researchers have access to the IT hardware that they need through adequate manufacturing and procurement processes.</p>



<p>These suggestions are often neglected in the United States in particular because of intense politicization of domestic priorities such as education policy (affecting universities), immigration policy (affecting the attraction of foreign talent), and economic policy (affecting manufacturing and procurement). At the same time, it is not only about providing more funding but about streamlining the processes that enable scientific capacity. For example, the system for receiving scientific research grants is byzantine, time-consuming, and stifling, with different government agencies having overlapping funding responsibilities. Efforts should be made to ensure that applying for grants is not only easier, but that it promotes broader scientific inquiries. By solving problems like these, leaders invest in the components that will create the winning position in the AI race, and observers can determine who is making the strides to lead now, as well as in the future.</p>



<p>In the information age, the deployment of new technologies and their level of advancement have become key metrics in measuring power and effectiveness, but these are often flawed. Particularly for AI projects, research budgets, task assignments, and roles relative to humans demonstrate little about the state of the technology itself. Given the many fundamental problems with deploying AI, risk tolerance and strategic culture play much more of a role in determining how it is carried out: the more risk tolerant an organization is and the more it feels challenged by competitors, the more likely it is to adopt AI for critical functions. Rather than examining AI deployment plans to see which country or organization is &#8220;ahead,&#8221; we should use them to study the deployer&#8217;s worldview and strategic outlook. To determine pole position in the AI race, we should rely instead on overall scientific capacity.</p>
<p>The post <a href="https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/">Rethinking The Artificial Intelligence Race – Analysis</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/rethinking-the-artificial-intelligence-race-analysis/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How Can Artificial Intelligence Help Medicine?</title>
		<link>https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/</link>
					<comments>https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 25 Feb 2021 05:21:46 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Can]]></category>
		<category><![CDATA[How]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[medicine]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=13070</guid>

					<description><![CDATA[<p>Source &#8211; https://www.healthtechzone.com/ Thanks to the technology that we have at our hands these days, our everyday lives are much easier. We are able to complete multiple <a class="read-more-link" href="https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/">How Can Artificial Intelligence Help Medicine?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.healthtechzone.com/</p>



<p>Thanks to the technology that we have at our hands these days, our everyday lives are much easier. We are able to complete multiple tasks without even leaving the comfort of our homes, stay updated on the latest news, and much more. Medicine was one of the areas that progressed immensely due to technological advancements, and we couldn&#8217;t be happier about it.</p>



<p>People receive proper care, get accurate diagnoses, and the devices used make every service both effective and efficient. In the past couple of years, there has been talk of implementing artificial intelligence in the medical sector as a way to improve the industry and make it near-perfect. We want to discuss how AI can help this sector, but we are also going to take a look at one industry where AI is used to its fullest potential.</p>



<p><strong>Where Is AI Used Best?</strong></p>



<p>One of the industries that has managed to incorporate this technology and use it to its full potential is the online casino industry. Casino sites use artificial intelligence to protect their players, but also to ensure fair play. Let us explain how.</p>



<p>In order for every player to have equal chances of winning, online casinos use Random Number Generators (RNGs). An RNG produces an unpredictable outcome for each game round, giving all players the same odds. Casimba Casino is a good example of an online casino that features this alongside the security software we are about to explain.</p>
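The unpredictability that fairness depends on can be sketched with Python's standard-library `secrets` module, which draws from a cryptographically strong source. This is only an analogy: real casino RNGs are certified, audited systems, not a few lines of script.

```python
import secrets

rng = secrets.SystemRandom()  # OS-level CSPRNG; deliberately not reproducible

# A standard 52-card deck: 4 suits x 13 ranks.
deck = [rank + suit
        for suit in "SHDC"
        for rank in ["A", "2", "3", "4", "5", "6", "7",
                     "8", "9", "10", "J", "Q", "K"]]

rng.shuffle(deck)             # an ordering no player can predict
top_card = deck[0]

spin = rng.randrange(37)      # uniform 0-36, like a European roulette wheel
print(top_card, spin)
```

Because the draws come from the operating system's cryptographic source rather than a seeded pseudo-random generator, neither the house nor a player can reconstruct or anticipate the sequence.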



<p>The security system at the aforementioned casino, and at many other casino sites, is SSL encryption. This software encrypts all the data players submit, making it impractical for unwanted third parties to gain access. Both of these systems rely on well-tested algorithms to ensure safety and fair play.</p>



<p><strong>How Will AI Help Medicine?</strong></p>



<p>Through the use of algorithms, medicine can reap great benefits. Medical sites can use SSL certificates to keep their patients&#8217; data safe and out of harm&#8217;s way. Not only that, but this type of technology can also impact areas such as radiology, pathology, cardiology, and ophthalmology. How? The algorithms are ever-learning and can analyze data from various patients much faster than a doctor can. Then, by comparing a new case with past diagnoses, they can aid the doctor in pinpointing the exact treatment needed for that particular case.</p>
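The comparison step described above can be sketched as a nearest-neighbor lookup: represent each past patient as a vector of symptom severities and suggest the diagnosis of the most similar prior case. The records, symptoms, and conditions below are invented, and real clinical decision support involves far more than this; the sketch only shows the core idea.

```python
import math

# Invented historical cases: (symptom severities in [0, 1], diagnosis).
past_cases = [
    ({"fever": 0.9, "cough": 0.8, "fatigue": 0.4}, "influenza"),
    ({"fever": 0.1, "cough": 0.2, "fatigue": 0.9}, "anemia"),
    ({"fever": 0.7, "cough": 0.1, "fatigue": 0.6}, "sinus infection"),
]

def distance(a, b):
    """Euclidean distance between two symptom-severity records."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

def suggest(new_patient):
    """Return the diagnosis of the closest past case, as a hint for the doctor."""
    closest = min(past_cases, key=lambda case: distance(case[0], new_patient))
    return closest[1]

hint = suggest({"fever": 0.8, "cough": 0.7, "fatigue": 0.5})
print(hint)
```

The output is a suggestion to be weighed by a clinician, not a verdict, which matches the paragraph's framing of AI as an aid to the doctor rather than a replacement.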



<p>These AI systems would aid in practices like diagnosis, treatment protocol development, patient monitoring, drug development, personalized medicine, and care.</p>



<p>Being efficient in this line of work is very important. Additionally, AI has the potential to be less prone to errors, which is a big advantage, especially when it comes to determining the diagnosis and the right treatment.</p>



<p>The only problem is that AI is still in its development stages, and authorities do not yet trust it enough to give it such an important role. While basic artificial intelligence is used in some sectors, many believe it is still too early to fully incorporate it. But as technology keeps evolving, we do not doubt that AI will help make medicine much more effective.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/">How Can Artificial Intelligence Help Medicine?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-can-artificial-intelligence-help-medicine/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Artificial Intelligence? How Does AI Work?</title>
		<link>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/</link>
					<comments>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 19 Feb 2021 05:41:11 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[How]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[What]]></category>
		<category><![CDATA[work]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12931</guid>

					<description><![CDATA[<p>Source &#8211; https://www.business2community.com/ “Depending on who you ask, AI is either man’s greatest invention since the discovery of fire”, as Google’s CEO said at Google’s I/O 2017 <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">What is Artificial Intelligence? How Does AI Work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.business2community.com/</p>



<p>“Depending on who you ask, AI is either man’s greatest invention since the discovery of fire”, as Google’s CEO said at Google’s I/O 2017 keynote, or it is a technology that might one day make man superfluous. What’s inarguable is that major companies have embraced AI as if it were one of the most important inventions in history. In the US, Amazon, Apple, Microsoft, Facebook, IBM, SAS, and Adobe have all infused AI and machine learning throughout their operations, while in China the big four – Baidu, Alibaba, Tencent, Xiaomi – are coordinating with the government, each working on unique and almost siloed AI initiatives.</p>



<p>In her article Understanding Three Types of Artificial Intelligence, Anjali UJ explains “The term AI was coined by John McCarthy, an American computer scientist in 1956.” Anjali describes the following three types of AI:</p>



<ol class="wp-block-list"><li>Narrow Artificial Intelligence: AI that has been trained for a narrow task.</li><li>Artificial General Intelligence: AI with generalized cognitive abilities that can understand and reason about its environment the way humans do.</li><li>Artificial Super Intelligence: AI that surpasses human intelligence and allows machines to mimic human thought.</li></ol>



<p>AI is not a new technology; in reality, it’s decades old. In his MIT Technology Review article Is AI Riding a One-Trick Pony?, James Somers states, “Just about every AI advance you’ve heard of depends on a breakthrough that’s three decades old.” Recent advances in chip technology, as well as improvements in hardware, software, and electronics, have turned AI’s enormous potential into reality.</p>



<h2 class="wp-block-heading"><strong>Neural Nets</strong></h2>



<p>AI is founded on Artificial Neural Networks (ANN), or just “Neural Nets”, which are non-linear statistical data modelling tools used when the true nature of the relationship between input and output is unknown. In his article Machine Learning Applications for Data Center Optimization, Jim Gao describes neural nets as “a class of machine learning algorithms that mimic cognitive behavior via interactions between artificial neurons.” Neural nets search for patterns and interactions between features to automatically generate a best-fit model.</p>
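<p>As an illustration of the idea (not any production system), a single artificial neuron can learn a best-fit rule from examples alone. The Python sketch below trains one sigmoid neuron by gradient descent to reproduce the logical OR pattern; all names and numbers are illustrative.</p>

```python
import math
import random

def sigmoid(x):
    # The neuron's non-linear activation function.
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, lr=0.5):
    """Fit one artificial neuron (two weights + a bias) to labeled samples."""
    random.seed(0)
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = sigmoid(w[0] * x0 + w[1] * x1 + b)
            # Gradient of squared error w.r.t. the pre-activation sum.
            grad = (out - target) * out * (1 - out)
            w[0] -= lr * grad * x0
            w[1] -= lr * grad * x1
            b -= lr * grad
    return w, b

# The neuron is given only examples of logical OR -- no rule is predefined.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
predict = lambda x: sigmoid(w[0] * x[0] + w[1] * x[1] + b)
```

<p>After training, outputs above 0.5 can be read as “true”: the neuron has found a best-fit boundary between the two classes purely from the examples.</p>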



<p>They do not require the user to predefine a model’s feature interactions. Speech recognition, image processing, chatbots, recommendation systems, and autonomous software agents are common examples of machine learning. There are three types of training in neural networks: supervised, which is the most common, as well as unsupervised training and reinforcement learning. AI can be broken down into three areas:</p>



<h2 class="wp-block-heading"><strong>Machine Learning</strong></h2>



<p>A branch of computer science, machine learning explores the composition and application of algorithms that learn from data. These algorithms build models based on inputs and use those results to predict or determine actions and results, rather than following strict instructions.</p>



<p>In supervised learning, the computer is provided with example inputs as well as the desired outputs, and the goal is to learn a general rule that maps inputs to outputs. With unsupervised learning, however, labeled data isn’t provided to the learning algorithm, and it must find structure in the input on its own. In reinforcement learning, the computer utilizes trial and error to solve a problem. Like Pavlov’s dog, the computer is rewarded for the good actions it performs, and the goal of the program is to maximize reward.</p>
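<p>The reinforcement idea in particular can be sketched in a few lines. The hypothetical agent below learns, purely by trial and error, which of two slot-machine-style “levers” pays off more often; the payout probabilities are invented for illustration and are hidden from the agent.</p>

```python
import random

# Epsilon-greedy bandit: mostly exploit the best-known lever,
# occasionally explore, and update estimates from observed rewards.
random.seed(1)
true_payout = {"A": 0.3, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}     # the agent's learned value per lever
pulls = {"A": 0, "B": 0}

for step in range(2000):
    if random.random() < 0.1:
        arm = random.choice(["A", "B"])          # explore
    else:
        arm = max(estimates, key=estimates.get)  # exploit
    reward = 1 if random.random() < true_payout[arm] else 0
    pulls[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]
```

<p>No labels are ever provided; the reward signal alone steers the agent toward lever B, which is exactly the reward-maximization loop described above.</p>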



<h2 class="wp-block-heading"><strong>Deep learning</strong></h2>



<p>A subset of machine learning, deep learning utilizes multi-layered neural nets to perform classification tasks directly from image, text, and/or sound data. In some cases, deep learning models are already exceeding human-level performance. Google Meet’s ability to transcribe a human voice during a live conference call is an example of deep learning’s impressive capabilities.</p>



<p>ML and deep learning are useful for personalized marketing, customer recommendations, spam filtering, fraud detection, network security, optical character recognition (OCR), computer vision, voice recognition, predictive asset maintenance, sentiment analysis, language translation, and online search, among others.</p>



<h2 class="wp-block-heading"><strong>7 Patterns of AI</strong></h2>



<p>In her Forbes article The Seven Patterns of AI, Kathleen Walch lays out a theory that, regardless of the application, there are seven commonalities to all AI applications. These are “hyperpersonalization, autonomous systems, predictive analytics and decision support, conversational/human interactions, patterns and anomalies, recognition systems, and goal-driven systems.” Walch adds that, while each pattern might require its own programming and pattern recognition, the types can be combined with one another, though each follows its own fairly standard set of rules.</p>



<p>The ‘Hyperpersonalization Pattern’ can be boiled down to the slogan, ‘Treat each customer as an individual’. ‘Autonomous systems’ will reduce the need for manual labor. Predictive analytics portends “some future value for data, predicting behavior, predicting failure, assisted problem resolution, identifying and selecting best fit, identifying matches in data, optimization activities, giving advice, and intelligent navigation,” says Walch. The ‘Conversational Pattern’ includes chatbots, which allow humans to communicate with machines via voice, text, or image.</p>



<p>The ‘Patterns and Anomalies’ type utilizes machine learning to discern patterns in data and attempts to discover higher-order connections between data points, explains Walch. The recognition pattern helps identify and determine objects within image, video, audio, text, or other highly unstructured data, notes Walch. The ‘Goal-Driven Systems Pattern’ utilizes the power of reinforcement learning to help computers beat humans at some of the most complex games imaginable, including&nbsp;<em>Go&nbsp;</em>and&nbsp;<em>Dota 2</em>, a complicated multiplayer online battle arena video game.</p>



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>A few years ago, the AI hype had reached such a fever pitch that companies only had to add ‘AI’, ‘ML’, or ‘Deep Learning’ to their pitch decks and funding flooded through the door. Still, businesses are investing in AI-powered solutions like AIOps to reduce IT operations costs. Today, investors are a little wiser to the fact that not all that glitters is AI gold, and a lot of companies that pitched themselves as AI experts really didn’t know the difference between a neural net and a&nbsp;<em>k</em>-means algorithm.</p>



<p>Jumping head-first into AI is a recipe for disaster. Only “1 in 3 AI projects are successful and it takes more than 6 months to go from concept to production, with a significant portion of them never making it to production—creating an AI dilemma for organizations,” says Databricks. Not only is AI old, it is also a difficult technology to implement. Anyone delving into AI needs a strong understanding of the technology: what it is, where it came from, and what limitations might hold it back. Although AI is exceptional technology, the waters are deep, and it is far from the panacea that many software companies claim it is. AI has endured not one but two AI winters. CEOs looking to make a substantial investment in AI should be well aware of the old saying that ‘a fool and his money are easily parted’, as that fool could be an AI fool, too.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/">What is Artificial Intelligence? How Does AI Work?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-artificial-intelligence-how-does-ai-work/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MAKING DATA CENTER SMART: HOW ARTIFICIAL INTELLIGENCE HELPS?</title>
		<link>https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/</link>
					<comments>https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 18 Feb 2021 04:37:33 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[center]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[Smart]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12884</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/making-data-center-smart-how-artificial-intelligence-helps/ As data centers become enabler to a nation’s economy, employing artificial intelligence can yield higher benefits Artificial Intelligence (AI) plays a pivotal role in <a class="read-more-link" href="https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/">MAKING DATA CENTER SMART: HOW ARTIFICIAL INTELLIGENCE HELPS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/making-data-center-smart-how-artificial-intelligence-helps/</p>



<p>As data centers become enablers of a nation’s economy, employing artificial intelligence can yield higher benefits</p>



<p>Artificial Intelligence (AI) today plays a pivotal role in capturing, processing, and analyzing data at a much faster rate than ever. It is also becoming more efficient and useful for incorporating data elements and managing data centers.</p>



<p>With data becoming a prerequisite for sustaining almost every business operation, insight, and business result, data centers are at the crux of this digital transformation. These physical facilities, which house computers and equipment, power the information needs of the modern economy. Data centers provide seamless data backup and recovery while supporting cloud storage applications and transactions. Apart from boosting the economy, the data center ecosystem attracts many international tech companies to the nation. Moreover, the presence of data centers ensures an excellent investment climate and employment opportunities for the local community.</p>



<p>Despite their key role in driving the digital revolution, data centers are not without problems. According to Gartner analyst Dave Cappuccio, 80% of enterprises will shut down their traditional data centers by 2025. The figure is fitting considering the host of problems faced by traditional data centers: lack of readiness to upgrade, infrastructure challenges, environmental issues, and more. The remedy is leveraging artificial intelligence to enhance data center functions and infrastructure.</p>



<p>As per a Forbes Insights report from early 2020, artificial intelligence is poised to have a tremendous impact on data center management, productivity, and infrastructure. Meanwhile, AI technologies continue to offer data centers potential solutions for improving operations over the long term. In return, data centers enabled by AI’s accelerated computing capabilities would be able to process AI workloads more efficiently.</p>



<p>Data centers consume a lot of energy, so training an artificial intelligence network to improve power usage effectiveness (PUE) is a key goal. PUE, the ratio of total facility energy to the energy delivered to IT equipment, is an essential metric for measuring data center efficiency. In 2014, by deploying DeepMind AI in one of its facilities, Google was able to consistently achieve a 40% reduction in the amount of energy used for cooling, which equated to a 15% reduction in overall PUE overhead after accounting for electrical losses and other non-cooling inefficiencies. It also produced the lowest PUE the site had ever seen. DeepMind analyzes over 100 different variables within the data center to improve efficiency and reduce power consumption.</p>
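<p>The arithmetic behind PUE is simple enough to sketch. The figures below are illustrative, not Google’s actual numbers; they only show how a 40% cut in cooling energy flows through to the PUE ratio.</p>

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# An ideal data center scores 1.0. All kWh figures below are invented.
def pue(it_energy_kwh, cooling_kwh, other_overhead_kwh):
    total = it_energy_kwh + cooling_kwh + other_overhead_kwh
    return total / it_energy_kwh

before = pue(it_energy_kwh=1000, cooling_kwh=300, other_overhead_kwh=100)
# Apply a 40% reduction in cooling energy, as in the reported deployment:
after = pue(it_energy_kwh=1000, cooling_kwh=300 * 0.6, other_overhead_kwh=100)
```

<p>With these toy numbers, the ratio drops from 1.40 to 1.28: the IT load is unchanged, so every kilowatt-hour saved on cooling reduces the overhead portion of PUE directly.</p>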



<p>Data centers are also susceptible to various cyber threats. Cybercriminals are always finding new ways to obtain data from data centers or launch their next data breach attack. By learning normal network behavior and detecting cyber threats as deviations from that behavior, artificial intelligence proves resourceful again. AI algorithms can complement current Security Information and Event Management (SIEM) systems by analyzing incidents and inputs from multiple systems and devising an appropriate incident response.</p>
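<p>A minimal sketch of that “deviation from normal behavior” idea, with invented traffic numbers: learn a baseline of requests per minute, then flag any reading whose z-score exceeds a threshold. Real systems model many features with far richer statistics; this shows only the core principle.</p>

```python
import statistics

# Invented baseline: observed requests per minute during normal operation.
baseline = [98, 102, 97, 103, 99, 101, 100, 96, 104, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from normal."""
    z = abs(requests_per_minute - mean) / stdev
    return z > threshold
```

<p>A sudden spike to 250 requests per minute is flagged, while ordinary fluctuation around 100 passes silently; the same logic generalizes to any metric a SIEM feed can supply.</p>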



<p>In a data center, IT devices are frequently deployed to or removed from racks, which creates fragmented resources, such as U space, that are hard to monitor or manage and are easily wasted. By using intelligent hardware and IoT sensors, artificial intelligence enables effective data center infrastructure management that keeps a close eye on the data center and reduces repetitive work through automation. Here, data center managers can automate activities like temperature management, equipment status monitoring, floor security, fire hazard mitigation, ventilation, and cooling system management. Coupled with predictive analytics, automation also helps with predictive maintenance at data centers.</p>



<p>Further, this AI-based predictive analysis can help data centers distribute workloads across their many servers. As a result, it becomes easier to predict and manage data center loads, optimize server storage systems, find possible fault points in the system, improve processing times, and reduce risk factors much faster.</p>



<p>Recently, MIT researchers developed an AI system that automatically learns how to schedule data-processing operations across thousands of servers. The system was observed to complete key data center tasks about 20 to 30% faster, and twice as fast during high-traffic times. The researchers assert that this artificial intelligence system could enable data centers to handle the same workload at higher speeds, using fewer resources.</p>
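<p>For contrast, here is the kind of hand-written heuristic such a learned scheduler competes with: a greedy policy that always hands the next task to the least-loaded server. This is an illustrative baseline, not the MIT system, whose policy is learned rather than coded.</p>

```python
import heapq

def schedule(task_durations, n_servers):
    """Greedily assign each task to the currently least-loaded server."""
    # Min-heap of (current_load, server_id) so the lightest server pops first.
    loads = [(0, s) for s in range(n_servers)]
    heapq.heapify(loads)
    assignment = {}
    for task_id, duration in enumerate(task_durations):
        load, server = heapq.heappop(loads)
        assignment[task_id] = server
        heapq.heappush(loads, (load + duration, server))
    # Makespan: the finish time of the busiest server.
    return assignment, max(load for load, _ in loads)
```

<p>A learned scheduler improves on this by discovering, from experience, when breaking the greedy rule (say, holding a short task for a soon-to-be-free server) finishes the batch sooner.</p>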



<p>Additionally, through deep learning (DL) applications, AI can predict failures and outages ahead of time. For example, HPE’s artificial intelligence predictive engine helps identify and resolve bottlenecks in the data center. A survey of 200 companies highlighted that downtime results in losses surpassing US$26.5 billion, with the cost per minute of a network outage reaching approximately US$7,900. By monitoring server performance, network congestion, and disk utilization, AI can detect and predict data outages. Besides, it can implement mitigation strategies to help the data center recover from an outage, improving customer satisfaction and minimizing losses.</p>
<p>The post <a href="https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/">MAKING DATA CENTER SMART: HOW ARTIFICIAL INTELLIGENCE HELPS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/making-data-center-smart-how-artificial-intelligence-helps/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What if artificial intelligence decided how to allocate stimulus money?</title>
		<link>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/</link>
					<comments>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 13 Feb 2021 06:31:01 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[allocate]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[decided]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[stimulus]]></category>
		<category><![CDATA[What]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12877</guid>

					<description><![CDATA[<p>Source &#8211; https://www.livemint.com/ New Treasury Department software points the way. But research suggests that it’s impossible to show that an artificial &#8216;superintelligence&#8217; can be contained If, like me, you’re worried about <a class="read-more-link" href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">What if artificial intelligence decided how to allocate stimulus money?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.livemint.com/</p>



<p>New Treasury Department software points the way. But research suggests that it’s impossible to show that an artificial &#8216;superintelligence&#8217; can be contained</p>



<p>If, like me, you’re worried about how members of Congress are supposed to vote on a stimulus bill so lengthy and complex that nobody can possibly know all the details, fear not — the Treasury Department will soon be riding to the rescue.</p>



<p>But that scares me a little too.</p>



<p>Let me explain. For the past few months, the department’s Bureau of the Fiscal Service has been testing software designed to scan legislation and correctly allocate funds to various agencies and programs in accordance with congressional intent — a process known as issuing Treasury warrants. Right now, human beings must read each bill line by line to work out where the money goes. If the program can be made to work, the savings will be significant.</p>



<p>Alas, there’s a big challenge. Plenty of tools exist for extracting data from HTML files (and, of course, XML files), but Congress initially publishes legislation only in PDF form; XML or HTML versions often arrive only weeks later. As many a business knows, scraping data from PDFs generally requires human intervention, leading to the possibility of copy errors. The trouble is that PDFs have no standard data format. Even “simple&#8221; methods for extraction generally are designed to work only if the data in question is already presented within the PDF in tabular form.</p>
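<p>A small sketch makes the contrast concrete. The XML element names below are invented, not the real legislative schema: structured markup can be read directly with a standard parser, while text scraped from a PDF leaves only a brittle pattern match that a human must verify.</p>

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical structured publication: allocations read off in one pass.
bill_xml = """
<bill>
  <appropriation agency="Agency A" amount="1000000000"/>
  <appropriation agency="Agency B" amount="250000000"/>
</bill>
"""
allocations = {
    e.get("agency"): int(e.get("amount"))
    for e in ET.fromstring(bill_xml).iter("appropriation")
}

# Text scraped from a PDF has no such structure; a fragile regex is often
# the best available tool, which is why humans must check the result.
pdf_text = "There is appropriated to Agency A $1,000,000,000 for fiscal year 2021."
match = re.search(r"appropriated to (.+?) \$([\d,]+)", pdf_text)
scraped = (match.group(1), int(match.group(2).replace(",", "")))
```

<p>The XML path recovers every allocation mechanically; the text path breaks the moment the bill's wording shifts, which is the gap the Treasury software is trying to close.</p>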



<p>Treasury’s ambitious hope, however, is that its software, when fully operational, will be able to scan new legislation in its natural language form, figure out where the money is supposed to go and issue the appropriate warrants far more swiftly than humans could. The faster the warrants are issued, the sooner the agency that’s supposed to receive the money can start spending.</p>



<p>Pretty cool stuff.</p>



<p>Yet this snapshot of the future inspires a wicked train of thought. Suppose that the Treasury Department software — which you are free to describe as artificial intelligence or not, depending on your taste — is later replaced by a better program, then by a better one and finally by one that can mimic the working general intelligence of the human mind.</p>



<p>What’s to stop this future AI from deciding on its own that Congress was wrong to give another billion to Agency A when, in the judgment of the program, Agency B needs it more? The program makes a tiny adjustment in a gigantic spending bill, and given that nobody’s actually read it, nobody’s the wiser.</p>



<p>Sounds improbable, right? HAL 9000 meets “Person of Interest&#8221; meets Skynet?</p>



<p>Not so fast.</p>



<p>For technophiles like me, recent achievements in AI are exciting, even breathtaking. AI is credited with reorganizing supply chains to help overcome disruptions caused by the pandemic. Deep learning systems may be able to discover coronary plaques more accurately than clinicians.</p>



<p>So why worry? After all, most of those in the field, including my professors when I studied artificial intelligence as an undergraduate, are confident that tight programming will keep even the most advanced artificial intelligence from escaping the bounds set by its creators. (Think Isaac Asimov’s Laws of Robotics.)</p>



<p>But there have long been dissenters, even among the experts. The prospect of an out-of-control AI has haunted researchers in the field for almost as long as it’s haunted science fiction writers. One thinks of Joseph Weizenbaum’s “Computer Power and Human Reason,&#8221; published back in 1976, or even Norbert Wiener’s classic “God and Golem, Inc.,&#8221; based on lectures the author delivered in 1962.</p>



<p>All of which brings us to an unnerving paper published last month by six AI researchers who argue that it is impossible to show that an artificial “superintelligence&#8221; can be contained. The authors are an international group, representing universities in Germany, Spain, and Chile, as well as the U.S. According to their analysis, no matter how tightly an AI may be programmed, if it indeed possesses generalized reasoning skills “far surpassing&#8221; those of the most gifted humans, what they call “total containment&#8221; turns out to be incapable of formal proof.</p>



<p>Using what is known as computability theory, they hypothesize a superintelligent AI that incorporates a fundamental command never to harm humans. (Asimov again.) The programming will then require a function that decides whether a particular action will harm humans or not. They proceed to show that even if it’s possible “to articulate in a precise programming language&#8221; a perfect set of “control strategies&#8221; to implement this function, there’s no way to know for sure whether the strategies will in fact constrain the AI. (The proof, although technical, is rather elegant, and fun to read.)</p>



<p>Don’t get me wrong: I’m not arguing that the Treasury Department should abandon its quest for a system that extracts data from PDFs, any more than I’m suggesting that any of the countless researchers working on various aspects of AI should halt. I continue to find the prospect of true artificial intelligence as exciting as ever.</p>



<p>What concerns me, however, is the way that public critiques of AI tend to pick around the edges rather than go to the heart of the matter. We often charge nascent AI systems with enhancing bias — for example, by exacerbating rather than correcting disparities in the distribution of health care. Such issues are of undeniable public importance. But as the authors of the paper on computability remind us, you don’t have to be either a technophobe or a fan of apocalyptic steampunk sci-fi to see that the time for public conversation about the containability of AI is now, not later.</p>



<p>The post <a href="https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/">What if artificial intelligence decided how to allocate stimulus money?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-if-artificial-intelligence-decided-how-to-allocate-stimulus-money/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>THE ROLE OF ARTIFICIAL INTELLIGENCE AND ML IN INTELLIGENT ANALYTICS</title>
		<link>https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/</link>
					<comments>https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 27 Jan 2021 08:40:53 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Analytics]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[Intelligent]]></category>
		<category><![CDATA[ML]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12541</guid>

					<description><![CDATA[<p>Source &#8211; https://www.analyticsinsight.net/ AI and ML in intelligent analytics can drive the efficiency in business Analytics has been changing the way organizations operate for a long while. Since <a class="read-more-link" href="https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/">THE ROLE OF ARTIFICIAL INTELLIGENCE AND ML IN INTELLIGENT ANALYTICS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.analyticsinsight.net/</p>



<h1 class="wp-block-heading">AI and ML in intelligent analytics can drive efficiency in business</h1>



<p>Analytics has been changing the way organizations operate for a long while. As more organizations master their use of analytics, they are diving deeper into their data to increase efficiency, gain a greater competitive edge, and lift their bottom lines even further.</p>



<p>Analytics powers your business, but how much value are you truly harnessing from your data?</p>



<p>Artificial intelligence and machine learning can help. Artificial intelligence is a collection of technologies that extract patterns and valuable insights from huge datasets and then make forecasts based on that data. In fact, AI exists today that can help you get more value out of the data you already have, unify that data, and make forecasts about customer behavior based on it.</p>



<p>The adoption of AI has been driven not just by increased computational power and new algorithms but also by the growth of the data now accessible. For intelligence analysts, that proliferation of data means certain data overload. Human analysts simply cannot cope with that much information. They need help.</p>



<p>Intelligence leaders realize that AI can help them cope with this data deluge, yet they may also wonder what impact AI will have on their work and staff. For example, Twitter utilizes machine learning and AI to assess tweets in real time and score them on various metrics, surfacing the tweets with the most potential to drive engagement.</p>



<p>Google is researching virtually every part of machine learning and is making advances both in classical algorithms and in applications like speech translation, prediction systems, natural language processing, and search ranking.</p>



<p>Artificial intelligence plays a significant part in helping organizations handle data without sacrificing accuracy or speed.</p>



<p>With digital transformation widely embraced, the volume and size of data have expanded significantly, and dealing with such gigantic data isn’t simple. Artificial intelligence-fueled, data-driven innovation can help organizations manage such data to ensure relevance, value, security, and transparency. They can depend on AI data integration platforms to ingest, transform, and use information easily and accurately. Such platforms provide an end-to-end encrypted environment that protects data from unwanted intrusions and breaches without making it hard to work with.</p>



<p>Artificial intelligence and ML frameworks exist that use analytics data to help you foresee outcomes and effective courses of action. AI-enabled frameworks can analyze information from many sources and deliver forecasts about what works and what doesn’t. They can also dive deep into data about your customers and offer predictions about buyer preferences, marketing and sales channels, and product development strategies.</p>



<p>Artificial intelligence/ML technologies enable companies across various industries to harness value from customer information with ease. For instance, AI data integration solutions enable all business users to map information between different fields, making it simpler to incorporate the data into a unified database. Since these solutions can be used effortlessly by non-technical users, IT staff need not assume full responsibility, leaving them free to focus on other vital tasks.</p>



<p>These solutions use ML algorithms to provide predictions about data, which can further accelerate the data transformation process. Since decisions are made using algorithms, the chance of mistakes such as missing values, misrepresentations, and other errors is reduced. Hence, companies can use AI/ML tools to change the way they deliver customer value. They can plan and integrate data while maintaining data integrity, improving decision-making and boosting growth.</p>



<p>The advantages of AI and ML, however, can go far beyond time savings. After all, intelligence work is a never-ending process; there is always another challenge that demands attention. So saving time with AI won’t shrink staffs or trim intelligence budgets. Rather, the greater value of AI comes from what might be termed an “automation dividend”: the better ways analysts can use their time after these technologies reduce their workload.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/">THE ROLE OF ARTIFICIAL INTELLIGENCE AND ML IN INTELLIGENT ANALYTICS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-role-of-artificial-intelligence-and-ml-in-intelligent-analytics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hearing aids now come with artificial intelligence. What does that mean?</title>
		<link>https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/</link>
					<comments>https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 25 Jan 2021 09:28:53 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[aids]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Hearing]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[mean]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12532</guid>

					<description><![CDATA[<p>Source &#8211; https://www.healthyhearing.com/ Fantastical notions of all-powerful robots, straight out of Hollywood, may come to mind when you think about artificial intelligence (AI). But set aside thoughts <a class="read-more-link" href="https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/">Hearing aids now come with artificial intelligence. What does that mean?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://www.healthyhearing.com/</p>



<p>Fantastical notions of all-powerful robots, straight out of Hollywood, may come to mind when you think about artificial intelligence (AI). But set aside thoughts of the machines taking over: When it comes to your hearing aids, AI helps the devices function better.&nbsp;</p>



<p>For instance, AI can help wrangle one of the most challenging situations for anyone who struggles to hear: engaging in conversation in a crowded, loud space (think: a restaurant or cafe). As anyone who wears a hearing aid knows, simply making everything louder isn’t the solution. </p>



<p>From month to month, year to year, researchers are finding more ways to harness this technology and use it to improve hearing aids. Here’s what you need to know about how hearing aids use AI—and if a hearing aid with this functionality is right for you or a loved one. </p>



<h2 class="wp-block-heading">Key terms: AI, machine learning, deep neural network&nbsp;</h2>



<p>Put simply,&nbsp;<strong>artificial intelligence</strong>&nbsp;is defined as the ability of a machine to simulate human intelligence, performing a set of tasks that require “intelligent” decisions by following predetermined rules.&nbsp;</p>



<p>“Artificial intelligence is a very broad definition. Machine learning, neural network, deep learning, and all of those, fall under the AI umbrella,” says Issa M.S. Panahi, PhD, professor of electrical and computer engineering in the Erik Jonsson School of Engineering and Computer Science at the University of Texas at Dallas. </p>



<p>Through&nbsp;<strong>machine learning</strong>, a subset of AI, machines use algorithms (aka, a set of rules) to sort through giant amounts of data and make decisions or predictions.&nbsp;</p>



<p>Go one level deeper, and we get to the&nbsp;<strong>deep neural network (DNN)</strong>: This form of AI is set up to mimic the neural habits of the brain, and aims to respond the same way your brain would, without being explicitly programmed how to react in a given situation.&nbsp;</p>
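<p>For readers curious what “learning from examples rather than explicit rules” looks like in miniature, here is a deliberately tiny, purely illustrative sketch (a real hearing-aid DNN is vastly more complex): a single artificial neuron that learns a loudness threshold from labeled samples instead of being programmed with one.</p>

```python
# Illustrative only: a single artificial "neuron" (perceptron) that
# learns to separate two sound levels from labeled examples, rather
# than being explicitly programmed with a threshold.

def train_neuron(samples, labels, lr=0.1, epochs=50):
    """Adjust a weight and bias a little every time a prediction is wrong."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            error = y - pred            # 0 when correct, +/-1 when wrong
            w += lr * error * x
            b += lr * error
    return w, b

# Loudness readings (arbitrary units); 1 = "speech", 0 = "background hum"
samples = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
labels  = [0,   0,   0,   1,   1,   1]

w, b = train_neuron(samples, labels)
classify = lambda x: 1 if w * x + b > 0 else 0
print(classify(0.15), classify(0.85))  # prints: 0 1
```

<p>A deep neural network stacks many such units in layers, which is what lets it pick up patterns far subtler than a single loudness cutoff.</p>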



<p>You’re familiar with this technology if your inbox sorts emails into categories (important, promotional, etc.), if you take advantage of &#8220;what to watch next&#8221; recommendations on streaming services, or if you’ve marveled at self-parking cars. Some more mundane but important examples of deep learning include weather forecasting and credit card fraud protection. These tools have gotten much better in recent years thanks to deep learning. </p>



<h2 class="wp-block-heading">How hearing aids use AI&nbsp;</h2>



<p>“The AI that occurs in hearing aids has actually been going on for years, but it’s a slow burn to think about how that’s actually happened,” says Scott Young, AuD, CCC-A, owner of Hearing Solution Centers, Inc. in Tulsa, Okla. </p>



<p>Hearing aids used to be relatively simple, he notes. But when manufacturers introduced a technology known as wide dynamic range compression (WDRC), the devices began to make a few decisions based on what they heard, he says. </p>



<p>“Over the last several years, AI has come even further—it actually listens to what the environment does,” Scott says. And, it responds accordingly. Essentially, a DNN allows hearing aids to begin to mimic how your brain would hear sound if your hearing wasn’t impaired.&nbsp;&nbsp;</p>



<p>For hearing aids to work effectively, they need to adapt to a person’s individual hearing needs as well as all sorts of background noise environments, Panahi says. “AI, machine learning, and neural networks, are very good techniques to deal with such a complicated, nonlinear, multi-variable type of problem,” he says. </p>



<h2 class="wp-block-heading">What the research shows</h2>



<p>Researchers have been able to accomplish a lot with AI to date, when it comes to improving hearing.&nbsp;</p>



<p>For instance, researchers at the Perception and Neurodynamics Laboratory (PNL) at the Ohio State University trained a DNN to distinguish speech (what people want to hear) from other noise (such as humming and other background conversations), writes DeLiang Wang, professor of computer science and engineering at Ohio State University, in IEEE Spectrum. “People with hearing impairment could decipher only 29 percent of words muddled by babble without the program, but they understood 84 percent after the processing,” Wang writes. </p>



<p>And at the University of Texas at Dallas, Panahi, along with co-principal investigator Dr. Linda Thibodeau, used AI to create a smartphone app that can tell which direction speech is coming from. The app draws on models built from a massive library of sounds to identify and suppress background noise, so people hear better. Place a smartphone with the app on a table, or rest it in the GPS stand in your car, and “clean speech is transmitted to the hearing aid devices or earbuds,” Panahi says. </p>



<p>“The importance of AI is it overcom[es] a lot of issues that cannot be easily solved by a traditional mathematical approach for signal processing,” Panahi says. </p>



<h2 class="wp-block-heading">Neural-network powered hearing aids</h2>



<p>In recent years, major hearing aid manufacturers have been adding AI technology to their premium hearing aid models. For example, Widex&#8217;s Moment hearing aid uses AI and machine learning to create hearing programs based on a wearer&#8217;s typical environments.</p>



<p>And this January, Oticon introduced its newest hearing aid device, Oticon More™, the first hearing aid with an on-board deep neural network. Oticon More was trained—using 12 million-plus real-life sounds—so that people wearing it can better understand speech and the sounds around them.</p>



<p>In a complicated &#8220;sound scene&#8221;—picture a bustling airport or hospital emergency room—the Oticon More&#8217;s neural network receives a complicated mix of sounds, known as input. The DNN gets to work, first scanning the input and extracting simple sound elements and patterns. It then builds these elements together to recognize and make sense of what&#8217;s happening. Lastly, the hearing aids decide how to balance the sound scene, making sure the output is clean and ideally balanced for the person&#8217;s unique type of hearing loss. </p>
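<p>The scan, build, and decide flow described above can be mimicked in miniature. The sketch below is purely illustrative (a hand-written heuristic, not Oticon&#8217;s DNN): it extracts a simple element from each audio frame (its energy), combines those elements into scene context (a noise-floor estimate), and then rebalances by turning down noise-dominated frames.</p>

```python
# Hypothetical sketch mirroring the three stages described above.
# A real on-board DNN is far more sophisticated; this only shows the flow.

def rebalance(frames, gain=0.25):
    # Stage 1: extract a simple element from each frame - its mean energy
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    # Stage 2: combine elements into scene context - a median noise floor
    floor = sorted(energies)[len(energies) // 2]
    # Stage 3: decide per frame - keep speech-like (high-energy) frames,
    # attenuate frames at or below the estimated noise floor
    return [f if e > floor else [s * gain for s in f]
            for f, e in zip(frames, energies)]

frames = [[0.01] * 4, [0.9] * 4, [0.02] * 4]  # quiet hum, speech burst, hum
out = rebalance(frames)
print([f == o for f, o in zip(frames, out)])  # → [False, True, False]
```

<p>Only the middle, speech-like frame passes through untouched; the quiet frames are turned down, which is the same balancing decision the article attributes to the DNN, made here with a crude energy rule instead of learned patterns.</p>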



<p>This improvement is especially key for speech in noise, explained Donald J. Schum, PhD, Vice President of Audiology at Oticon, during the product launch event.</p>



<p>&#8220;Speech and other sounds in the environment are complicated acoustic wave forms, but with unique patterns and structures that are exactly the sort of data deep learning is designed to analyze,&#8221; he said. &#8220;We wanted our system to be able to find speech even when it&#8217;s embedded in background noise. And that&#8217;s happening in real-time and in an ongoing basis.&#8221;</p>



<h2 class="wp-block-heading">Do I need a hearing aid with&nbsp;AI?&nbsp;</h2>



<p>Think of hearing aids as existing on a spectrum, says Young—hearing aids range widely in price, and some at the lower end have fewer&nbsp;AI-driven bells and whistles, he says.&nbsp;</p>



<p>He points out that some patients may not need all the features—people who live alone or rarely leave the house, and don’t find themselves in crowded scenarios often, for instance, might not benefit from the functionality found in higher-end models.&nbsp;</p>



<p>But for anyone who is out and about a lot, especially in situations where there are big soundscapes, AI-powered features allow for an improved hearing experience.</p>



<h3 class="wp-block-heading">Listening effort is reduced</h3>



<p>What &#8220;improvement&#8221; looks like&nbsp;can be measured in a lot of ways, but one key indicator is&nbsp;memory recall, Schum explained.&nbsp;It&#8217;s not that the hearing aids like Oticon More literally improve a person&#8217;s&nbsp;memory, he explained,&nbsp;it&#8217;s that artificial intelligence&nbsp;helps people spend&nbsp;less time trying to make sense of the noise around them, a process known as &#8220;listening effort.&#8221;</p>



<p>When the listening effort is more natural, a person can focus more on the conversation and all the nuances conveyed within.</p>



<p>&#8220;It&#8217;s allowing the brain to work in the most natural way possible,&#8221; he said.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/">Hearing aids now come with artificial intelligence. What does that mean?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/hearing-aids-now-come-with-artificial-intelligence-what-does-that-mean/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Could Soon Provide Ethics Service To Companies Building AI Solutions</title>
		<link>https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/</link>
					<comments>https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 31 Aug 2020 07:16:16 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[AI solutions]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Google Could]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11328</guid>

					<description><![CDATA[<p>Source: republicworld.com Google could soon offer an ethics consultation service to companies building Artificial Technology solutions. According to a report in Wired, the search giant may soon offer the <a class="read-more-link" href="https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/">Google Could Soon Provide Ethics Service To Companies Building AI Solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: republicworld.com</p>



<p>Google could soon offer an ethics consultation service to companies building Artificial Intelligence solutions. According to a report in Wired, the search giant may soon offer the service to companies developing AI projects. The report suggests that Google might launch the service before the end of 2020; initially, it would offer only advice on tasks such as spotting racial bias or creating guidelines to govern AI projects. The American multinational may in the future also offer other services, such as audits of AI systems. </p>



<h3 class="wp-block-heading">Google&#8217;s own ethics-based controversies</h3>



<p>Google&#8217;s plan to offer ethics advice to companies building AI solutions is being mocked even before its official launch, and there is a reason for that: Google doesn&#8217;t have a good track record when it comes to its own ethics policy, having been mired in several controversies over the past few years.</p>



<p>In 2015, the company had to apologise for a technical glitch in its Photos app that identified Black people as gorillas. In 2018, the company faced backlash from its own employees after agreeing to develop surveillance software for the US defence department. </p>



<p>The California-based company in 2018 also reportedly tested a secret search engine it had developed for China, an authoritarian country where content is censored and monitored by the State. The company&#8217;s CEO, Sundar Pichai, later testified before the US Congress, where he admitted that Google was working on a search engine for China.</p>



<p>Following these controversies, the company issued a set of ethical principles for the use of its AI. Big companies pay Google for cloud computing services, and some of them are reportedly already seeking ethical guidance from the firm, which may have prompted the Alphabet Inc-owned company to dive into this business.&nbsp;</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/">Google Could Soon Provide Ethics Service To Companies Building AI Solutions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-could-soon-provide-ethics-service-to-companies-building-ai-solutions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Cognitive/Artificial Intelligence Systems Market Is Booming Worldwide &#124; IBM, Microsoft, Google</title>
		<link>https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/</link>
					<comments>https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 19 Aug 2020 07:44:06 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Artificial]]></category>
		<category><![CDATA[Cognitive]]></category>
		<category><![CDATA[Global Growth Trends]]></category>
		<category><![CDATA[GMA global research]]></category>
		<category><![CDATA[Intelligence Systems]]></category>
		<category><![CDATA[Primary Research:]]></category>
		<category><![CDATA[Secondary Research]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=11019</guid>

					<description><![CDATA[<p>SOURCE:-scientect A new Research Report published by GMA under the title Global Cognitive/Artificial Intelligence Systems Market (COVID 19 Version) can grow into the world’s most important market <a class="read-more-link" href="https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/">Cognitive/Artificial Intelligence Systems Market Is Booming Worldwide | IBM, Microsoft, Google</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>SOURCE:-scientect</p>



<p>A new research report published by GMA under the title Global Cognitive/Artificial Intelligence Systems Market (COVID-19 Version) covers a market that could grow into one of the world&#8217;s most important, one that has played a significant role in making progressive impacts on the global economy. The Global Cognitive/Artificial Intelligence Systems Market Report presents a dynamic view of market size, market outlook, and the competitive environment. The study is derived from primary and secondary research and consists of qualitative &amp; quantitative analysis. The main companies in this research are IBM, Microsoft, and Google.</p>



<p><strong>Primary Research:</strong></p>



<p>We interviewed various key sources on both the supply and demand sides in the course of the primary research to obtain qualitative and quantitative information for this report. Main supply-side sources include key industry members, subject matter experts from key companies, and consultants from major firms and organizations working in the Global Cognitive/Artificial Intelligence Systems Market.</p>



<p><strong>Secondary Research:</strong></p>



<p>Secondary research was performed to obtain crucial information about the industry supply chain, company financials, global company pools, and sector segmentation from bottom-up, regional, and technology-oriented perspectives. Secondary data were collected and analyzed to arrive at the total market size, which the primary research then confirmed.</p>



<p><strong>Some Key Research Questions &amp; answers:</strong></p>



<p>What is the impact of COVID-19 on the Global Cognitive/Artificial Intelligence Systems Market?<br>Before COVID-19 the Global Cognitive/Artificial Intelligence Systems Market size was XXX Million $, and after COVID-19 it is expected to grow at X% to XXX Million $.<br>Who are the top key players in the Global Cognitive/Artificial Intelligence Systems Market, and what are their priorities, strategies &amp; developments?<br>The list of competitors in this research is: IBM, Microsoft, Google<br>What are the types &amp; applications of the Global Cognitive/Artificial Intelligence Systems Market?<br>Applications covered in this report: Manufacturing, Healthcare, Consumer And Retail, Automotive, BFSI, Aerospace And Defence, and Others<br>Types covered in this research: Robotics, Consumer Electronics, Drones, Autonomous Cars, and Others</p>



<p>All percentage shares, splits, and breakdowns were determined using the secondary sources and confirmed through the primary sources. All parameters that may affect the market covered in this study have been extensively reviewed, researched through primary investigations, and analyzed to obtain the final quantitative and qualitative data. These key quantitative and qualitative insights were obtained through interviews with industry experts, including CEOs, vice presidents, directors, and marketing executives, as well as from the annual and financial reports of top market participants.</p>



<p><strong>Table of Content:</strong></p>



<p><strong>1 Report Summary</strong></p>



<p>1.1 Research Scope<br>1.2 Key Market Segments<br>1.3 Target Players<br>1.4 Market Analysis by Type: Robotics, Consumer Electronics, Drones, Autonomous Cars and Others<br>1.5 Market by Application: Manufacturing, Healthcare, Consumer And Retail, Automotive, BFSI, Aerospace And Defence and Others<br>1.6 Study Objectives<br>1.7 Years Considered</p>



<p><strong>2 Global Growth Trends</strong></p>



<p>2.1 Global Cognitive/Artificial Intelligence Systems Market Size<br>2.2 Global Cognitive/Artificial Intelligence Systems Market Growth Trends by Region<br>2.3 Industry Trends<br>3 Global Cognitive/Artificial Intelligence Systems Market Shares by Key Players<br>3.1 Global Cognitive/Artificial Intelligence Systems Market Size by Manufacturer<br>3.2 Key Players&#8217; Headquarters and Served Areas<br>3.3 Major Players&#8217; Products / Solutions / Services<br>3.4 Barriers to Entry in the Global Cognitive/Artificial Intelligence Systems Market<br>3.5 Mergers, Acquisitions and Expansion Plans</p>



<p>Continued…</p>



<p><br>GMA is a global research and market intelligence consulting organization uniquely positioned not only to identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events, and experience that help you turn goals into reality. Our understanding of the interplay between industry convergence, mega trends, technologies, and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the &#8220;Accurate Forecast&#8221; in every industry we cover, so our clients can reap the benefits of being early market entrants and accomplish their &#8220;Goals &amp; Objectives&#8221;.</p>
<p>The post <a href="https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/">Cognitive/Artificial Intelligence Systems Market Is Booming Worldwide | IBM, Microsoft, Google</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/cognitive-artificial-intelligence-systems-market-is-booming-worldwide-ibm-microsoft-google/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
