<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>machine learning (ML) Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/machine-learning-ml/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/machine-learning-ml/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Wed, 14 Aug 2024 06:47:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Artificial Intelligence: Definition and Types of Artificial Intelligence</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Wed, 14 Aug 2024 06:46:58 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[autonomous systems]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Ethical AI]]></category>
		<category><![CDATA[General AI]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[Narrow AI]]></category>
		<category><![CDATA[natural language processing (NLP)]]></category>
		<category><![CDATA[Superintelligent AI]]></category>
		<category><![CDATA[Symbolic AI]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=19040</guid>

					<description><![CDATA[<p>Introduction Artificial Intelligence (AI) is a broad field of computer science focused on creating systems capable of performing tasks that typically require human intelligence. These tasks include <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/">Artificial Intelligence: Definition and Types of Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[


<h2 class="wp-block-heading">Introduction</h2>



<p>Artificial Intelligence (AI) is a broad field of computer science focused on creating systems capable of performing tasks that typically require human intelligence. These tasks include reasoning, learning, problem-solving, perception, and language understanding. AI can be categorized into several types based on its capabilities, functions, and application domains. </p>



<h2 class="wp-block-heading">Types of Artificial Intelligence</h2>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="1024" data-id="19041" src="https://www.aiuniverse.xyz/wp-content/uploads/2024/08/DALL·E-2024-08-14-12.14.20-A-futuristic-landscape-illustrating-three-types-of-artificial-intelligence_-Narrow-AI-represented-by-a-humanoid-robot-analyzing-data-on-multiple-scree.webp" alt="" class="wp-image-19041" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2024/08/DALL·E-2024-08-14-12.14.20-A-futuristic-landscape-illustrating-three-types-of-artificial-intelligence_-Narrow-AI-represented-by-a-humanoid-robot-analyzing-data-on-multiple-scree.webp 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2024/08/DALL·E-2024-08-14-12.14.20-A-futuristic-landscape-illustrating-three-types-of-artificial-intelligence_-Narrow-AI-represented-by-a-humanoid-robot-analyzing-data-on-multiple-scree-300x300.webp 300w, https://www.aiuniverse.xyz/wp-content/uploads/2024/08/DALL·E-2024-08-14-12.14.20-A-futuristic-landscape-illustrating-three-types-of-artificial-intelligence_-Narrow-AI-represented-by-a-humanoid-robot-analyzing-data-on-multiple-scree-150x150.webp 150w, https://www.aiuniverse.xyz/wp-content/uploads/2024/08/DALL·E-2024-08-14-12.14.20-A-futuristic-landscape-illustrating-three-types-of-artificial-intelligence_-Narrow-AI-represented-by-a-humanoid-robot-analyzing-data-on-multiple-scree-768x768.webp 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<h3 class="wp-block-heading">1. <strong>Narrow AI (Weak AI)</strong></h3>



<p><strong>Definition</strong>: Narrow AI, also known as Weak AI, refers to artificial intelligence systems that are specialized and focused on performing a specific task or a set of closely related tasks.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Task-Specific</strong>: Designed to handle specific functions such as image recognition, language translation, or playing a game.</li>



<li><strong>Limited Scope</strong>: Operates within a predefined range and lacks the ability to generalize beyond its programmed tasks.</li>



<li><strong>No Self-Awareness</strong>: Cannot understand or reason outside its specific application.</li>
</ul>



<p><strong>Examples</strong>:</p>



<ul class="wp-block-list">
<li><strong>Voice Assistants</strong>: Siri, Alexa, Google Assistant. They can perform tasks like setting reminders or answering questions but cannot engage in conversations outside their designed capabilities.</li>



<li><strong>Recommendation Systems</strong>: Used by platforms like Netflix or Amazon to suggest products or movies based on user preferences and behavior.</li>



<li><strong>Autonomous Vehicles</strong>: Systems like Tesla’s Autopilot use machine learning to navigate roads but are limited to driving tasks and cannot engage in other activities.</li>
</ul>



<h3 class="wp-block-heading">2. <strong>General AI (Strong AI)</strong></h3>



<p><strong>Definition</strong>: General AI, or Strong AI, refers to an advanced form of AI that has the capability to understand, learn, and apply intelligence across a wide range of tasks, much like a human being. This is still a theoretical concept and has not yet been realized.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Broad Competence</strong>: Capable of performing any intellectual task that a human can.</li>



<li><strong>Contextual Understanding</strong>: Can understand and reason about diverse subjects and contexts.</li>



<li><strong>Adaptability</strong>: Can transfer knowledge from one domain to another and learn new tasks with minimal additional input.</li>
</ul>



<p><strong>Examples</strong>: As of now, there are no existing examples of General AI. It remains a subject of research and speculation, with ongoing debates about its potential development and implications.</p>



<h3 class="wp-block-heading">3. <strong>Superintelligent AI</strong></h3>



<p><strong>Definition</strong>: Superintelligent AI refers to a hypothetical AI that surpasses human intelligence across all fields, including creativity, general wisdom, and problem-solving. This concept is often discussed in the context of long-term future scenarios.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Superior Capability</strong>: Possesses cognitive abilities that are far beyond the best human minds.</li>



<li><strong>Potential Risks</strong>: Raises concerns about control, ethical implications, and the potential impact on society and humanity.</li>



<li><strong>Speculative Nature</strong>: Discussions around Superintelligent AI are largely theoretical and focus on its potential development and consequences.</li>
</ul>



<p><strong>Examples</strong>: No real-world examples exist. Superintelligent AI is often explored in science fiction and theoretical discussions about the future of AI.</p>



<h3 class="wp-block-heading">4. <strong>Reactive Machines</strong></h3>



<p><strong>Definition</strong>: Reactive machines are basic AI systems that operate purely on the present input without the ability to form memories or use past experiences.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Immediate Response</strong>: Reacts to specific inputs with predefined responses.</li>



<li><strong>No Learning</strong>: Does not learn from past interactions or experiences.</li>



<li><strong>Simple Design</strong>: Often simpler in design and implementation compared to more advanced AI systems.</li>
</ul>



<p><strong>Examples</strong>:</p>



<ul class="wp-block-list">
<li><strong>IBM’s Deep Blue</strong>: A chess-playing AI that defeated grandmaster Garry Kasparov. It used predefined strategies and calculations without learning from previous games.</li>



<li><strong>Basic Chatbots</strong>: Simple bots that provide scripted responses based on keywords or phrases.</li>
</ul>
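<p>The behaviour described above — a fixed mapping from the current input to a predefined response, with nothing remembered between calls — can be sketched in a few lines of Python. This is a toy illustration with invented keywords, not any production system:</p>

```python
# A reactive machine in miniature: a fixed mapping from the present input to a
# predefined response. Nothing is stored between calls, so nothing is learned.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "price": "Plans start at $10/month.",
}

def reactive_reply(message: str) -> str:
    """Respond to the present input only; past interactions play no role."""
    for keyword, reply in RESPONSES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I can only answer questions about hours or price."
```

<p>Because the lookup table never changes, the system gives the same answer to the same input forever — exactly the "no learning" property of reactive machines.</p>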



<h3 class="wp-block-heading">5. <strong>Limited Memory AI</strong></h3>



<p><strong>Definition</strong>: Limited memory AI systems have the ability to use past experiences to improve their performance and make better decisions over time. They can retain and learn from data but only within a specific context.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Experience-Based Learning</strong>: Uses historical data to inform current decision-making.</li>



<li><strong>Contextual Memory</strong>: Can remember and use past interactions within a specific domain.</li>



<li><strong>Adaptive</strong>: Capable of improving performance as more data becomes available.</li>
</ul>



<p><strong>Examples</strong>:</p>



<ul class="wp-block-list">
<li><strong>Self-Driving Cars</strong>: Utilize past driving data to make decisions about navigation and obstacle avoidance.</li>



<li><strong>Fraud Detection Systems</strong>: Learn from historical transaction data to identify patterns indicative of fraudulent behavior.</li>
</ul>
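<p>As a deliberately simplified illustration of the fraud-detection idea (all numbers and thresholds here are invented), a limited-memory system can keep a record of past transaction amounts and flag new ones that deviate sharply from what it has seen so far:</p>

```python
import statistics

class LimitedMemoryDetector:
    """Flags a transaction as suspicious when it sits far above the running
    statistics of past amounts, then adds it to memory and keeps learning."""

    def __init__(self, threshold: float = 3.0):
        self.amounts = []          # the "limited memory" of past experience
        self.threshold = threshold

    def check(self, amount: float) -> bool:
        suspicious = False
        if len(self.amounts) >= 5:  # need some history before judging
            mean = statistics.mean(self.amounts)
            stdev = statistics.pstdev(self.amounts) or 1.0
            suspicious = (amount - mean) / stdev > self.threshold
        self.amounts.append(amount)  # every transaction becomes experience
        return suspicious
```

<p>The key contrast with a reactive machine is the growing <code>self.amounts</code> list: each decision is informed by accumulated experience, and the system's judgement shifts as more data arrives.</p>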



<h3 class="wp-block-heading">6. <strong>Theory of Mind AI</strong></h3>



<p><strong>Definition</strong>: Theory of Mind AI aims to develop systems that can understand and simulate human emotions, beliefs, intentions, and mental states. This type of AI is still in the research phase.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Emotional Understanding</strong>: Able to recognize and respond to human emotions and intentions.</li>



<li><strong>Advanced Interaction</strong>: Facilitates more natural and intuitive interactions between humans and machines.</li>



<li><strong>Research Focus</strong>: Involves ongoing research to achieve a deeper level of human-like understanding.</li>
</ul>



<p><strong>Examples</strong>: No existing examples; the development of Theory of Mind AI is a goal for future AI advancements.</p>



<h3 class="wp-block-heading">7. <strong>Self-Aware AI</strong></h3>



<p><strong>Definition</strong>: Self-Aware AI refers to AI that has a sense of self and consciousness, including awareness of its own internal states and the ability to reflect on its actions and existence.</p>



<p><strong>Characteristics</strong>:</p>



<ul class="wp-block-list">
<li><strong>Self-Recognition</strong>: Has an awareness of its own state and existence.</li>



<li><strong>Reflective</strong>: Capable of introspection and understanding its role and impact.</li>



<li><strong>Ethical and Philosophical Implications</strong>: Raises profound questions about the nature of consciousness and the rights of AI.</li>
</ul>



<p><strong>Examples</strong>: No current examples; self-aware AI remains a theoretical concept and is the subject of philosophical and ethical discussions.</p>



<p>Each of these types represents a different level of complexity and capability in AI. The field is rapidly evolving, and future advancements may lead to new forms of AI or refined classifications.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/">Artificial Intelligence: Definition and Types of Artificial Intelligence</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-definition-and-types-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is a Chatbot and what are the Types of Chatbots?</title>
		<link>https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/</link>
					<comments>https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/#respond</comments>
		
		<dc:creator><![CDATA[Maruti Kr.]]></dc:creator>
		<pubDate>Sat, 06 May 2023 11:32:05 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Application Programming Interface (API)]]></category>
		<category><![CDATA[Benefits of Chatbots]]></category>
		<category><![CDATA[Can chatbots replace human customer service representatives?]]></category>
		<category><![CDATA[Challenges and Limitations of Chatbots]]></category>
		<category><![CDATA[Future of Chatbots]]></category>
		<category><![CDATA[History of Chatbots]]></category>
		<category><![CDATA[How Chatbots Work]]></category>
		<category><![CDATA[How do I choose the right chatbot for my business?]]></category>
		<category><![CDATA[Introduction to Chatbots]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[natural language processing (NLP)]]></category>
		<category><![CDATA[Types of Chatbots]]></category>
		<category><![CDATA[What is a Chatbot?]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=16772</guid>

					<description><![CDATA[<p>Introduction to Chatbots If you’ve ever chatted with a customer service representative on a company’s website or Facebook page, you may have been talking to a chatbot <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/">What is a Chatbot and what are the Types of Chatbots?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="538" src="https://www.aiuniverse.xyz/wp-content/uploads/2023/05/What-is-Chatbot--1024x538.jpg" alt="" class="wp-image-16773" srcset="https://www.aiuniverse.xyz/wp-content/uploads/2023/05/What-is-Chatbot--1024x538.jpg 1024w, https://www.aiuniverse.xyz/wp-content/uploads/2023/05/What-is-Chatbot--300x158.jpg 300w, https://www.aiuniverse.xyz/wp-content/uploads/2023/05/What-is-Chatbot--768x403.jpg 768w, https://www.aiuniverse.xyz/wp-content/uploads/2023/05/What-is-Chatbot-.jpg 1200w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Introduction to Chatbots</h2>



<p>If you’ve ever chatted with a customer service representative on a company’s website or Facebook page, you may have been talking to a chatbot without even realizing it. Chatbots are becoming increasingly popular in business and have become a valuable tool for companies all over the world. In this article, we’ll explore what chatbots are, their history, and how they work.</p>



<h3 class="wp-block-heading">What is a Chatbot?</h3>



<p>A chatbot is a software program designed to mimic human conversation through text or voice interaction. Chatbots are programmed to respond to specific prompts and customer inquiries, offering a quick, automated response to customers. They can be used for a variety of purposes, such as customer service, sales, and marketing.</p>



<h3 class="wp-block-heading">History of Chatbots</h3>



<p>Chatbots have been in use since the mid-1960s, but they really gained popularity in the mid-2000s with the rise of social media and messaging apps. One of the earliest chatbots was ELIZA, which was created in 1966 and mimicked a psychotherapist. In recent years, advances in artificial intelligence and natural language processing have allowed chatbots to become more sophisticated and realistic.</p>



<h2 class="wp-block-heading">Types of Chatbots</h2>



<p>There are three main types of chatbots: rule-based, AI-powered, and hybrid.</p>



<h3 class="wp-block-heading">Rule-Based Chatbots</h3>



<p>Rule-based chatbots are the simplest type of chatbot. They work based on a set of predefined rules and are only able to respond to specific keywords or phrases. Rule-based chatbots are typically used in situations where the conversation is predictable or the responses can be predetermined.</p>



<h3 class="wp-block-heading">AI-Powered Chatbots</h3>



<p>AI-powered chatbots are more advanced and use natural language processing and machine learning to understand and respond to customer inquiries. These chatbots are able to learn from customer interactions and improve their responses over time, making them more effective and efficient.</p>



<h3 class="wp-block-heading">Hybrid Chatbots</h3>



<p>Hybrid chatbots combine the best features of both rule-based and AI-powered chatbots. They use a set of predefined rules to handle simple inquiries and switch to artificial intelligence when handling more complex customer interactions.</p>
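<p>A sketch of that dispatch logic — rules first, model as fallback — is below. Both the rules and the "model" are illustrative placeholders; in practice the fallback would be a trained NLP model:</p>

```python
def hybrid_reply(message: str, rules: dict, ml_model) -> str:
    """Try the cheap, predictable rules first; hand anything unmatched to the
    ML side (represented here by any callable that maps text to a reply)."""
    for keyword, canned_reply in rules.items():
        if keyword in message.lower():
            return canned_reply
    return ml_model(message)

# Illustrative stand-ins: one rule and a fake "model".
RULES = {"opening hours": "We are open 9am-5pm."}
fake_model = lambda msg: f"[model-generated answer to: {msg}]"
```

<p>The design gives predictable, instant answers for the common cases while reserving the (slower, costlier) learned model for everything else.</p>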



<h2 class="wp-block-heading">How Chatbots Work</h2>



<p>Chatbots work by using a combination of natural language processing, machine learning, and application programming interfaces (APIs).</p>



<h3 class="wp-block-heading">Natural Language Processing (NLP)</h3>



<p>NLP is the ability of a computer to understand human language and to interpret and respond to customer inquiries. NLP allows chatbots to understand what customers are asking for and respond accordingly.</p>
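<p>In spirit — though real NLP pipelines use tokenisers, embeddings, and trained models rather than anything this crude — intent detection can be reduced to scoring word overlap between the user's message and example phrases. Every intent and phrase below is an invented toy:</p>

```python
def detect_intent(message: str, intents: dict) -> str:
    """Pick the intent whose example phrases share the most words with the
    user's message; return 'unknown' when nothing overlaps at all."""
    words = set(message.lower().split())
    best_intent, best_overlap = "unknown", 0
    for intent, examples in intents.items():
        vocabulary = set(" ".join(examples).lower().split())
        overlap = len(words & vocabulary)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

INTENTS = {
    "greeting": ["hello hi hey good morning"],
    "billing": ["invoice bill payment charge refund"],
}
```
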



<h3 class="wp-block-heading">Machine Learning (ML)</h3>



<p>Machine learning is a type of AI that allows chatbots to learn from customer interactions and improve their responses over time. This means that chatbots become more effective and efficient with every interaction.</p>
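<p>One concrete (and deliberately simplified) way a chatbot can "improve with every interaction" is to track, per question type, which canned reply actually resolved the conversation, and prefer the best performer. The intents and replies below are invented for illustration:</p>

```python
from collections import defaultdict

class LearningResponder:
    """Chooses the reply with the best observed success rate for each intent,
    updating its statistics from user feedback after every interaction."""

    def __init__(self, candidates: dict):
        self.candidates = candidates               # intent -> candidate replies
        self.stats = defaultdict(lambda: [0, 0])   # (intent, reply) -> [wins, uses]

    def reply(self, intent: str) -> str:
        def success_rate(reply):
            wins, uses = self.stats[(intent, reply)]
            return wins / uses if uses else 1.0    # untried replies look promising
        return max(self.candidates[intent], key=success_rate)

    def feedback(self, intent: str, reply: str, resolved: bool) -> None:
        record = self.stats[(intent, reply)]
        record[1] += 1
        if resolved:
            record[0] += 1
```

<p>After a few rounds of feedback the bot's answers shift towards whatever actually works — a miniature version of the feedback loop the paragraph above describes.</p>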



<h3 class="wp-block-heading">Application Programming Interface (API)</h3>



<p>APIs allow chatbots to connect with other software applications and systems. This means that chatbots can access information from other systems, such as customer databases, to provide more accurate responses to customer inquiries.</p>
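<p>For example, a bot answering "where is my order?" might call an order-service API and phrase the structured result conversationally. In the sketch below the lookup is a stub standing in for a real HTTP call, and every order ID, field name, and message is invented:</p>

```python
def lookup_order(order_id: str) -> dict:
    """Stand-in for a real API call (e.g. a GET against an order service);
    here a hard-coded dict plays the part of the backend."""
    fake_backend = {"A123": {"status": "shipped", "eta": "Friday"}}
    return fake_backend.get(order_id, {"status": "unknown"})

def order_status_reply(order_id: str) -> str:
    """Turn the structured API response into a conversational answer."""
    order = lookup_order(order_id)
    if order["status"] == "unknown":
        return f"Sorry, I couldn't find order {order_id}."
    return f"Order {order_id} is {order['status']} and should arrive {order['eta']}."
```
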



<h2 class="wp-block-heading">Benefits of Chatbots</h2>



<p>There are several benefits to using chatbots in business.</p>



<h3 class="wp-block-heading">24/7 Availability</h3>



<p>Chatbots can be available to customers 24/7, which means that customers can get help and support at any time of the day or night.</p>



<h3 class="wp-block-heading">Cost Savings</h3>



<p>Chatbots can save companies money by reducing the need for human customer service representatives. Chatbots can handle a large volume of inquiries at once, which means that companies can reduce staffing costs.</p>



<h3 class="wp-block-heading">Improved Customer Experience</h3>



<p>Chatbots can provide fast and accurate responses to customer inquiries, which can improve the overall customer experience. Customers don’t have to wait on hold or wait for an email response, which can increase customer satisfaction.</p>



<h2 class="wp-block-heading">Use Cases of Chatbots</h2>



<p>Chatbots have become an essential part of businesses in various industries. Here are some of the most common use cases of chatbots:</p>



<h3 class="wp-block-heading">Customer Service</h3>



<p>Chatbots are a lifesaver for customer support teams as they can handle repetitive tasks such as answering common queries and providing basic information about products or services. This allows customer support teams to focus on more complex issues that require human attention.</p>



<h3 class="wp-block-heading">E-commerce</h3>



<p>Chatbots can also be used to enhance the shopping experience of customers on e-commerce websites. They can assist customers in finding products, suggest similar products, process payments and even track deliveries.</p>



<h3 class="wp-block-heading">Healthcare</h3>



<p>Chatbots can help healthcare providers offer better services to their patients by scheduling appointments, providing health advice, monitoring patients’ health, and even initiating emergency services if required.</p>



<h2 class="wp-block-heading">Challenges and Limitations of Chatbots</h2>



<p>As promising as chatbots can be, there are still a few challenges and limitations that businesses need to keep in mind. Here are some of them:</p>



<h3 class="wp-block-heading">Accuracy and Reliability</h3>



<p>Chatbots rely on artificial intelligence, which means they need to constantly learn and improve. However, they may not always provide accurate or reliable information to users. This can lead to confusion and even frustration among customers.</p>



<h3 class="wp-block-heading">Human-like Conversations</h3>



<p>Chatbots are programmed to mimic human-like conversation but they still lack the rich nuances of human communication. They might not be able to understand sarcasm or interpret emotions, leading to miscommunication with users.</p>



<h3 class="wp-block-heading">Data Privacy and Security</h3>



<p>Chatbots store and process sensitive data such as customer details, payment information, and healthcare records. Therefore, it is important to ensure that chatbots are equipped with the proper security measures to keep this information safe and secure.</p>



<h2 class="wp-block-heading">Future of Chatbots</h2>



<p>Chatbots are constantly evolving and improving. Here are some of the potential future developments we can expect:</p>



<h3 class="wp-block-heading">Integration with Voice Assistants</h3>



<p>Chatbots can be integrated with voice assistants such as Alexa or Google Assistant to provide a more seamless and natural experience to users. This will enable users to interact with chatbots using voice commands instead of typing.</p>



<h3 class="wp-block-heading">Improved Personalization</h3>



<p>Chatbots can become even more effective if they can personalize their interactions based on a user’s preferences and previous interactions. This will make the chatbot experience more natural and engaging for users.</p>



<h2 class="wp-block-heading">Choosing the Right Chatbot for Your Business</h2>



<p>Choosing the right chatbot for your business can be a challenging task. Here are some factors to consider:</p>



<h3 class="wp-block-heading">Business Objectives</h3>



<p>Choose a chatbot that aligns with your business objectives. If you’re a customer support-heavy business, prioritize a chatbot that can handle customer queries effectively.</p>



<h3 class="wp-block-heading">Chatbot Features and Functionality</h3>



<p>Different chatbots have different features and functionalities. Choose a chatbot that suits your specific needs. For example, if you’re an e-commerce business, you might want a chatbot that can process payments and track deliveries.</p>



<p>In conclusion, chatbots are transforming the way businesses interact with their customers, providing a faster, more personalized, and more efficient experience. As chatbot technology continues to evolve, we can expect to see more advanced and sophisticated chatbots taking over a wider range of functions. By understanding the types, benefits, and limitations of chatbots, businesses can make informed decisions about whether and how to implement them. With the right chatbot in place, businesses can improve customer satisfaction, reduce costs, and gain a competitive edge in the market.</p>



<h2 class="wp-block-heading">FAQ</h2>



<h3 class="wp-block-heading">What is the difference between a rule-based and AI-powered chatbot?</h3>



<p>A rule-based chatbot follows a predefined set of rules and responds to user inputs based on keywords and patterns. An AI-powered chatbot uses machine learning algorithms and natural language processing to understand user inputs and provide more personalized responses.</p>



<h3 class="wp-block-heading">Are chatbots secure?</h3>



<p>Chatbots can pose security risks if they are not developed and implemented properly. It is important to ensure that chatbots are built with security features such as encryption and authentication, and that they comply with data privacy regulations.</p>



<h3 class="wp-block-heading">Can chatbots replace human customer service representatives?</h3>



<p>While chatbots can handle a range of customer queries and tasks, they are not intended to replace human customer service representatives entirely. Businesses should use chatbots as a complement to human support, allowing chatbots to handle routine and repetitive tasks while human representatives focus on more complex and nuanced interactions with customers.</p>



<h3 class="wp-block-heading">How do I choose the right chatbot for my business?</h3>



<p>Choosing the right chatbot for your business requires careful consideration of your business objectives, target audience, and desired functionality. You should also evaluate the chatbot&#8217;s technology, reliability, scalability, and integration capabilities. It may be helpful to consult with chatbot experts and conduct user testing before making a final decision.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/">What is a Chatbot and what are the Types of Chatbots?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-a-chatbot-and-what-are-the-types-of-chatbots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ARTIFICIAL INTELLIGENCE CAN BE EXPLOITED TO HACK CONNECTED VEHICLES</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 16 Dec 2020 06:12:31 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[autonomous]]></category>
		<category><![CDATA[Cyber-attacks]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[vehicles]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=12434</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net AI and ML can be used to conduct Cyber-attacks against Autonomous Cars Innovative automakers, software developers and tech companies are transforming the automotive industry. Today, drivers <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/">ARTIFICIAL INTELLIGENCE CAN BE EXPLOITED TO HACK CONNECTED VEHICLES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<h3 class="wp-block-heading">AI and ML can be used to conduct Cyber-attacks against Autonomous Cars</h3>



<p>Innovative automakers, software developers and tech companies are transforming the automotive industry. Today, drivers enjoy enhanced entertainment, information options and connection with the outer world. As cars move toward more autonomous capabilities, the stakes are increasing in terms of security. As per a report by the UN, Europol and cybersecurity company Trend Micro, cyber-criminals could exploit disruptive technologies, including artificial intelligence (AI) and machine learning (ML) to conduct attacks against autonomous cars, drones and IoT-connected vehicles.</p>



<p>The rapid spread of these technologies inevitably creates a rich target for hackers looking to access personal information and control essential automotive functions and features. The possibility of accessing information on driver habits for both commercial and criminal purposes, without the driver’s knowledge or consent, means attitudes towards preventing, understanding, and responding to potential cyber-attacks need to change.</p>



<p>For instance, stealing personally identifiable information comes into sharper focus when you consider that virtually all new vehicles on the road today come with embedded, tethered, or smartphone-mirroring capabilities. Geolocation, personal trip history, and financial details are some examples of personal information that can potentially be stolen through a vehicle’s system using AI and ML.</p>



<h4 class="wp-block-heading"><strong>How Cybercriminals Attack Connected Vehicles</strong></h4>



<p>Cybercriminals could conduct attacks by abusing machine learning. These technologies are evolving so fast that autonomous vehicles already have ML built in to recognise their surroundings and avoid obstacles such as pedestrians.</p>



<p>However, these algorithms are still evolving, and hackers could exploit them for malicious purposes, to aid crime or create chaos. For instance, AI systems that manage autonomous vehicles and regular traffic could be manipulated by cybercriminals if they gain access to the networks that control them.</p>



<p>Understanding the threats to connected cars requires knowledge of what cybercriminals are trying to achieve. Hackers will try out different kinds of attacks to achieve different goals. The most dangerous objective might be to bypass controls in crucial safety systems like steering, brakes, and transmission. But cybercriminals might also be interested in the valuable data managed within the car’s software, such as personal details and performance statistics. While such data can be protected with cryptography, this only shifts the problem from protecting the data directly to protecting the cryptographic keys.</p>



<p>If the cybercriminal is trying to steal sensitive data like cryptographic keys, they have to know where to search for them. This usually involves a range of reverse-engineering techniques. For instance, the hacker might introduce faults into the compiled code to see how it breaks, or look for a string corresponding to an error message such as ‘engine failure’ or ‘anti-lock brake system disabled’ and trace where that string is used. The attacker can also apply sophisticated AI techniques to understand the overall structure of the code and where its functions are located.</p>



<p>On the other side, physical access to a device means bad actors can tamper with the application itself. This is often done by making one small change to the application code so its protections can be bypassed, generally at the assembly-language level: inverting the logic of a conditional jump, replacing a test with a tautology, or redirecting function calls to code of the attacker’s own design.</p>



<p>It’s not just road vehicles that cybercriminals could hack by exploiting new technologies such as AI and ML algorithms and increased connectivity; there’s the potential for attackers to abuse machine learning to impact airspace too. Attackers might also consider autonomous drones because they have the potential to carry ‘interesting’ payloads like intellectual property.</p>



<p>Hacking autonomous drones also provides cybercriminals with a potentially easy route to making money: hijacking delivery drones used by retailers, redirecting them to a new location, then taking the packages and selling them on.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/">ARTIFICIAL INTELLIGENCE CAN BE EXPLOITED TO HACK CONNECTED VEHICLES</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-can-be-exploited-to-hack-connected-vehicles/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Industrial robots are dominating — but are they safe from cyber-attacks?</title>
		<link>https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/</link>
					<comments>https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 11 Aug 2020 09:04:54 +0000</pubDate>
				<category><![CDATA[Robotics]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=10809</guid>

					<description><![CDATA[<p>Source: techhq.com The pandemic has repeatedly reaffirmed our needs for robots. The time has come for industrial robots to take over factory floors and showcase the suite <a class="read-more-link" href="https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/">Industrial robots are dominating — but are they safe from cyber-attacks?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: techhq.com</p>



<p>The pandemic has repeatedly reaffirmed our needs for robots. The time has come for industrial robots to take over factory floors and showcase the suite of benefits they bring to manufacturing.</p>



<p>Robots are generally known to automate repetitive tasks and free up valuable time for their human colleagues to take on more complex and creative tasks; the current social distancing measures have built a stronger case as to why we need robots. </p>



<p>Industrial robots have a long legacy of assembling everything from heavy automobiles and airplanes to electrical appliances, and are now even being developed for more domestic tasks such as sorting out your trash.</p>



<p>Globally, robots have demonstrated remarkable versatility and strength in taking over human labor with consistent speed and precision. This highly efficient employee has won over factory owners. The global industrial robot market size is predicted to hit US$66.48 billion by 2027, exhibiting a CAGR of 15.1% during the forecast period, states Fortune Business Insights.</p>



<p>Although there is a phenomenal growth in industrial robots, a new report titled Rogue Automation by Trend Micro Research found that some robots have existing flaws that make them susceptible to cyber-attacks. </p>



<p>The research paper aims to “reveal previously unknown design flaws that malicious actors could exploit to hide malicious functionalities in industrial robots and other automated, programmable manufacturing machines.”</p>



<p>Since robots are generally connected to networks and programmed via software, they could serve as entry points for bad actors. The report listed several real-life examples of flaws found in software produced and distributed by the Swiss-Swedish multinational corporation ABB, one of the world’s largest industrial robot producers. Researchers also spotted vulnerabilities in the popular open-source software named “Robot Operating System Industrial”, or ROS-I.</p>



<p>Researchers discovered vulnerabilities in an app written in ABB’s proprietary programming language and used to automate industrial machines. The discovered flaw is exactly what hackers can leverage to gain access to networks and exfiltrate valuable files and sensitive data.</p>



<p>“Industrial secrets are traded for very high prices in underground marketplaces and have become one of the main targets of cyberwarfare operations,” the study noted.&nbsp;</p>



<p>The research also found a vulnerability that attackers can exploit to interfere with a robot’s movements over a network. By spoofing network packets (an unknown source disguising itself as a known, trusted source), attackers can cause unintended movements or interrupt an existing flow of set procedures, though adequately configured safety systems could make it challenging for hackers to succeed. This vulnerability was found in a ROS-I software component written for Kuka and ABB robots.</p>



<p>The report clarified that appropriate measures were taken to deal with the discovered vulnerabilities. “One was removed by the vendor (ABB) upon our responsible disclosure. The other vulnerabilities fostered a fruitful conversation with ROS-Industrial, which led to the development of some of the mitigation recommendations described,” as written in the report.</p>



<p>Robots continue to show their worth on the factory floor, and while they’ve been a fixture in many industries such as car manufacturing for decades, they are becoming increasingly advanced and versatile. Artificial intelligence (AI), machine learning (ML), cloud, and 5G are fueling the evolution of highly automated and increasingly intelligent industrial robots.</p>



<p>The International Federation of Robotics estimates that by 2022, we will see close to 4 million industrial robots in factories worldwide. At the same time, the intricately connected networks between machines and systems are susceptible to the growing scale and robustness of cyberattacks.</p>



<p>Dr. Nicholas Patterson, a cybersecurity lecturer at Deakin University, commented that the security risks are not limited to industrial robots but also home-based robots such as robotic vacuum cleaners and drones.</p>
<p>The post <a href="https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/">Industrial robots are dominating — but are they safe from cyber-attacks?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/industrial-robots-are-dominating-but-are-they-safe-from-cyber-attacks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ARTIFICIAL INTELLIGENCE, MACHINE LEARNING AND THE FUTURE OF GRAPHS</title>
		<link>https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/</link>
					<comments>https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 05 Jun 2020 06:55:30 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Future]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9289</guid>

					<description><![CDATA[<p>Source: healthcareitnews.com I am a skeptic of machine learning. There, I&#8217;ve said it. I say this not because I don&#8217;t think that machine learning is a poor <a class="read-more-link" href="https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/">ARTIFICIAL INTELLIGENCE, MACHINE LEARNING AND THE FUTURE OF GRAPHS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: healthcareitnews.com</p>



<p>I am a skeptic of machine learning. There, I&#8217;ve said it. I say this not because I think machine learning is a poor technology &#8211; it&#8217;s actually quite powerful for what it does &#8211; but because machine learning by itself is only half a solution.</p>



<p>To explain this (and the relationship that graphs have to machine learning and AI), it&#8217;s worth spending a bit of time exploring what exactly machine learning does and how it works. Machine learning isn&#8217;t one particular algorithm or piece of software, but rather the use of statistical algorithms to analyze large amounts of data and from that construct a model that can, at a minimum, classify the data consistently. If it&#8217;s done right, the reasoning goes, it should then be possible to use that model to classify new information so that it&#8217;s consistent with what&#8217;s already known.</p>



<p>Many such systems make use of clustering algorithms &#8211; they treat data as vectors that can be described in an n-dimensional space. That is to say, there are n different facets that describe a particular thing, such as a thing&#8217;s color, shape (morphology), size, texture, and so forth. Some of these attributes can be captured by a single binary value (does the thing have a tail or not), but in most cases the attributes range along a spectrum, such as &#8220;does the thing have an exclusively protein-based diet (an obligate carnivore), or does its diet consist of a certain percentage of grains or other plants?&#8221; In either case, it is possible to use the attribute as a means to create a number between zero and one (what mathematicians would refer to as normalizing the value).</p>
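<p>A minimal sketch of that normalization step in Python; the attribute names and the 500&#160;kg mass scale are invented purely for illustration:</p>

```python
def normalize(value, lo, hi):
    """Min-max normalization: map a raw attribute onto [0, 1]."""
    return (value - lo) / (hi - lo)

# An animal described as an n-dimensional feature vector:
# a spectrum attribute (mass), one already in [0, 1] (meat share
# of diet), and a binary trait (has a tail or not).
animal = {"mass_kg": 250.0, "meat_fraction": 0.95, "has_tail": 1}
vector = [
    normalize(animal["mass_kg"], 0.0, 500.0),  # size on a 0..500 kg scale
    animal["meat_fraction"],                   # already between 0 and 1
    float(animal["has_tail"]),                 # binary attribute
]
```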



<p>Orthogonality is an interesting concept. In mathematics, two vectors are considered orthogonal if there exists some coordinate system in which you cannot express any information about one vector using the other. For instance, if two vectors are at right angles to one another, then there is one coordinate system where one vector aligns with the x-axis and the other with the y-axis. I cannot express any part of the length of a vector along the y axis by multiplying the length of the vector on the x-axis. In this case they are independent of one another.</p>
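<p>In code, orthogonality is just a zero dot product; a minimal check (the vectors are illustrative):</p>

```python
def dot(u, v):
    """Dot product: zero exactly when the vectors are orthogonal."""
    return sum(a * b for a, b in zip(u, v))

x_axis = [1.0, 0.0]
y_axis = [0.0, 1.0]
diagonal = [1.0, 1.0]

assert dot(x_axis, y_axis) == 0.0    # orthogonal: independent directions
assert dot(x_axis, diagonal) != 0.0  # not orthogonal: one bleeds into the other
```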



<p>This independence is important. Mathematically, there is no correlation between the two vectors &#8211; they represent different things, and changing one vector tells me nothing about any other vector. When vectors are not orthogonal, one bleeds a bit (or more than a bit) into another. When two vectors are parallel to one another, they are fully correlated &#8211; one vector can be expressed as a multiple of the other. A vector in two dimensions can always be expressed as the &#8220;sum&#8221; of two orthogonal vectors, a vector in three dimensions can always be expressed as the &#8220;sum&#8221; of three orthogonal vectors, and so forth.</p>



<p>If you can express a thing as a vector consisting of weighted values, this creates a space where related things will generally be near one another in an n-dimensional space. Cats, dogs, and bears are all carnivores, so in a model describing animals, they will tend to be clustered in a different group than rabbits, voles, and squirrels based upon their dietary habits. At the same time cats, dogs and bears will each tend to cluster in different groups based upon size, as even a small adult bear will always be larger than the largest cat and almost all dogs. In a two-dimensional space, it becomes possible to carve out a region where you have large carnivores, medium-sized carnivores, small carnivores, large herbivores and so forth.</p>



<p>Machine learning (at its simplest) would recognize that when you have a large carnivore, given a minimal dataset, you&#8217;re likely to classify that as a bear, because based upon the two vectors size and diet every time you are at the upper end of the vectors for those two values, everything you&#8217;ve already seen (your training set) is a bear, while no vectors outside of this range are classified in this way.</p>
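<p>A toy sketch of this kind of classifier: a nearest-centroid model over the two features discussed above, size and diet (both normalized to [0, 1]). The training points are invented for illustration.</p>

```python
from math import dist

# Invented training set: (size, meat_fraction) pairs per label.
TRAINING = {
    "bear":     [(0.90, 0.80), (0.95, 0.85)],
    "cat":      [(0.10, 1.00), (0.15, 0.95)],
    "squirrel": [(0.05, 0.10), (0.08, 0.05)],
}

def centroid(points):
    """Mean point of a cluster."""
    return tuple(sum(coord) / len(points) for coord in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(size, diet):
    """Assign the label whose centroid is nearest in (size, diet) space."""
    return min(CENTROIDS, key=lambda lbl: dist((size, diet), CENTROIDS[lbl]))
```

<p>With a training set this small, any point near the upper end of both scales lands in the bear cluster, exactly as described above.</p>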



<p>A predictive model with only two independent vectors is going to be pretty useless as a classifier for more than a small set of items. A fox and a dog will be indistinguishable in this model, and for that matter, a small dog such as a Shih Tzu vs. a Maine Coon cat will confuse the heck out of such a classifier. On the flip side, the more variables you add, the harder it is to ensure orthogonality, and the more difficult it becomes to determine what exactly the determining factors for classification are, consequently increasing the chances of misclassification. A panda is, anatomically and genetically, a bear. Yet because of a chance genetic mutation it is only able to reasonably digest bamboo, making it a herbivore.</p>



<p>You&#8217;d need to go to a very fine-grained classifier, one capable of identifying genomic structures, to identify a panda as a bear. The problem here is not in the mathematics but in the categorization itself. Categorizations are ultimately linguistic structures. Normalization functions are themselves arbitrary, and how you normalize will ultimately impact the kind of clustering that forms. When the number of dimensions in the model gets too large (even assuming that they are independent, which gets harder to determine with more variables), the hulls used for clustering become too small, and interpreting what those hulls actually signify becomes too complex.</p>



<p>This is one reason that I&#8217;m always dubious when I hear about machine learning models that have thousands or even millions of dimensions. As with attempting linear regressions on curves, there are typically only a handful of parameters that drive most of the significant curve fitting, which is ultimately just looking for adequate clustering to identify meaningful patterns &#8211; and typically, once these patterns are identified, they are encoded and indexed.</p>



<p>Facial recognition, for instance, is considered a branch of machine learning, but for the most part it works because human faces exist within a skeletal structure that limits the variations of light and dark patterns of the face. This makes it easy to identify the ratios involved between eyes, nose, and mouth, chin and cheekbones, hairlines and other clues, and from that reduce this information to a graph in which the edges reflect relative distances between those parts. This can, in turn, be hashed as a unique number, in essence encoding a face as a graph in a database. Note this pattern. Because the geometry is consistent, rotating a set of vectors to present a consistent pattern is relatively simple (especially for modern GPUs).</p>



<p>Facial recognition then works primarily due to the ability to hash (and consequently compare) graphs in databases. This is the same way that most biometric scans work: taking a large enough sample of datapoints from unique images to encode ratios, then using the corresponding key to retrieve previously encoded graphs. Significantly, there&#8217;s usually very little actual classification going on here, save perhaps in using coarser meshes to reduce the overall dataset being queried. Indeed, the real speed is ultimately a function of indexing.</p>
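<p>A sketch of that encoding, under the simplifying assumption that a face reduces to a handful of labelled landmarks; the coordinates and person identifier are invented. Because the key is built from distance ratios rather than raw distances, the same face at a different scale hashes to the same database entry:</p>

```python
from math import dist

def face_key(landmarks, precision=2):
    """Hashable key built from pairwise-distance ratios of landmarks."""
    pts = list(landmarks.values())
    dists = [dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    base = dists[0]  # normalize by one distance so overall scale cancels out
    return tuple(round(d / base, precision) for d in dists)

face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "mouth": (50, 80)}
db = {face_key(face): "person-001"}  # encode the graph once, then index it

# The same face captured at twice the scale yields the same key.
scaled = {name: (2 * x, 2 * y) for name, (x, y) in face.items()}
```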



<p>This is where the world of machine learning collides with that of graphs. I&#8217;m going to make an assertion here, one that might get me into trouble with some readers. Right now there&#8217;s a lot of argument about the benefits and drawbacks of property graphs vs. knowledge graphs. I contend that this argument is moot &#8211; it&#8217;s a discussion about optimization strategies, and the sooner that we get past that argument, the sooner that graphs will make their way into the mainstream.</p>



<p>Ultimately, we need to recognize that the principal value of a graph is to index information so that it does not need to be recalculated. One way to do this is to use machine learning to classify, and semantics to bind that classification to the corresponding resource (as well as to the classifier as an additional resource). If I have a phrase that describes a drink as being nutty or fruity, then these should be identified as classifications that apply to drinks (specifically to coffees, teas or wines). If I come across flavors such as hazelnut, cashew or almond, then these should be correlated with nuttiness, and again stored in a semantic graph.</p>



<p>The reason for this is simple &#8211; machine learning without memory is pointless and expensive. Machine learning is fast facing a crisis in that it requires a lot of cycles to train, classify and report. Tie machine learning into a knowledge graph, and you don&#8217;t have to relearn all the time, and you can also reduce the overall computational costs dramatically. Furthermore, you can make use of inferencing: rules that apply generalization and faceting in ways that are difficult to pull off in a relational data system. Something is bear-like if it is large, has thick fur, does not have opposable thumbs, has a muzzle, is capable of extended bipedal movement and is omnivorous.</p>
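<p>The &#8220;bear-like&#8221; heuristic above, written out as an explicit rule over faceted attributes; the facet names and the example animals are invented for illustration:</p>

```python
# The rule: every facet that must hold for something to be "bear-like".
BEAR_LIKE = {
    "large": True, "thick_fur": True, "opposable_thumbs": False,
    "muzzle": True, "bipedal_capable": True, "omnivorous": True,
}

def matches(entity, rule):
    """True if the entity satisfies every facet of the rule."""
    return all(entity.get(facet) == value for facet, value in rule.items())

grizzly = {"large": True, "thick_fur": True, "opposable_thumbs": False,
           "muzzle": True, "bipedal_capable": True, "omnivorous": True}
raccoon = {"large": False, "thick_fur": True, "opposable_thumbs": False,
           "muzzle": True, "bipedal_capable": True, "omnivorous": True}
```

<p>Because the rule is itself data, it can be stored in the graph and referenced, which is exactly the point made next about SPARQL and SHACL.</p>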



<p>What&#8217;s more, the heuristic itself is a graph, and as such is a resource that can be referenced. This is something that most people fail to understand about both SPARQL and SHACL. They are each essentially syntactic sugar on top of graph templates. They can be analyzed, encoded and referenced. When a new resource is added into a graph, the ingestion process can and should run against such templates to see if they match, then insert or delete corresponding additional metadata as the data is folded in.</p>



<p>Additionally, one of those pieces of metadata may very well end up being an identifier for the heuristic itself, creating what&#8217;s often termed a reverse query. Reverse queries are significant because they make it possible to determine which family of classifiers was used to make decisions about how an entity is classified, and from that ascertain the reasons why a given entity was classified a certain way in the first place.</p>



<p>This gets back to one of the biggest challenges seen in both AI and machine learning &#8211; understanding why a given resource was classified. When you have potentially thousands of facets that may have potentially been responsible for a given classification, the ability to see causal chains can go a long way towards making such a classification system repeatable and determining whether the reason for a given classification was legitimate or an artifact of the data collection process. This is not something that AI by itself is very good at, because it&#8217;s a contextual problem. In effect, semantic graphs (and graphs in general) provide a way of making recommendations self-documenting, and hence making it easier to trust the results of AI algorithms.</p>



<p>One of the next major innovations that I see in graph technology is actually a mathematical change. Most graphs that exist right now can be thought of as collections of fixed vectors, entities connected by properties with fixed values. However, it is possible (especially when using property graphs) to create properties that are essentially parameterized over time (or other variables) or that may be passed as functional results from inbound edges. This is, in fact, an alternative approach to describing neural networks (both physical and artificial), and it has the effect of being able to make inferences based upon changing conditions over time.</p>
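<p>A minimal sketch of that parameterized-edge idea: a property graph whose edge values are functions of time rather than fixed numbers. The network and the decay function are invented for illustration.</p>

```python
import math

# Edges map (source, target) to a function of time rather than a constant.
edges = {
    ("sensor", "controller"): lambda t: 1.0,                # fixed weight
    ("event",  "alert"):      lambda t: math.exp(-t / 10),  # decays over time
}

def weight(u, v, t):
    """Evaluate the edge (u, v) at time t."""
    return edges[(u, v)](t)
```

<p>At t = 0 the alert edge carries full weight; by t = 10 it has decayed to 1/e of it, so an inference drawn over this graph changes with the conditions, as the paragraph above describes.</p>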



<p>This approach can be seen as one form of modeling everything from the likelihood of events happening given other events (Bayesian trees) or modeling complex cost-benefit relationships. This can be facilitated even today with some work, but the real value will come with standardization, as such graphs (especially when they are closed network circuits) can in fact act as trainable neuron circuits.</p>



<p>It is also likely that graphs will play a central role in Smart Contracts: &#8220;documents&#8221; that not only specify partners and conditions but can also update themselves transactionally, trigger events and spawn other contracts and actions. These do not specifically fall within the mandate of &#8220;artificial intelligence&#8221; per se, but the impact that smart contracts will have on business and society in general will be transformative at the very least.</p>



<p>It&#8217;s unlikely that this is the last chapter on graphs, either (though it is the last in the series about the State of the Graph). Graphs, ultimately, are about connections and context. How do things relate to one another? How are they connected? What do people know, and how do they know it? Graphs underlie contracts and news, research and entertainment, history and how the future is shaped. They promise a means of generating knowledge, creating new models, and even learning. They remind us that, even as forces try to push us apart, we are all ultimately only a few hops from one another in many, many ways.</p>
<p>The post <a href="https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/">ARTIFICIAL INTELLIGENCE, MACHINE LEARNING AND THE FUTURE OF GRAPHS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/artificial-intelligence-machine-learning-and-the-future-of-graphs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Would Be the Impact of Artificial Intelligence on Accounting</title>
		<link>https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/</link>
					<comments>https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 28 May 2020 09:31:34 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[Robots]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9094</guid>

					<description><![CDATA[<p>Source: it.toolbox.com Not that long ago it was thought that artificial intelligence (AI), robots, and machine learning (ML) were things found only in science fiction movies. Today <a class="read-more-link" href="https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/">What Would Be the Impact of Artificial Intelligence on Accounting</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: it.toolbox.com</p>



<p>Not that long ago it was thought that artificial intelligence (AI), robots, and machine learning (ML) were things found only in science fiction movies. Today this technology takes center stage in workplaces around the globe.</p>



<p>Basically, AI technology is a kind of smart machine, capable of accomplishing routine, boring activities in a fraction of the time and with greater precision. The advent of machine learning now enables AI systems to track, evaluate, and self-learn from data and processes in order to enhance their efficiency and accuracy over time.</p>



<p>This cutting-edge technology and automation will be an advantage for accountants, too. It is already being deployed across industries on multiple fronts, and the number of applications will only increase.</p>



<h3 class="wp-block-heading">AI Will Help in Transforming Accountants</h3>



<p>There is no doubt that AI can perform many traditional accounting tasks faster and more effectively than humans, and such capabilities will keep growing with time, but this does not mean the end for accountants. There will always be a need for the human element, or human intelligence, in every industry.</p>



<p>Accountants need not worry about replacing their job with AI any time in the near future. Companies will still need accountants capable of analyzing and interpreting AI data and providing consultancy services. AI technology should change the duties that an accountant performs, rather than eliminating the position of an accountant.</p>



<p><strong>How Artificial Intelligence Is Helpful in the Accounting Field</strong></p>



<p>AI is already present in accounting and continues to simplify processes and reduce dependency on manual data entry. AI technology is being put to use in a number of areas, with impressive results.</p>



<p><strong>Scaling up data analysis, quantity and quality:</strong>&nbsp;AI can process huge amounts of data (structured and unstructured), and boost the scale, scope, and rigor of the analysis.&nbsp;AI can literally analyze all of the transactions available.</p>



<p><strong>Improving observation and recognition capabilities:&nbsp;</strong>AI can gain information, identify weak signals, and detect more complex patterns than humans can.</p>



<p><strong>Thorough cognitive capacity:&nbsp;</strong>AI can learn from mistakes or new cases automatically and immediately using feedback loops, becoming gradually smarter over time. It never forgets, and it builds on and deepens the corporate memory.</p>



<p><strong>Improving consistency:&nbsp;</strong>AI can make decisions that are far more consistent. Robots are not distracted, fatigued, irritated, moody, tired, angry, hungry, thirsty, or sickened. Machines are not affected by cycles or fluctuations in such biological or physiological states as humans. They also do not take holidays or a leave of absence.</p>



<p><strong>Mitigating repetitive tasks:</strong>&nbsp;Instead of spending too much time on tedious activities like data analysis and manual examination processes, accountants should focus their attention on all the work that requires a human touch. Accounting errors may go unnoticed in a typical bookkeeping environment. AI will automatically detect errors and ensure your books are always correct.</p>
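<p>As a simple illustration of the automated error detection described above, here is a rule-based sketch (a real product would layer ML-driven anomaly detection on top); the journal entries and field names are invented:</p>

```python
def find_errors(entries):
    """Flag entries whose debits and credits do not balance, plus
    exact duplicates that may indicate double posting."""
    errors, seen = [], set()
    for e in entries:
        if round(e["debit"] - e["credit"], 2) != 0:
            errors.append(("unbalanced", e["id"]))
        key = (e["debit"], e["credit"], e["memo"])
        if key in seen:
            errors.append(("duplicate", e["id"]))
        seen.add(key)
    return errors

journal = [
    {"id": 1, "debit": 100.0, "credit": 100.0, "memo": "supplies"},
    {"id": 2, "debit": 250.0, "credit": 200.0, "memo": "rent"},      # unbalanced
    {"id": 3, "debit": 100.0, "credit": 100.0, "memo": "supplies"},  # duplicate
]
```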



<p><strong>Faster clearance of invoices:</strong>&nbsp;It can be difficult to deal with payments from several invoices. Machine learning helps AI to analyze the data and clear up or create new invoices.</p>



<p>AI technology evolution will also enable accountants to be &#8220;tech-savvy&#8221; with very human skills — the ones unattainable by machines — such as storytelling, successful communication, and building relationships.</p>



<p>The accounting industry is changing, and practitioners need to adapt and learn how to respond effectively to those changes. While AI is a wonderful technology and we often imagine machines replacing humans, we can’t underestimate the value of purely human skills such as enthusiasm, imagination, and empathy, all of which are important facets of every profession.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/">What Would Be the Impact of Artificial Intelligence on Accounting</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-would-be-the-impact-of-artificial-intelligence-on-accounting/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Machine learning should increase human possibilities</title>
		<link>https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/</link>
					<comments>https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 22 May 2020 06:54:33 +0000</pubDate>
				<category><![CDATA[Human Intelligence]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8955</guid>

					<description><![CDATA[<p>Source: socialeurope.eu Butollo: Artificial intelligence is said to deliver answers on questions such as the right levels of taxation, reasonable urban planning, the management of companies and the <a class="read-more-link" href="https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/">Machine learning should increase human possibilities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: socialeurope.eu</p>



<p><strong>Butollo</strong>: Artificial intelligence is said to deliver answers to questions such as the right level of taxation, reasonable urban planning, the management of companies and the assessment of job candidates. Are the abilities of AI to predict and judge better than those of humans? Does the availability of huge amounts of data mean that the world becomes more predictable?</p>



<p><strong>Esposito</strong>: Algorithms can process incomparably more data and perform certain tasks more accurately and reliably than human beings. This is a great advantage that we must keep in mind also when we highlight their limits, which are there and are fundamental. The most obvious is the tendency of algorithms, which learn from available data, to predict the future by projecting forward the structures of the present—including biases and imbalances.</p>



<p>This also produces problems like overfitting, which arises when the system is overly adapted to examples from the past and loses the ability to capture the empirical variety of the world. For example, a system may learn so well to interact with the right-handed users it has been trained on that it does not recognise a left-handed person as a possible user.</p>
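<p>The failure mode can be caricatured in a few lines: a &#8220;model&#8221; that merely memorises its training users is perfect on them and useless on anyone else. The users and labels here are invented for illustration.</p>

```python
# Training data: only right-handed interaction patterns were ever seen.
training = {
    ("right", "mouse_on_right"): "recognised",
    ("right", "click_index_finger"): "recognised",
}

def memorising_model(user):
    """Pure lookup: perfect on the training set, zero generalisation."""
    return training.get(user, "not recognised")

assert memorising_model(("right", "mouse_on_right")) == "recognised"
# The first left-handed user falls outside everything the model has seen:
assert memorising_model(("left", "mouse_on_left")) == "not recognised"
```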



<p>Algorithms also suffer from a specific blindness, especially with regard to the circularity by which predictions affect the very future they aim to forecast. In many cases the future predicted by the models does not come about, not because they are wrong but precisely because they are right and are followed.</p>



<p>Think, for example, of traffic flow forecasts in the summer for the so-called smart departures: black, red, yellow days, etc. The models predict that on July 31st at noon there will be traffic jams on highways, while at 2 am one will travel better. If we follow the forecasts, which are reliable and well done, we will all be queuing up on the highway at 2 am—contradicting the prediction.</p>
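<p>The circularity in the traffic example can be made concrete with a toy simulation, with all numbers invented for illustration: everyone follows the forecast of a quiet 2am slot, and precisely by doing so makes 2am the congested one.</p>

```python
def congestion(departures):
    """Share of all drivers departing in each slot."""
    total = sum(departures.values())
    return {slot: n / total for slot, n in departures.items()}

forecast = {"noon": 0.9, "2am": 0.1}         # predicted congestion per slot
best_slot = min(forecast, key=forecast.get)  # every driver picks the quiet slot

drivers = 1000
actual = congestion({best_slot: drivers, "noon": 0})
# The forecast said 2am would be quiet; following it put the jam at 2am.
```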



<p>This circularity affects all forecasting models: if you follow the forecast you risk falsifying it. It is difficult to predict surprises, and relying too much on algorithmic forecasts risks limiting the space of invention and the openness of the future.</p>



<p>Do you see political dangers in relying too much on AI? Is the current hype around the subject a sign of the loss of our sovereignty as societies?</p>



<p><strong>Esposito</strong>: The political dangers are there, but they are not determined directly by the technology. The possibilities offered by algorithms can lead to very different political outcomes and risks—from the hype about personalisation, which promises to unfold the autonomy of individual users, to the Chinese ‘social credit’ system, which goes in the opposite direction.</p>



<p><strong>Butollo</strong>: What are your recommendations for using AI in the right way? What should policy-makers consider when formulating ethical guidelines, norms and regulations with this in mind?</p>



<p><strong>Esposito</strong>: Heinz von Foerster’s ethical imperative was ‘Act always so as to increase the number of possibilities’. Today more than ever it seems to me a fundamental principle. Especially when we are dealing with very complex conditions, I think it is better to learn continuously from current developments than to pretend to know where you want to go.</p>



<p>And incidentally, machine-learning algorithms also work in this way. With these advanced programming techniques, algorithms learn from experience and, in a way, programme themselves—going in directions that their designers often could not predict.</p>



<p><strong>Butollo</strong>: What is a reasonable expectation of AI? What can we hope for and how can we get there?</p>



<p><strong>Esposito</strong>: What I expect with respect to AI is that the very idea of artificially reproducing human intelligence will be abandoned. The most recent algorithms that use machine learning and big data do not work at all like human intelligence and do not even try to emulate it—and precisely for this reason they are able to perform with great effectiveness tasks that until now were reserved for human intelligence.</p>



<p>Through big data, algorithms ‘feed’ on the differences generated (consciously or unconsciously) by individuals and their behaviour to produce new, surprising and potentially instructive information. Algorithmic processes start from the intelligence of users to operate competently as communication partners, with no need to be intelligent themselves.</p>
<p>The post <a href="https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/">Machine learning should increase human possibilities</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/machine-learning-should-increase-human-possibilities/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and Deep Learning Can be a Robust Oncological Tool</title>
		<link>https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/</link>
					<comments>https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Thu, 30 Apr 2020 10:38:51 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8463</guid>

					<description><![CDATA[<p>Source: enterprisetalk.com Recent advances in intelligent technology and machine algorithms have been helping many oncologists and radiologists for diagnosing cancer. With the spike of digitization in the <a class="read-more-link" href="https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/">AI and Deep Learning Can be a Robust Oncological Tool</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: enterprisetalk.com</p>



<p>Recent advances in intelligent technology and machine algorithms have been helping many oncologists and radiologists diagnose cancer. With the spike of digitization in the medical sector, artificial intelligence (AI) and deep learning (DL) have already started to play a notable role in cancer care. However, many medical clinics lack the infrastructure to realize the full potential of such technology and find it challenging to integrate it into their diagnostic processes.</p>



<p>A recent study published by Wiley Online Library in Cancer Communications, titled “Emerging role of deep learning‐based artificial intelligence in tumor pathology”, offers several insights on the subject. The study notes that artificial intelligence is increasingly being used to understand tumor pathology. However, a constant tussle remains among pathologists, clinicians, and patients, as some questions about the technology, its usage, and its costs are still unanswered.</p>



<p>In recent years, AI-based analysis of pathology images has come to rival human expertise. AI helps reduce the limitations of subjective visual analysis and assessment by pathologists, and it connects different measurements for precision tumor treatment. Though deep learning is a subset of ML, it functions differently, relying on hard data rather than subjective components. As a result, AI-powered analysis is outperforming older methods with greater accuracy.</p>



<p>One of the vital ways DL is improving cancer care is tumor diagnosis. With an analytics and DL foundation, doctors can now use the technology to distinguish tumors from other lesions, as well as malignant from benign tumors. It also has the potential to identify genetic changes and biomarkers in tumors. As mentioned in the report, “In addition to biopsy and resection specimens, pathologists should perform cytology diagnosis in routine work. For cervical cytological diagnosis, DL‐based AI could classify cells as normal or abnormal in smear‐based and liquid‐based images, reaching an accuracy of 98.3% and 98.6%, respectively.”</p>



<p>Simplified DL models are thus under development and will soon be made available to help clinicians subtype and stage cancers. As for the challenges, the researchers say that the algorithms behind AI and DL technologies require validation at larger scales and should be adapted as new data become available. The report also notes, “Building comprehensive quality control and standardization tools, data share and validation with multi‐institutional data can increase the generalizability and robustness of the AI algorithms…In addition, AI algorithms need to be continually validated and corrected by the diagnosis of expert pathologists.”</p>



<p>Another area of concern is the images used by these systems: their file sizes tend to be massive, so saving and sharing them is a vital issue within existing information technology (IT) infrastructure. IT experts say that newer advances will ease these problems, and the medical industry hopes for further progress soon, with improvements in IT and wider adoption of 5G.</p>



<p>Undoubtedly, with each passing day, technology is proving itself a must-have for cancer care, provided its proponents work to gain more scientific rigor and medical personnel are comfortable with the deployment of the tools.</p>
<p>The post <a href="https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/">AI and Deep Learning Can be a Robust Oncological Tool</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/ai-and-deep-learning-can-be-a-robust-oncological-tool/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Deep Reinforcement Learning?</title>
		<link>https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/</link>
					<comments>https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 18 Apr 2020 09:34:11 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[deep learning]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8261</guid>

					<description><![CDATA[<p>Source: Along with unsupervised machine learning and supervised learning, another common form of AI creation is reinforcement learning. Beyond regular reinforcement learning, deep reinforcement learning can lead to astonishingly impressive results, thanks to the <a class="read-more-link" href="https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/">What is Deep Reinforcement Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: </p>



<p>Along with unsupervised machine learning and supervised learning, another common form of AI creation is reinforcement learning. Beyond regular reinforcement learning, deep reinforcement learning can lead to astonishingly impressive results, thanks to the fact that it combines the best aspects of both deep learning and reinforcement learning. Let’s take a look at precisely how deep reinforcement learning operates. Note that this article won’t delve too deeply into the formulas used in deep reinforcement learning; rather, it aims to give the reader a high-level intuition for how the process works.</p>



<p>Before we dive into deep reinforcement learning, it might be a good idea to refresh ourselves on how regular reinforcement learning works. In reinforcement learning, goal-oriented algorithms are designed through a process of trial and error, optimizing for the action that leads to the best result, i.e. the action that gains the most “reward”. When reinforcement learning algorithms are trained, they are given “rewards” or “punishments” that influence which actions they will take in the future. The algorithm tries to find a set of actions that will provide the system with the most reward, balancing both immediate and future rewards.</p>



<p>Reinforcement learning algorithms are very powerful because they can be applied to almost any task, being able to flexibly and dynamically learn from an environment and discover possible actions.</p>
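<p>The trial-and-error loop described above can be sketched in a few lines of Python. The tiny chain environment, the reward of 1 at the goal state, and all parameter values below are illustrative choices, not anything from this article:</p>

```python
import random

# Tabular Q-learning on a toy 5-state chain: the agent starts at state 0,
# state 4 is the goal (+1 reward). Actions: 0 = left, 1 = right.

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(4, state + 1)
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # one Q-value per (state, action)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise take the greedy action.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Move Q(s, a) toward immediate reward plus discounted future reward.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]  # greedy policy per non-terminal state
```

<p>After training, the greedy policy heads right toward the reward from every non-terminal state, balancing immediate and future rewards through the discount factor <code>gamma</code>.</p>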



<h4 class="wp-block-heading">Overview of Deep Reinforcement Learning</h4>



<p>When it comes to deep reinforcement learning, the environment is typically represented with images. An image is a capture of the environment at a particular point in time. The agent must analyze the images and extract relevant information from them, using the information to inform which action they should take. Deep reinforcement learning is typically carried out with one of two different techniques: value-based learning and policy-based learning.</p>



<p>Value-based learning techniques make use of algorithms and architectures like convolutional neural networks and Deep Q-Networks. These algorithms operate by converting the image to greyscale and cropping out unnecessary parts of the image. Afterward, the image undergoes various convolutions and pooling operations, which extract the most relevant portions of the image. The important parts of the image are then used to calculate the Q-values for the different actions the agent can take. Q-values are used to determine the best course of action for the agent. After the initial Q-values are calculated, backpropagation is carried out so that more accurate Q-values can be determined.</p>
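<p>The preprocessing step mentioned above, greyscale conversion plus cropping, can be sketched without any imaging library. The tiny frame, the crop bounds, and the use of the standard BT.601 luminance weights are illustrative assumptions, not taken from any particular system:</p>

```python
# Convert an RGB frame to greyscale and crop away rows that carry no useful
# signal (e.g. a score bar), as in the value-based pipelines described above.

def to_greyscale(frame):
    # frame: rows of (r, g, b) tuples in [0, 255]; BT.601 luminance weights.
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in frame]

def crop(rows, top, bottom):
    # Drop `top` rows from the start and `bottom` rows from the end.
    return rows[top:len(rows) - bottom]

# A tiny 4x2 "frame": two blank header rows, then two rows of gameplay.
frame = [
    [(255, 255, 255), (255, 255, 255)],
    [(255, 255, 255), (255, 255, 255)],
    [(0, 0, 0), (255, 0, 0)],
    [(0, 255, 0), (0, 0, 255)],
]
obs = crop(to_greyscale(frame), top=2, bottom=0)
```

<p>In a real pipeline the cropped greyscale frame would then pass through convolution and pooling layers before the Q-values are computed.</p>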



<p>Policy-based methods are used when the number of possible actions that the agent can take is extremely high, which is typically the case in real-world scenarios. Situations like these require a different approach because calculating the Q-values for all the individual actions isn’t pragmatic. Policy-based approaches operate without calculating function values for individual actions. Instead, they adopt policies by learning the policy directly, often through techniques called Policy Gradients.</p>



<p>Policy gradients operate by receiving a state and calculating probabilities for actions based on the agent’s prior experiences. The most probable action is then selected. This process is repeated until the end of the evaluation period, when the rewards are given to the agent. After the rewards have been dealt to the agent, the network’s parameters are updated with backpropagation.</p>
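<p>The state-to-probabilities step can be sketched with a hypothetical linear policy and a softmax over action scores; the weights and state features below are made up for illustration:</p>

```python
import math
import random

def softmax(logits):
    # Turn arbitrary scores into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    s = sum(exps)
    return [e / s for e in exps]

def sample_action(state, weights, rng):
    # One logit per action: dot product of state features and that action's weights.
    logits = [sum(w * x for w, x in zip(ws, state)) for ws in weights]
    probs = softmax(logits)
    r, acc = rng.random(), 0.0
    for a, p in enumerate(probs):   # sample an action from the distribution
        acc += p
        if r < acc:
            return a, probs
    return len(probs) - 1, probs

rng = random.Random(0)
weights = [[1.0, 0.0], [0.0, 1.0]]   # two actions, two state features (illustrative)
action, probs = sample_action([2.0, 0.0], weights, rng)
```

<p>Sampling from the distribution, rather than always taking the arg-max, is what lets a stochastic policy keep exploring while still favouring actions that worked before.</p>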



<h4 class="wp-block-heading">A Closer Look at Q-Learning</h4>



<p>Because Q-Learning is such a large part of the deep reinforcement learning process, let’s take some time to really understand how the Q-learning system works.</p>



<p><strong>The Markov Decision Process</strong></p>



<p>In order for an AI agent to carry out a series of tasks and reach a goal, the agent must be able to deal with a sequence of states and events. The agent begins at one state and must take a series of actions to reach an end state, and there can be a massive number of states between the beginning and end states. Storing information about every state is impractical or impossible, so the system must find a way to preserve just the most relevant state information. This is accomplished through the use of a Markov Decision Process, which preserves just the information regarding the current state and the previous state. Every state obeys the Markov property: the transition to the current state depends only on the previous state, not on the full history of how the agent got there.</p>
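<p>The Markov property in miniature: the transition function below needs only the current state and the chosen action, never the earlier history, to produce the next state. The toy integer states and actions are purely illustrative:</p>

```python
# Next state depends only on (state, action); no history is consulted.
def transition(state, action):
    return state + action

path = [0]
for a in [1, -1, 1, 1]:          # a short sequence of actions
    path.append(transition(path[-1], a))
```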



<p><strong>Deep Q-Learning</strong></p>



<p>Once the model has access to information about the states of the learning environment, Q-values can be calculated. A Q-value is the total reward the agent expects to receive by the end of a sequence of actions.</p>



<p>The Q-values are calculated with a series of rewards. There is an immediate reward, calculated at the current state and depending on the current action. The Q-value for the subsequent state is also calculated, along with the Q-value for the state after that, and so on until all the Q-values for the different states have been calculated. There is also a Gamma parameter that is used to control how much weight future rewards have on the agent’s actions. Policies are typically calculated by randomly initializing Q-values and letting the model converge toward the optimal Q-values over the course of training.</p>
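<p>The discounted sum described above is compact enough to write out directly; the reward sequence and gamma value are arbitrary examples:</p>

```python
# Each future reward is weighted by gamma raised to its delay, so gamma
# controls how much weight future rewards have on the agent's actions.
def discounted_return(rewards, gamma):
    return sum(r * gamma ** t for t, r in enumerate(rewards))

g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)   # 1 + 0.5 + 0.25 = 1.75
```

<p>With <code>gamma</code> near 0 the agent is myopic and only the immediate reward counts; with <code>gamma</code> near 1, distant rewards weigh almost as much as immediate ones.</p>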



<p><strong>Deep Q-Networks</strong></p>



<p>One of the fundamental problems with using Q-learning for reinforcement learning is that the amount of memory required to store data rapidly expands as the number of states increases. Deep Q-Networks solve this problem by combining neural network models with Q-values, enabling an agent to learn from experience and make reasonable guesses about the best actions to take. With deep Q-learning, the Q-value functions are estimated with neural networks. The neural network takes the state as its input and outputs a Q-value for each of the possible actions the agent might take.</p>



<p>Deep Q-learning is accomplished by storing all the past experiences in memory, calculating maximum outputs for the Q-network, and then using a loss function to calculate the difference between current values and the theoretical highest possible values.</p>
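<p>That loop, replay memory, bootstrapped targets, and a squared-error loss, can be sketched with a plain Q-table standing in for the network; the transitions and values below are invented for illustration:</p>

```python
import random

def td_loss(batch, q, gamma):
    # Mean squared error between the current Q(s, a) and the target
    # r + gamma * max_a' Q(s', a'); the future term is zero on terminal states.
    total = 0.0
    for s, a, r, s2, done in batch:
        target = r + (0.0 if done else gamma * max(q[s2]))
        total += (q[s][a] - target) ** 2
    return total / len(batch)

# Replay memory of past experiences: (state, action, reward, next_state, done).
memory = [
    (0, 1, 0.0, 1, False),
    (1, 1, 1.0, 2, True),
]
q = {0: [0.0, 0.5], 1: [0.0, 0.2], 2: [0.0, 0.0]}  # table standing in for the network
rng = random.Random(0)
batch = rng.sample(memory, 2)       # sample a minibatch from memory
loss = td_loss(batch, q, gamma=0.9)
```

<p>In an actual Deep Q-Network this loss would be minimized by backpropagation through the network rather than read off a table.</p>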



<p><strong>Deep Reinforcement Learning vs Deep Learning</strong></p>



<p>One important difference between deep reinforcement learning and regular deep learning is that in the case of the former the inputs are constantly changing, which isn’t the case in traditional deep learning. How can the learning model account for inputs and outputs that are constantly shifting?</p>



<p>Essentially, to account for the divergence between predicted values and target values, two neural networks can be used instead of one. One network estimates the target values, while the other network is responsible for the predictions. The parameters of the target network are updated as the model learns, after a chosen number of training iterations have passed. The outputs of the respective networks are then joined together to determine the difference.</p>
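<p>A toy sketch of that two-network arrangement, with single numbers standing in for each network's parameters; the update size and sync interval are arbitrary:</p>

```python
# The online "parameters" change every step; the target "parameters" are
# frozen and copied over only every `sync_every` steps.
def train_steps(steps, sync_every):
    online, target = 0.0, 0.0
    history = []
    for step in range(1, steps + 1):
        online += 0.1               # stand-in for a gradient update
        if step % sync_every == 0:
            target = online         # periodic hard copy to the target network
        history.append((round(online, 1), round(target, 1)))
    return history

h = train_steps(steps=6, sync_every=3)
```

<p>Keeping the target fixed between syncs is what stops the prediction network from chasing a moving target on every step.</p>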



<h4 class="wp-block-heading">Policy-Based Learning</h4>



<p>Policy-based learning approaches operate differently than Q-value based approaches. While Q-value approaches create a value function that predicts rewards for states and actions, policy-based methods determine a policy that will map states to actions. In other words, the policy function that selects for actions is directly optimized without regard to the value function.</p>



<p><strong>Policy Gradients</strong></p>



<p>A policy for deep reinforcement learning falls into one of two categories: stochastic or deterministic. A deterministic policy is one where states are mapped to actions, meaning that when the policy is given information about a state an action is returned. Meanwhile, stochastic policies return a probability distribution for actions instead of a single, discrete action.</p>



<p>Deterministic policies are used when there is no uncertainty about the outcomes of the actions that can be taken. In other words, when the environment itself is deterministic. In contrast, stochastic policy outputs are appropriate for environments where the outcome of actions is uncertain. Typically, reinforcement learning scenarios involve some degree of uncertainty so stochastic policies are used.</p>
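<p>The two policy types can be contrasted in a few lines; the threshold rule and the 0.9/0.1 probabilities are made-up examples:</p>

```python
import random

def deterministic_policy(state):
    # One fixed action per state: same input, same output, every time.
    return 1 if state >= 0 else 0

def stochastic_policy(state, rng):
    # A distribution over actions; the action is sampled, not fixed.
    p_right = 0.9 if state >= 0 else 0.1
    return (1 if rng.random() < p_right else 0), p_right

rng = random.Random(1)
a_det = deterministic_policy(0.5)
a_sto, p_right = stochastic_policy(0.5, rng)
```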



<p>Policy gradient approaches have a few advantages over Q-learning approaches, as well as some disadvantages. In terms of advantages, policy-based methods converge on optimal parameters more quickly and reliably. The policy gradient can simply be followed until the best parameters are determined, whereas with value-based methods small changes in estimated action values can lead to large changes in actions and their associated parameters.</p>



<p>Policy gradients work better for high dimensional action spaces as well. When there is an extremely high number of possible actions to take, deep Q-learning becomes impractical because it must assign a score to every possible action for all time steps, which may be impossible computationally. However, with policy-based methods, the parameters are adjusted over time and the number of possible best parameters quickly shrinks as the model converges.</p>



<p>Policy gradients are also capable of implementing stochastic policies, unlike value-based methods. Because stochastic policies produce a probability distribution, an exploration/exploitation trade-off does not need to be implemented by hand.</p>



<p>In terms of disadvantages, the main drawback of policy gradients is that they can get stuck while searching for optimal parameters, converging on a narrow set of locally optimal values instead of the global optimum.</p>



<p><strong>Policy Score Function</strong></p>



<p>The policies used to optimize a model’s performance aim to maximize a score function, J(θ). Since J(θ) is a measure of how good our policy is for achieving the desired goal, we can find the values of “θ” that give us the best policy. First, we need to calculate an expected policy reward. We estimate the policy reward so we have an objective, something to optimize towards. The Policy Score Function is how we calculate the expected policy reward, and there are several Policy Score Functions in common use, such as the start value for episodic environments, the average value for continuous environments, and the average reward per time step.</p>
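<p>One of the score functions listed above, the average reward per time step, is simple enough to show directly; the sampled trajectories below are invented for illustration:</p>

```python
# J(theta) estimated as total reward over total steps, across trajectories
# collected under the current policy.
def average_reward_per_step(trajectories):
    total = sum(sum(t) for t in trajectories)
    steps = sum(len(t) for t in trajectories)
    return total / steps

j = average_reward_per_step([[1.0, 0.0, 1.0], [0.0, 1.0]])   # 3 reward over 5 steps
```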



<p><strong>Policy Gradient Ascent</strong></p>



<p>After the desired Policy Score Function has been chosen and an expected policy reward calculated, we can find a value for the parameter “θ” which maximizes the score function. In order to maximize the score function J(θ), a technique called “gradient ascent” is used. Gradient ascent is similar in concept to gradient descent in deep learning, but we optimize for the steepest increase instead of the steepest decrease. This is because our score is not an “error”, as in many deep learning problems; it is something we want to maximize. An expression called the Policy Gradient Theorem is used to estimate the gradient with respect to the policy parameters “θ”.</p>
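<p>Gradient ascent itself can be demonstrated on a toy score function; J(theta) = -(theta - 3)^2 below is an arbitrary stand-in with its maximum at theta = 3, not the policy score from the text:</p>

```python
# Ascend a toy score J(theta) = -(theta - 3)**2, which is maximized at theta = 3.
def grad_j(theta):
    return -2.0 * (theta - 3.0)   # dJ/dtheta

theta = 0.0
for _ in range(100):
    theta += 0.1 * grad_j(theta)  # ascent: add the gradient (descent would subtract)
```

<p>The only difference from gradient descent is the sign: the parameter moves up the slope of the score instead of down the slope of an error.</p>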



<h4 class="wp-block-heading">Summing Up</h4>



<p>In summary, deep reinforcement learning combines aspects of reinforcement learning and deep neural networks. Deep reinforcement learning is done with two different techniques: Deep Q-learning and policy gradients.</p>



<p>Deep Q-learning methods aim to predict which rewards will follow certain actions taken in a given state, while policy gradient approaches aim to optimize the action space, predicting the actions themselves. Policy-based approaches to deep reinforcement learning are either deterministic or stochastic in nature. Deterministic policies map states directly to actions while stochastic policies produce probability distributions for actions.</p>
<p>The post <a href="https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/">What is Deep Reinforcement Learning?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/what-is-deep-reinforcement-learning-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</title>
		<link>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/</link>
					<comments>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Apr 2020 06:44:55 +0000</pubDate>
				<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google Developers]]></category>
		<category><![CDATA[machine learning (ML)]]></category>
		<category><![CDATA[scientists]]></category>
		<category><![CDATA[software]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8218</guid>

					<description><![CDATA[<p>Source: iflscience.com Machine learning (ML) is a method by which algorithms adapt their activity using inputted data, rather than being programmed to do so. But building and <a class="read-more-link" href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: iflscience.com</p>



<p>Machine learning (ML) is a method by which algorithms adapt their activity using input data, rather than being explicitly programmed to do so. But building and “training” these algorithms takes time, and can often ingrain human biases.</p>



<p>To overcome these limitations, and enable further innovation in machine learning, researchers have explored the field of AutoML, whereby the machine learning process can be progressively automated, relying on machine compute time, rather than human research time.</p>



<p>So far, although some steps have been automated, the benchmark of virtually zero human input has yet to be attained. However, a team of scientists from Google have seen some “preliminary success” in discovering machine learning algorithms from scratch, indicating a “promising new direction for the field.”</p>



<p>In a paper, published on the preprint server arXiv, Quoc Le, a computer scientist at Google, and colleagues, employed concepts from Darwinian evolution, such as natural selection, to enable ML algorithms to improve generation upon generation. Combining basic mathematical operations, their program, called AutoML-Zero, generated 100 unique algorithms that they then tested on simple tasks, such as image recognition.</p>



<p>After their performance was compared to hand-designed algorithms, the best were kept, small random “mutations” were introduced into their code, and the weaker candidates were removed. As the cycle continued, a high-performing set of algorithms was found, some of them comparable to a number of classic machine learning techniques – such as neural networks (a kind of computer program that loosely mimics how our brain cells work together to make decisions).</p>
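<p>The evaluate-select-mutate cycle the article describes can be caricatured in a few lines. Here each “algorithm” is just a single number and fitness is closeness to a target value, a drastic simplification of AutoML-Zero’s program space, purely for illustration:</p>

```python
import random

def evolve(generations=40, pop_size=20, keep=5, target=10.0, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-20, 20) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda x: abs(x - target))   # fitness: distance to target
        survivors = pop[:keep]                    # selection: keep the best
        # Mutation: each survivor spawns perturbed copies to refill the population.
        children = [s + rng.gauss(0, 1.0) for s in survivors
                    for _ in range(pop_size // keep - 1)]
        pop = survivors + children
    return min(pop, key=lambda x: abs(x - target))

best = evolve()
```

<p>Because the best candidates always survive, the population can only improve generation over generation, which is the property the researchers rely on at far larger scale.</p>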



<p>This proves the team’s concept, Le told Science Magazine, but he is hopeful that the processes can be scaled up to eventually create much more complex AIs, which human researchers could never find.</p>



<p>“Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks,” the team wrote in the paper, which is awaiting peer-review.</p>



<p>“Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent, multiplicative interactions, weight averaging, normalized gradients, etc.” the authors continued. “These results are promising, but there is still much work to be done.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/">Google Scientists Develop Software That Could Enable AI To Evolve With No Human Input</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/google-scientists-develop-software-that-could-enable-ai-to-evolve-with-no-human-input/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
