<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>modernize Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/modernize/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/modernize/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 12 Jun 2021 05:18:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>HOW ARTIFICIAL INTELLIGENCE IS FAVORABLE TO MODERNIZE THE METHODS USED FOR VULNERABILITY ASSESSMENTS</title>
		<link>https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/</link>
					<comments>https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 12 Jun 2021 05:18:54 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ASSESSMENTS]]></category>
		<category><![CDATA[FAVORABLE]]></category>
		<category><![CDATA[Methods]]></category>
		<category><![CDATA[modernize]]></category>
		<category><![CDATA[VULNERABILITY]]></category>
		<guid isPermaLink="false">https://www.aiuniverse.xyz/?p=14227</guid>

					<description><![CDATA[<p>Source &#8211; https://blog.eccouncil.org/ Artificial Intelligence has now been incorporated in various fields with vast development and implementation, which have been proven to be of great benefit. Artificial <a class="read-more-link" href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/">HOW ARTIFICIAL INTELLIGENCE IS FAVORABLE TO MODERNIZE THE METHODS USED FOR VULNERABILITY ASSESSMENTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source &#8211; https://blog.eccouncil.org/</p>



<p>Artificial Intelligence has now been incorporated into many fields, with rapid development and implementation that have proven greatly beneficial. Artificial Intelligence, also known as AI, is the simulation of human intelligence in machines: they are programmed meticulously to think like human beings and replicate their activities. The main objective of an AI-driven machine is to perform the activities a human being would. Unlike human beings, AI machines are not forgetful, as they are built with colossal storage to record all the required data; they never tire, run on processors, and finish a specific task much faster than a person. With these benefits, AI has gained popularity and been adopted across sectors: the food industry (as waiters and chefs), organizations calculating and working on special projects, healthcare (analyzing patients, assisting treatment, and even performing operations), and especially cybersecurity, where models are trained to react to various situations, detect anomalies, threats, and risks based on generated patterns, and support vulnerability assessments.</p>



<p>AI has been proven to be beneficial in cybersecurity and has many advantages when detecting vulnerabilities and managing them. AI techniques and machine learning can be a great combination to resolve cyber-related threats, risks, and attacks, especially in vulnerability management to prevent attacks beforehand.</p>



<h2 class="wp-block-heading"><strong>Defining AI and Vulnerability Assessment</strong></h2>



<p>AI is a blanket term covering numerous advanced areas of computer science, from voice detection to natural language processing, robotics, and deep representation learning. Scientists and researchers aim to automate intelligent behavior in machines capable of performing human tasks, and they are continuously seeking new methodologies to do so. One AI component used expansively in many applications is machine learning: algorithms that use historical data to forecast outcomes or make decisions about a particular action. The more extensive the historical data, the better the model's decision-making becomes, producing more accurate predictions about situations and circumstances; in other words, it gets smarter. It improves over time and without human interference.</p>



<p>Vulnerability assessment is defined as the systematic review of security weaknesses in a system or network. It assesses whether the system is prone to any known vulnerabilities, assigns severity levels to those it finds, and suggests mitigation methods. The process of identifying vulnerabilities and resolving them is categorized into the following steps:</p>



<ul class="wp-block-list"><li>Vulnerability Identification</li><li>Analysis</li><li>Risk Assessments</li><li>Remediation</li></ul>
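<p>The four steps above can be sketched as a minimal, hypothetical pipeline. All host names, CVE identifiers, scores, and remediation actions below are invented for illustration; a real assessment would pull findings from an actual scanner.</p>

```python
# Minimal sketch of the four vulnerability-assessment steps; all data is illustrative.

def identify(hosts):
    # Step 1: identification - a pretend scan returning (host, cve, cvss) findings.
    return [(h, cve, score) for h in hosts
            for cve, score in [("CVE-2021-0001", 9.8), ("CVE-2021-0002", 4.3)]]

def analyze(findings):
    # Step 2: analysis - bucket each finding by severity.
    def bucket(score):
        return ("critical" if score >= 9.0 else "high" if score >= 7.0
                else "medium" if score >= 4.0 else "low")
    return [(host, cve, score, bucket(score)) for host, cve, score in findings]

def assess_risk(analyzed):
    # Step 3: risk assessment - order findings so the worst issues come first.
    return sorted(analyzed, key=lambda f: f[2], reverse=True)

def remediate(ranked):
    # Step 4: remediation - map each severity bucket to an action.
    actions = {"critical": "patch immediately", "high": "patch this week",
               "medium": "schedule patch", "low": "accept risk"}
    return [(host, cve, actions[sev]) for host, cve, score, sev in ranked]

plan = remediate(assess_risk(analyze(identify(["web-01", "db-01"]))))
for host, cve, action in plan:
    print(host, cve, action)
```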



<h2 class="wp-block-heading"><strong>Development of a Significant Vulnerability Risk Score</strong></h2>



<p>The vulnerability score is closely related to the risk score attached to vulnerabilities in the Common Vulnerabilities and Exposures (CVE) program. CVE is a list of records, each with a unique identification number used to identify, define, and catalog publicly disclosed vulnerabilities. It can also be incorporated into products and services per the terms of use. Though CVE is useful in determining a vulnerability and its possible severity, it lacks context, making it difficult to rationalize certain aspects. A vulnerability may be assigned a high-risk score, but on a specific network the affected asset may be secluded on a secured subnet, not connected to the internet, or on a device running no operations or services, resulting in little or no risk to the organization. CVE is an excellent starting point for context-based risk analysis: once the asset or device context is obtained, it is combined with knowledge of the external threat environment to generate an accurate, context-driven priority. This can be used to determine the severity or importance of the vulnerability risk.</p>
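<p>A hypothetical sketch of this context adjustment: start from a CVE base score and scale it by asset-context factors. The multiplier values here are illustrative assumptions, not part of any standard.</p>

```python
# Hypothetical context adjustment of a CVE base score; the weights are
# illustrative assumptions, not CVSS environmental-metric values.

def contextual_risk(base_score, internet_facing, on_secured_subnet, runs_services):
    """Scale a 0-10 base score by simple asset-context multipliers."""
    factor = 1.0
    if not internet_facing:
        factor *= 0.5   # unreachable from outside, so lower urgency
    if on_secured_subnet:
        factor *= 0.7   # isolated network segment
    if not runs_services:
        factor *= 0.3   # nothing exploitable is listening
    return round(min(10.0, base_score * factor), 1)

# A "critical" CVE on an isolated, service-free box becomes low priority:
print(contextual_risk(9.8, internet_facing=False, on_secured_subnet=True,
                      runs_services=False))
```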



<h2 class="wp-block-heading"><strong>Vulnerability Exploitation: The Latest Trends</strong></h2>



<p>Brand marketers use AI-based analyses to assess posts about their products or services on social media platforms. The results let marketing teams understand how the public perceives their products and how that perception has changed over time; data is collected over a period and compared to decide what is lacking and what should be improved. Similarly, cybersecurity chat boards and other online sources of cybersecurity information and interaction can be collected and analyzed. With AI, this analysis can identify which vulnerabilities are chiefly being exploited based on the data collected. The technologies used to analyze data gathered from interactions, polls, and other sources are neural networks and Natural Language Processing (NLP). NLP can recognize the exact meaning, positive and negative sentiment, and accurate technical information in a transcript. AI interprets vast amounts of data and merges the meanings to gain context for the risks of the given vulnerabilities.</p>



<h2 class="wp-block-heading"><strong>Asset Detection</strong></h2>



<p>Detecting all assets and devices is essential for an effective vulnerability assessment, especially those that are atypical or uncategorized in a given context. Conventional methods are not efficient at detecting uncategorized assets, such as a Linux server running database services in a fleet of Windows machines. These conditions demand the utmost priority from security teams. Pattern-recognition AI techniques are implemented to identify and distinguish unique, uncategorized assets, with novelty, anomaly, or outlier detection algorithms used to surface them. The best-known effective algorithm is the Isolation Forest, where numerous multidimensional representations are used to compare the characteristics of assets and devices. Uncategorized assets are detected and flagged for identification.</p>
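<p>The article names Isolation Forest (available, for example, as scikit-learn's <code>IsolationForest</code>); to stay dependency-free, the sketch below uses a simpler z-score outlier test over the same kind of multidimensional asset features. The feature vectors and fleet names are invented for the example.</p>

```python
# Simplified outlier flagging of asset feature vectors. An Isolation Forest
# (e.g. scikit-learn's IsolationForest) would replace this z-score test in
# practice; the fleet data below is invented.

def flag_outliers(assets, threshold=2.0):
    """assets: dict name -> numeric feature vector; returns names of outliers."""
    dims = len(next(iter(assets.values())))
    n = len(assets)
    means = [sum(v[d] for v in assets.values()) / n for d in range(dims)]
    stds = [max((sum((v[d] - means[d]) ** 2 for v in assets.values()) / n) ** 0.5, 1e-9)
            for d in range(dims)]
    outliers = []
    for name, vec in assets.items():
        # An asset is an outlier if any feature is far from the fleet average.
        z = max(abs(vec[d] - means[d]) / stds[d] for d in range(dims))
        if z > threshold:
            outliers.append(name)
    return outliers

# Features: (open_ports, is_linux, runs_database) - one Linux DB server among Windows hosts.
fleet = {
    "win-01": (3, 0, 0), "win-02": (4, 0, 0), "win-03": (3, 0, 0),
    "win-04": (4, 0, 0), "win-05": (3, 0, 0),
    "linux-db-01": (12, 1, 1),
}
print(flag_outliers(fleet))
```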



<h2 class="wp-block-heading"><strong>Detection Reliability Assessment</strong></h2>



<p>It is crucial to determine whether a vulnerability is actually exploitable, as the vulnerability detection process produces a high rate of false positives. AI methods can be applied to vulnerability detection to significantly reduce false-positive outcomes by catching misdetections. Other observations, such as which services are running, are combined with the flagged finding to confirm the legitimacy of the identified vulnerability. With experience, AI systems can accurately distinguish false positives from legitimate vulnerabilities.</p>



<p>Bayesian networks are used to improve the reliability of vulnerability detection, determining whether a flagged vulnerability is legitimate. The technique incorporates other observations as pieces of evidence in the assessment procedure. Bayesian networks are far more efficient and effective and, when applied, support intelligence analysis that balances imperfect scanning techniques with proficient human knowledge.</p>
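<p>The core idea can be shown with plain Bayes' rule: each piece of corroborating evidence updates the belief that a scanner finding is real rather than a false positive. The priors and likelihoods below are illustrative assumptions, not measured scanner statistics, and a full Bayesian network would model dependencies between evidence sources that this sketch ignores.</p>

```python
# Hedged sketch: Bayes' rule updating belief that a flagged vulnerability is
# real. Probabilities are illustrative, not measured scanner statistics.

def posterior_real(p_real, p_evidence_if_real, p_evidence_if_fp):
    """P(real | evidence) after observing one piece of corroborating evidence."""
    num = p_evidence_if_real * p_real
    den = num + p_evidence_if_fp * (1.0 - p_real)
    return num / den

belief = 0.30                                  # prior: 30% of raw findings are real
belief = posterior_real(belief, 0.95, 0.20)    # evidence: vulnerable service is running
belief = posterior_real(belief, 0.80, 0.10)    # evidence: version banner matches advisory
print(round(belief, 3))
```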



<h2 class="wp-block-heading"><strong>Leveraging Industry Vulnerability Remediation Priority Data</strong></h2>



<p>Every contemporary vulnerability assessment product has cloud-based components, and some are completely cloud-based. Cloud-based vulnerability assessment and management platforms are extremely beneficial; one important benefit is that user data can be anonymized and discarded from the applications. Organizations remediate vulnerabilities daily, and the remediation procedures performed across many customers give cloud-based products a rich data source on which an AI engine can operate. The source changes constantly as new data is collected, which can either strengthen or contradict conventional approaches to vulnerability prioritization. Applying AI to actual remediation data yields insights based on the shared judgments of many sources. Gradient Boosted Tree Regression is a machine learning technique that, combined with user behavioral patterns and preferences, predicts what is essential, which helps teams understand and remediate vulnerabilities.</p>
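<p>As a from-scratch illustration of the gradient-boosted-tree idea, the sketch below boosts one-feature decision stumps to regress a remediation-priority score from invented vulnerability features. A production system would use a library such as scikit-learn or XGBoost; the features, targets, and hyperparameters here are all assumptions for the example.</p>

```python
# Minimal from-scratch gradient boosting with decision stumps, illustrating
# the remediation-priority regression the article describes. All data and
# hyperparameters are invented; real systems would use a library implementation.

def fit_stump(xs, residuals):
    """Best single-feature threshold split minimizing squared error on residuals."""
    best = None
    for f in range(len(xs[0])):
        for t in sorted({x[f] for x in xs}):
            left = [r for x, r in zip(xs, residuals) if x[f] <= t]
            right = [r for x, r in zip(xs, residuals) if x[f] > t]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, f, t, lmean, rmean)
    _, f, t, lmean, rmean = best
    return lambda x: lmean if x[f] <= t else rmean

def boost(xs, ys, rounds=20, lr=0.3):
    base = sum(ys) / len(ys)
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # fit what is still unexplained
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Features: (cvss, internet_facing, exploit_seen_in_wild) -> observed priority.
X = [(9.8, 1, 1), (9.8, 0, 0), (7.5, 1, 1), (7.5, 0, 0), (4.3, 1, 0), (4.3, 0, 0)]
y = [10.0, 4.0, 8.5, 3.0, 5.0, 1.0]
model = boost(X, y)
print(round(model((9.8, 1, 1)), 1), round(model((4.3, 0, 0)), 1))
```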



<h2 class="wp-block-heading"><strong>Remediation Plan Recommendations</strong></h2>



<p>A list of vulnerabilities is established and prioritized based on context achieved using AI techniques; delivering focused remediation solutions is the last step in the vulnerability assessment procedure. AI plays a major role in reaching those solutions, drawing on the programs and algorithms used to differentiate anomalies, threats, and risks. AI techniques speed up detection and provide solutions in less time, maximizing risk reduction while minimizing remediation activity. A Risk-Aware Recommender System is a hybrid of collaborative filtering and content-based systems that proposes multiple remediation scenarios. The vulnerability management recommender considers the risk reduction each remediation scenario can afford, with the help of the AI-generated risk scores.</p>



<p>Many advances have been made in cybersecurity with the help of AI techniques. AI machines and techniques require less human interference, produce more accurate results, and work quickly. As complexity and other risk factors increase, AI can take load off the conventional vulnerability management team, efficiently store data, and recall situations detected long ago.</p>



<p>AI can be very useful in the field of cybersecurity. It is fast, can predict and identify potential threats, risks, and vulnerabilities present in a system, and helps mitigate risk factors and obtain feasible solutions. Even so, security teams need to practice privacy-enhancement methods, which are crucial to developing security habits that help individuals take preventive measures in all aspects. One way to learn how to apply AI techniques in cybersecurity is to pursue ethical hacking training, which is essential for understanding vulnerability assessment procedures and the tools required to mitigate potentially exploitable vulnerabilities. One such top ethical hacking certification is the EC-Council's Certified Ethical Hacker (CEH), which provides in-depth knowledge of the hacker's role and other components essential to securing the cyber environment, along with practical, hands-on training for dealing with real-world problems.</p>



<p>Certified Ethical Hacker is one such course that teaches the role of AI and how it is implemented in cybersecurity to reduce and mitigate risks, vulnerabilities, and other factors.</p>



<p>The post <a href="https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/">HOW ARTIFICIAL INTELLIGENCE IS FAVORABLE TO MODERNIZE THE METHODS USED FOR VULNERABILITY ASSESSMENTS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-artificial-intelligence-is-favorable-to-modernize-the-methods-used-for-vulnerability-assessments/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>OpenLegacy raises $20M to help businesses modernize legacy applications</title>
		<link>https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/</link>
					<comments>https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Mon, 03 Feb 2020 07:05:58 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Businesses]]></category>
		<category><![CDATA[cloud-native era]]></category>
		<category><![CDATA[modernize]]></category>
		<category><![CDATA[OpenLegacy]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6485</guid>

					<description><![CDATA[<p>Source: siliconangle.com Application modernization startup OpenLegacy Inc. has raised $20 million to help it pursue its mission of helping financial services companies move their information technology infrastructures into the cloud-native era. <a class="read-more-link" href="https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/">OpenLegacy raises $20M to help businesses modernize legacy applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: siliconangle.com</p>



<p>Application modernization startup OpenLegacy Inc. has raised $20 million to help it pursue its mission of helping financial services companies move their information technology infrastructures into the cloud-native era.</p>



<p>Japan’s SBI Holdings Inc. was the sole investor in the latest round, announced late last week. It brings OpenLegacy’s total funding to date to $70 million.</p>



<p>Princeton, New Jersey-based OpenLegacy is an interesting startup that helps organizations breathe new life into older, legacy business applications. It does so by automating and standardizing the process of extending them with microservices-based application programming interfaces. The APIs make it possible for developers to add new functions to older applications without making risky and expensive changes to their code base.</p>



<p>Microservices are self-contained software functions that usually run a single business process and can be deployed as needed. Many modern applications are now constructed entirely of microservices because the architecture is more flexible, faster and more reliable than that of traditional monolithic applications.</p>



<p>Although many companies have toolkits for building and managing APIs, OpenLegacy claims to be the only one targeting the large installed base of enterprise applications that were created before the advent of cloud services. It says its software can integrate APIs into complex legacy applications within minutes.</p>



<p>SBI Holdings said it was using OpenLegacy’s technology to build new Internet banking features for its portfolio of Japanese banks, and to create a new blockchain-based payment app called Moneytap.</p>



<p>“OpenLegacy will enable our portfolio companies to quickly launch digital innovations, integrating, leveraging, and extending our legacy systems in a fraction of the time, all without changing the underlying systems,” Yoshitaka Kitao, SBI Holdings’ chief executive officer, said in a statement.</p>



<p>SBI Holdings joins a number of financial services companies that are already using OpenLegacy’s tools to modernize their apps and services, including Citigroup Inc., Liberty Mutual Group and BNP Paribas Group.</p>
<p>The post <a href="https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/">OpenLegacy raises $20M to help businesses modernize legacy applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/openlegacy-raises-20m-to-help-businesses-modernize-legacy-applications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How machine learning and automation can modernize the network edge</title>
		<link>https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/</link>
					<comments>https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 18 Jan 2020 07:35:20 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[modernize]]></category>
		<category><![CDATA[network]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6234</guid>

					<description><![CDATA[<p>Source: siliconangle.com Applications are expected to move from data centers to edge facilities in record numbers, opening up a huge new market opportunity. The edge computing market <a class="read-more-link" href="https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/">How machine learning and automation can modernize the network edge</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: siliconangle.com</p>



<p>Applications are expected to move from data centers to edge facilities in record numbers, opening up a huge new market opportunity. The edge computing market is expected to grow at a compound annual growth rate of 36.3 percent between now and 2022, fueled by rapid adoption of the “internet of things,” autonomous vehicles, high-speed trading, content streaming and multiplayer games.</p>



<p>What these applications have in common is a need for near zero-latency data transfer, usually defined as less than five milliseconds, although even that figure is far too high for many emerging technologies.&nbsp;&nbsp;</p>



<p>The specific factors driving the need for low latency vary. In IoT applications, sensors and other devices capture enormous quantities of data, the value of which degrades by the millisecond. Autonomous vehicles require information in real-time to navigate effectively and avoid collisions. The best way to support such latency-sensitive applications is to move applications and data as close as possible to the data ingestion point, thereby reducing the overall round-trip time. Financial transactions now occur at sub-millisecond cycle times, leading one brokerage firm to invest more than $100 million to overhaul its stock trading platform in a quest for faster and faster trades.</p>



<h3 class="wp-block-heading">Operational challenges</h3>



<p>As edge computing grows, so do the operational challenges for telecommunications service providers such as Verizon Communications Inc., AT&amp;T Corp. and T-Mobile USA Inc. For one thing, moving to the edge essentially disaggregates the traditional data center. Instead of massive numbers of servers located in a few centralized data centers, the provider edge infrastructure consists of thousands of small sites, most with just a handful of servers. All of those sites require support to ensure peak performance, which strains the resources of the typical information technology group to the breaking point &#8212; and sometimes beyond.&nbsp;</p>



<p>Another complicating factor is network functions moving toward cloud-native applications deployed on virtualized, shared and elastic infrastructure, a trend that has been accelerating in recent years. In a virtualized environment, each physical server hosts dozens of virtual machines and/or containers that are constantly being created and destroyed at rates far faster than humans can effectively manage. Orchestration tools automatically manage the dynamic virtual environment in normal operation, but when it comes to troubleshooting, humans are still in the driver’s seat.&nbsp;</p>



<p>And it’s a hot seat to be in. Poor performance and service disruptions hurt the service provider’s business, so the organization puts enormous pressure on the IT staff to resolve problems quickly and effectively. The information needed to identify root causes is usually there. In fact, navigating the sheer volume of telemetry data from hardware and software components is one of the challenges facing network operators today.&nbsp;</p>



<h3 class="wp-block-heading">Machine learning and automation&nbsp;</h3>



<p>A data-rich, highly dynamic, dispersed infrastructure is the perfect environment for artificial intelligence, specifically machine learning. The great strength of machine learning is the ability to find meaningful patterns in massive amounts of data that far outstrip the capabilities of network operators. Machine learning-based tools can self-learn from experience, adapt to new information and perform humanlike analyses with superhuman speed and accuracy.&nbsp;&nbsp;</p>



<p>To realize the full power of machine learning, insights must be translated into action — a significant challenge in the dynamic, disaggregated world of edge computing. That’s where automation comes in.</p>



<p>Using the information gained by machine learning and real-time monitoring, automated tools can provision, instantiate and configure physical and virtual network functions far faster and more accurately than a human operator. The combination of machine learning and automation saves considerable staff time, which can be redirected to more strategic initiatives that create additional operational efficiencies and speed release cycles, ultimately driving additional revenue.&nbsp;</p>



<h3 class="wp-block-heading">Scaling cloud-native applications</h3>



<p>Until recently, the software development process for a typical telco consisted of a lengthy sequence of discrete stages that moved from department to department and took months or even years to complete. Cloud-native development has largely made obsolete this so-called “waterfall” methodology in favor of a high-velocity, integrated approach based on leading-edge technologies such as microservices, containers, agile development, continuous integration/continuous deployment and DevOps. As a result, telecom providers roll out services at unheard-of velocities, often multiple releases per week.&nbsp;</p>



<p>The move to the edge poses challenges for scaling cloud-native applications. When the environment consists of a few centralized data centers, human operators can manually determine the optimum configuration needed to ensure the proper performance for the virtual network functions or VNFs that make up the application.</p>



<p>However, as the environment disaggregates into thousands of small sites, each with slightly different operational characteristics, machine learning is required. Unsupervised learning algorithms can run all the individual components through a pre-production cycle to evaluate how they will behave in a production site. Operations staff can use this approach to develop a high level of confidence that the VNF being tested is going to come up in the desired operational state at the edge.&nbsp;</p>



<h3 class="wp-block-heading">Troubleshooting at the speed of AI&nbsp;</h3>



<p>AI and automation can also add significant value in troubleshooting within cloud-native environments. Take the case of a service provider running 10 instances of a voice call processing application as a cloud-native application at an edge location. A remote operator notices that one VNF is performing significantly below the other nine.&nbsp;&nbsp;</p>



<p>The first question is, &#8220;Do we really have a problem?&#8221; Some variation in performance between application instances is not unusual, so answering the question requires determining the normal range of VNF performance values in actual operation. A human operator could take readings of a large number of instances of the VNF over a specified time period and then calculate the acceptable key performance indicator values &#8212; a time-consuming and error-prone process that must be repeated frequently to account for software upgrades, component replacements, traffic pattern variations and other parameters that affect performance.</p>



<p>In contrast, AI can determine KPIs in a fraction of the time and adjust the KPI values as needed when parameters change, all with no outside intervention. Once AI determines the KPI values, automation takes over. An automated tool can continuously monitor performance, compare the actual value to the AI-determined KPI and identify underperforming VNFs.</p>
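<p>The baseline-then-flag loop described above can be sketched in a few lines: derive a KPI baseline from peer VNF readings, then flag any instance falling well below it. The VNF names, call rates, and the two-standard-deviation threshold are illustrative assumptions.</p>

```python
# Sketch: derive a KPI baseline from peer VNF readings and flag underperformers.
# The readings and the 2-standard-deviation threshold are illustrative.

def kpi_baseline(readings):
    n = len(readings)
    mean = sum(readings) / n
    std = (sum((r - mean) ** 2 for r in readings) / n) ** 0.5
    return mean, std

def underperformers(vnf_readings, k=2.0):
    """vnf_readings: dict vnf_name -> calls handled per second."""
    mean, std = kpi_baseline(list(vnf_readings.values()))
    return [name for name, r in vnf_readings.items() if r < mean - k * std]

# Ten call-processing VNF instances; one is handling far fewer calls than its peers.
vnfs = {f"vnf-{i:02d}": rate for i, rate in enumerate(
    [1010, 995, 1002, 988, 1005, 999, 1012, 991, 1004, 610])}
print(underperformers(vnfs))
```

The flagged names would then be handed to the orchestrator for remediation, as the next paragraph describes.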



<p>That information can then be forwarded to the orchestrator for remedial action such as spinning up a new VNF or moving the VNF to a new physical server. The combination of AI and automation helps ensure compliance with service-level agreements and removes the need for human intervention — a welcome change for operators weary of late-night troubleshooting sessions.&nbsp;</p>



<h3 class="wp-block-heading">Harnessing the competitive edge</h3>



<p>As service providers accelerate their adoption of edge-oriented architectures, IT groups must find new ways to optimize network operations, troubleshoot underperforming VNFs and ensure SLA compliance at scale. Artificial intelligence technologies such as machine learning, combined with automation, can help them do that.</p>



<p>In particular, there have been a number of advancements over the last few years to enable this AI-driven future. They include systems and devices to provide high-fidelity, high-frequency telemetry that can be analyzed, highly scalable message buses such as Kafka and Redis that can capture and process that telemetry, and compute capacity and AI frameworks such as TensorFlow and PyTorch to create models from the raw telemetry streams. Taken together, they can determine in real time if operations of production systems are in conformance with standards and find problems when there are disruptions in operations.</p>



<p>All that has the potential to streamline operations and give service providers a competitive edge — at the edge.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/">How machine learning and automation can modernize the network edge</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-machine-learning-and-automation-can-modernize-the-network-edge/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
