<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Transformation Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/transformation/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/transformation/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Sat, 04 Jul 2020 08:56:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Neutrinos, eBaoTech partner to offer digital insurance applications</title>
		<link>https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/</link>
					<comments>https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 04 Jul 2020 08:56:09 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[digital insurance]]></category>
		<category><![CDATA[eBaoTech]]></category>
		<category><![CDATA[Microservice]]></category>
		<category><![CDATA[Neutrinos]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9990</guid>

					<description><![CDATA[<p>Source: verdict.co.uk eBaoTech Corporation and Neutrinos have partnered to offer new-age digital insurance applications to insurance carriers, brokers and InsurTechs. These applications will be powered by eBaoTech’s <a class="read-more-link" href="https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/">Neutrinos, eBaoTech partner to offer digital insurance applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: verdict.co.uk</p>



<p>eBaoTech Corporation and Neutrinos have partnered to offer new-age digital insurance applications to insurance carriers, brokers and InsurTechs.</p>



<p>These applications will be powered by eBaoTech’s Insurance PaaS platform eBaoCloud InsureMO (InsureMO) and Neutrinos’ Low Code Multi-Experience Development Platform (MXDP).</p>



<p>Joint customers can utilise the omnichannel front-end application development capabilities offered by Neutrinos’ Low Code platform.</p>



<p>They will also be able to leverage the rich insurance APIs for the whole policy lifecycle provided by eBaoCloud InsureMO.</p>



<p>eBaoCloud InsureMO is a containerised industry middleware based on a microservices architecture. It is said to accelerate innovation and deepen connectivity for insurers, brokers, agents, MGAs, affinity channels, and InsurTech startups.</p>



<p>The platform includes common APIs needed to manage the whole life cycle of General, Life and Health insurance policies, such as quotation, illustration, underwriting, payment, and claims.</p>



<p>Furthermore, it can seamlessly integrate with external applications and services such as OCR, voice recognition, payment and location by API calls.</p>



<p>Neutrinos’ MXDP offers a range of digital insurance distribution solutions to help insurers rapidly build applications on disruptive technologies and cater to customer demands.</p>



<p>The Neutrinos Suite of Insurance solutions is said to offer cost, resource, and time optimization.</p>



<p>Commenting on the latest development, eBaoTech corporate vice-president and Sales and Strategy head Rajat Sharma said: “We are excited to partner with Neutrinos and leverage the power of Low code platforms to deliver Insurance applications powered by InsureMO.</p>



<p>“With over 3,000 products from over 120 insurance companies across more than 10 countries configured on eBaoCloud InsureMO, we are sure that together with Neutrinos we will create enormous value for rapid digital transformation in the Insurance industry.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/">Neutrinos, eBaoTech partner to offer digital insurance applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/neutrinos-ebaotech-partner-to-offer-digital-insurance-applications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Guardian Life steers tech transformation with microservices</title>
		<link>https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/</link>
					<comments>https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 30 May 2020 10:10:59 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9158</guid>

					<description><![CDATA[<p>Source: ciodive.com By now, the benefits of a cloud-based model are well documented, highlighting the potential for flexibility, agility and an augmented cost structure. More lamentation is overkill. <a class="read-more-link" href="https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/">Guardian Life steers tech transformation with microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: ciodive.com</p>



<p>By now, the benefits of a cloud-based model are well documented, highlighting the potential for flexibility, agility and an augmented cost structure. More lamentation is overkill.</p>



<p>The next sign of industry maturity is not acceptance —&nbsp;it&#8217;s technique. Each cloud implementation requires a tailored approach, accounting for business needs and objectives. And that can vary from one department to the next.&nbsp;</p>



<p>While it is alluring to shift away from a legacy architecture, cloud migrations require a clear tie to underlying business goals. Without them, it&#8217;s transformation for transformation&#8217;s sake, a method doomed from the outset, especially as CFOs question technology spending.</p>



<p>Guardian Life Insurance Company of America is undergoing a more-than-four-year technology modernization roadmap to replatform products, reduce costs and shrink the time to develop new offerings. It requires moving off mainframes, the reliable technology backbone of the financial services sector.</p>



<p>&#8220;Getting rid of the legacy is certainly a goal and a product of what we&#8217;re doing,&#8221; John Furlong, VP and head of business transformation at Guardian Life Insurance Company of America, told CIO Dive. The mainframe was in many ways a &#8220;limitation or inhibitor&#8221; and is being phased out as a byproduct.</p>



<p>Guardian Life has clear goals tied to its business transformation, Furlong said:&nbsp;</p>



<ul class="wp-block-list"><li>Future-proofing products and services to allow for more responsive time to market, including the quick introduction of new products.</li><li>Understanding how customers want to interact with Guardian, outside of legacy operating models and in an omnichannel setting.</li><li>Expanding channels, as more employers and brokers want to connect with Guardian using APIs and digital pathways.</li><li>Introducing economies of scale, efficiency and effectiveness.</li></ul>



<p>Guardian&#8217;s transformation is built on cloud-enabled microservices, modernizing from a largely mainframe-based technology stack in its middle and back office.</p>



<p>In microservices, an app is decomposed into services that are modular and independent, Arun Chandrasekaran, distinguished research vice president at Gartner, told CIO Dive. The technology stack takes on a more horizontal architecture, compared to the vertically-inclined monolithic applications.&nbsp;</p>



<p>Think of it as a Lego block, Chandrasekaran said. In microservices, there are numerous Lego pieces serving as independent building blocks. With that architecture, there is no single point of failure, because each is independent and modular.&nbsp;</p>



<p>One failure won&#8217;t break the system, he said.&nbsp;</p>
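<p>The failure isolation described above can be sketched in a few lines of Python (the service names and responses are hypothetical, not Guardian's actual stack): each service is invoked independently, so an exception in one is caught and contained instead of failing the whole request.</p>

```python
# Toy sketch of microservice failure isolation. Each "service" is an
# independent unit; the gateway catches a failure in one without
# affecting the others. Service names are hypothetical.

def quote_service(product):
    raise TimeoutError("quote service is down")  # simulate a failed service

def policy_service(product):
    return {"product": product, "status": "policy issued"}

def handle_request(product):
    """Call each service independently, isolating any failure."""
    results = {}
    for name, svc in {"quote": quote_service, "policy": policy_service}.items():
        try:
            results[name] = svc(product)
        except Exception as exc:
            # the failure stays contained to this one service
            results[name] = {"error": str(exc)}
    return results

print(handle_request("term-life"))
```

<p>Here the quote service fails, yet the policy service still answers, which is the "one failure won't break the system" property in miniature.</p>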



<h3 class="wp-block-heading"><strong>Where microservices comes in</strong></h3>



<p>Guardian, the fourth-largest mutual life insurance company in the U.S., splits its business into two main parts: </p>



<ul class="wp-block-list"><li>The group business, its fastest-growing part, representing 40-45% of annual firm revenues, Furlong said.</li><li>Individual markets, including life, disability and annuity.</li></ul>



<p>The bulk of the transformation is focused on the group business, which has an off-mainframe goal, running on unified cloud platforms built around a common architecture, according to Furlong.&nbsp;</p>



<p>The company works with solutions provider&nbsp;DXC&nbsp;Technology, which operates about 60% of Guardian&#8217;s application development and helps its innovation program and production support.&nbsp;</p>



<p>The end goal is accelerating time to market for products. If Guardian adds a product today, the cycle time takes more than a year, Furlong said. Before a product can move forward, Guardian has to define product rules in about 10 group systems.</p>



<p>In each case, Guardian has to define the same product in the technology or terminology the corresponding system knows, Furlong said. In microservices, one definition works across systems.&nbsp;</p>



<p>Adopting microservices separates front- and back-end components, allowing companies to more rapidly scale workloads, said Chandrasekaran. It also improves availability and resiliency.</p>



<p>If there is a failure, it&#8217;s isolated and its impact does not trickle to other services. Technology industry leaders such as Netflix rely on microservices to quickly deliver offerings.</p>



<p>Companies do not have to migrate off legacy infrastructure to adopt microservices. But cloud-based systems offer the agility microservices require, including the tools and APIs that support them.</p>
<p>The post <a href="https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/">Guardian Life steers tech transformation with microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/guardian-life-steers-tech-transformation-with-microservices/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>THE FOUR MAJOR TYPES OF DATA ANALYTICS</title>
		<link>https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/</link>
					<comments>https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 26 May 2020 06:27:17 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=9013</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The big data transformation has brought forth various types and phases of data analysis. Meeting rooms across organizations are humming with data analytics, offering enterprise-wide <a class="read-more-link" href="https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/">THE FOUR MAJOR TYPES OF DATA ANALYTICS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The big data transformation has brought forth various types and phases of data analysis. Meeting rooms across organizations are humming with data analytics, offering enterprise-wide solutions for business success. But what do these really mean to organizations? The key to effectively utilizing big data is picking the right data, the data that delivers the insight organizations need to gain a competitive edge. The principal objective of big data analytics is to help companies make smarter decisions for better business outcomes.</p>



<p>The four predominant kinds of analytics – descriptive, diagnostic, predictive and prescriptive – are interrelated solutions that help organizations make the most of the big data they have, and each offers a different insight. In this article we explore the four types to understand what each delivers to enhance an organization’s operational capabilities.</p>



<h4 class="wp-block-heading"><strong>Descriptive Analytics</strong></h4>



<p>Descriptive analytics answers the question of what happened. Having analyzed monthly revenue and income per product group and the total quantity of metal parts made each month, a manufacturer was able to answer a series of ‘what happened’ questions and settle on focus product categories.</p>
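<p>A minimal sketch of that kind of "what happened" question in Python, using invented product categories and revenue figures: aggregate past sales per category, then rank them to pick the focus products.</p>

```python
# Descriptive analytics in miniature: summarize historical revenue per
# product category. All data and category names are hypothetical.
from collections import defaultdict

sales = [
    ("bolts",  "Jan", 1200), ("bolts",  "Feb", 1500),
    ("gears",  "Jan",  800), ("gears",  "Feb",  950),
    ("plates", "Jan",  400), ("plates", "Feb",  350),
]

revenue = defaultdict(int)
for category, month, amount in sales:
    revenue[category] += amount

# rank categories by total revenue to choose focus product categories
focus = sorted(revenue, key=revenue.get, reverse=True)
print(focus)  # ['bolts', 'gears', 'plates']
```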



<p>Descriptive analytics aggregates raw information from numerous data sources to give important insights into the past. However, these findings simply signal that something is right or wrong, without explaining why. Hence, data experts don’t recommend that highly data-driven companies rely on descriptive analytics alone; they would rather combine it with other kinds of data analytics.</p>



<h4 class="wp-block-heading"><strong>Diagnostic Analytics</strong></h4>



<p>Diagnostic analytics is the stage where the data assembled during descriptive analysis is compared against other measurements to discover why something happened. It allows organizations to identify anomalies, for example unexpected spikes in sales on a given day or heavy swings in site traffic. Here, data experts need to single out the right data sets to help them explain the inconsistency, and finding the answer regularly involves drawing data from external sources. Once the required information is on the table, the analysts establish causal relationships using techniques such as probability theory, regression analysis and filtering.</p>



<p>With diagnostic analytics, a hotel chain might compare the demand for VIP suites across locations or hotels in a district, while an insurance agency would, for instance, get insights into which age group uses dental treatment the most in a target area. Meanwhile, an online retail store might use diagnostic analytics to see which regions ordered a specific product from its new arrivals the most.</p>



<h4 class="wp-block-heading"><strong>Predictive Analytics</strong></h4>



<p>The next step in data analysis is predictive analytics. Breaking down past data trends and patterns can tell a business, with reasonable precision, what could happen later on. This aids in defining practical goals for the business, effective planning and tempering expectations. Organizations use predictive analytics to study their data and gaze into the crystal ball for answers to the question, “What could happen in the future based on past patterns and trends?”</p>



<p>Predictive analytics estimates the probability of a future result using various statistical and machine learning algorithms, yet forecasts are never 100% exact, since they rest on probabilities. To make forecasts, algorithms take the available data and fill in missing information with the best possible estimates. This information is pooled with historical data from CRM, POS, ERP and HR systems to search for data patterns and identify relationships among the variables in the dataset.</p>
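<p>The simplest form of this pattern-finding can be sketched as an ordinary least-squares trend line over historical figures, here with hypothetical quarterly sales rather than any real dataset:</p>

```python
# Illustrative sketch only: fit a least-squares trend line to past sales
# and project the next quarter. Data is invented for the example.

def fit_trend(ys):
    """Ordinary least squares fit of y = slope * x + intercept, x = 0..n-1."""
    n = len(ys)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

sales = [100, 110, 121, 128, 140]          # hypothetical quarterly sales
slope, intercept = fit_trend(sales)
forecast = slope * len(sales) + intercept  # projection for the next quarter
print(round(forecast, 1))
```

<p>Real predictive models add many more variables and probabilistic machinery, but the core idea, extrapolating from historical patterns, is the same.</p>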



<p>Companies like Walmart, Amazon and other retailers leverage predictive analytics to identify sales trends from customers’ purchase patterns, forecast customer behaviour and inventory levels, anticipate which products customers are likely to buy together so that they can offer personalized recommendations, and predict the number of sales at the end of the quarter or year.</p>



<h4 class="wp-block-heading"><strong>Prescriptive Analytics</strong></h4>



<p>The purpose of prescriptive analytics is to actually prescribe what action to take to head off a future problem or take full advantage of a promising trend. Prescriptive analytics utilizes advanced tools and technologies, like machine learning, business rules and algorithms, which makes it sophisticated to implement and manage. Moreover, this state-of-the-art type of data analytics requires not only historical internal data but also external data, because of the nature of the algorithms it is based on.</p>



<p>On one hand, prescriptive analytics techniques can be used to gain exceptionally rich insights into customer behaviour across industries. On the other, machine learning algorithms can be trained to analyse stock markets and automate human decision-making by presenting choices based on large amounts of internal and external data. Either way, prescriptive analytics is a costly investment: financial stakeholders should be certain that the analytics yields considerable advantages.</p>
<p>The post <a href="https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/">THE FOUR MAJOR TYPES OF DATA ANALYTICS</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/the-four-major-types-of-data-analytics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</title>
		<link>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/</link>
					<comments>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 11 Apr 2020 11:17:20 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[CHIPS]]></category>
		<category><![CDATA[NEUROMORPHIC]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=8128</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net The age of traditional computers is reaching its limit. Without innovations taking place, it is difficult to move past the technology threshold. Hence it is <a class="read-more-link" href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>The age of traditional computers is reaching its limit. Without innovation, it is difficult to move past the current technology threshold, so a major design transformation, with improved performance, is needed to change the way we view computers. Moore’s law (named after Gordon Moore, who formulated it in 1965) states that the number of transistors in a dense integrated circuit doubles about every two years while their price halves. But the law is now losing its validity. Hence hardware and software experts have come up with two solutions: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained at the lab stage until recently, when Intel announced its neuromorphic chip, Loihi. This may mark the third wave of Artificial Intelligence.</p>



<p>The first generation of AI was marked by defined rules and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second generation used deep learning networks to analyze content and data, and was largely concerned with sensing and perception. The third generation is about drawing parallels to the human thought process, like interpretation and autonomous adaptation. In short, it mimics spiking neurons, like the human nervous system, relying on densely connected transistors that mimic the activity of ion channels. This allows chips to integrate memory, computation and communication at higher speed, with greater complexity and better energy efficiency.</p>



<p>Loihi is Intel’s fifth-generation neuromorphic chip. This 14-nanometer chip has a die size of about 60 square millimeters and contains over 2 billion transistors, as well as three managing Lakemont cores for orchestration. It contains a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 cores. Each core has a built-in learning module, and the chip has a total of around 131,000 computational “neurons” that communicate with one another, allowing the chip to understand stimuli. On March 16, Intel and Cornell University showcased a new system demonstrating the ability of this chip to learn and recognize 10 hazardous materials from smell, even in the presence of data noise and occlusion. According to their joint paper in Nature Machine Intelligence, this can be used to detect the presence of explosives, narcotics, polymers and other harmful substances, as well as signs of smoke, carbon monoxide, etc. It can purportedly do this faster and more accurately than sniffer dogs, threatening to replace them. The researchers achieved this by training the chip on a circuit diagram of biological olfaction. They built the underlying dataset by passing ten hazardous chemicals (including acetone, ammonia, and methane) through a wind tunnel, while a set of 72 chemical sensors collected the signals.</p>



<p>This tech has manifold applications, like identifying harmful substances at airports and detecting diseases or toxic fumes in the air. The best part is that it constantly re-wires its internal network to allow different types of learning. A future version could transform traditional computers into machines that learn from experience and make cognitive decisions, adaptive like the human senses. And to put a cherry on top, it uses a fraction of the energy of current state-of-the-art systems. It is predicted to displace Graphics Processing Units (GPUs).</p>



<p>Although Loihi may soon evolve into a household word, it is not the only one. The neuromorphic approach is being investigated by IBM, HPE, MIT, Purdue, Stanford, and others. IBM is in the race with its TrueNorth chip, which has 4,096 cores, each with 256 neurons, and each neuron with 256 synapses to communicate with the others. Germany’s Jülich Research Centre’s Institute of Neuroscience and Medicine and the UK’s Advanced Processor Technologies Group at the University of Manchester are working on a low-grade supercomputer called SpiNNaker, which stands for Spiking Neural Network Architecture. It is designed to simulate so-called cortical microcircuits of the human brain cortex and help us understand complex diseases like Alzheimer’s.</p>



<p>Who knows what sort of computational trends we may see in the coming years? One thing is sure: the team at Analytics Insight will keep a close watch.</p>
<p>The post <a href="https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/">NEUROMORPHIC CHIPS: THE THIRD WAVE OF ARTIFICIAL INTELLIGENCE</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/neuromorphic-chips-the-third-wave-of-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Training Driverless Cars Before They Hit the Highway</title>
		<link>https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/</link>
					<comments>https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 27 Mar 2020 06:55:20 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[autonomous]]></category>
		<category><![CDATA[Driverless cars]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7761</guid>

					<description><![CDATA[<p>Source: technologynetworks.com A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a <a class="read-more-link" href="https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/">Training Driverless Cars Before They Hit the Highway</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: technologynetworks.com</p>



<p>A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.<br><br>Control systems, or “controllers,” for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous “edge cases,” such as nearly crashing or being forced off the road or into other lanes, are — fortunately — rare.<br><br>Some computer programs, called “simulation engines,” aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But learned control from simulation has never been shown to transfer to reality on a full-scale vehicle.<br><br>The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.<br><br>In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. When the car was positioned at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover it into a safe driving trajectory within a few seconds.<br><br>“It’s tough to collect data in these edge cases that humans don’t experience on the road,” says first author Alexander Amini, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world.”<br><br>The work was done in collaboration with the Toyota Research Institute. Joining Amini on the paper are Igor Gilitschenski, a postdoc in CSAIL; Jacob Phillips, Julia Moseyko, and Rohan Banerjee, all undergraduates in CSAIL and the Department of Electrical Engineering and Computer Science; Sertac Karaman, an associate professor of aeronautics and astronautics; and Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.</p>



<h3 class="wp-block-heading"><strong>Data-driven simulation</strong></h3>



<p>Historically, building simulation engines for training and testing autonomous vehicles has been largely a manual task. Companies and universities often employ teams of artists and engineers to sketch virtual environments, with accurate road markings, lanes, and even detailed leaves on trees. Some engines may also incorporate the physics of a car’s interaction with its environment, based on complex mathematical models.</p>



<p>But since there are so many different things to consider in complex real-world environments, it’s practically impossible to incorporate everything into the simulator. For that reason, there’s usually a mismatch between what controllers learn in simulation and how they operate in the real world.</p>



<p>Instead, the MIT researchers created what they call a “data-driven” simulation engine that synthesizes, from real data, new trajectories consistent with road appearance, as well as the distance and motion of all objects in the scene.</p>



<p>They first collect video data from a human driving down a few roads and feed that into the engine. For each frame, the engine projects every pixel into a type of 3D point cloud. Then, they place a virtual vehicle inside that world. When the vehicle makes a steering command, the engine synthesizes a new trajectory through the point cloud, based on the steering curve and the vehicle’s orientation and velocity.</p>



<p>Then, the engine uses that new trajectory to render a photorealistic scene. To do so, it uses a convolutional neural network — commonly used for image-processing tasks — to estimate a depth map, which contains information relating to the distance of objects from the controller’s viewpoint. It then combines the depth map with a technique that estimates the camera’s orientation within a 3D scene. That all helps pinpoint the vehicle’s location and relative distance from everything within the virtual simulator.</p>
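<p>The pixel-to-3D step described above can be sketched with the standard pinhole camera model (the focal lengths, principal point and depth values below are hypothetical, not VISTA's actual parameters): each pixel, together with its depth-map value, yields a 3D point in the camera's frame.</p>

```python
# Back-project image pixels into 3D camera coordinates using a depth map
# and a pinhole camera model. All numbers here are invented for illustration.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (X, Y, Z) seen at pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx   # horizontal offset scaled by depth
    y = (v - cy) * depth / fy   # vertical offset scaled by depth
    return (x, y, depth)

fx = fy = 500.0          # focal lengths in pixels (hypothetical)
cx, cy = 320.0, 240.0    # principal point, i.e. the image center

# a pixel 100 px right of center whose depth-map entry reads 5 m
point = backproject(420, 240, 5.0, fx, fy, cx, cy)
print(point)  # (1.0, 0.0, 5.0)
```

<p>Applying this to every pixel produces the point cloud from which new viewpoints can be re-rendered.</p>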



<p>Based on that information, it reorients the original pixels to recreate a 3D representation of the world from the vehicle’s new viewpoint. It also tracks the motion of the pixels to capture the movement of the cars and people, and other moving objects, in the scene. “This is equivalent to providing the vehicle with an infinite number of possible trajectories,” Rus says. “Because when we collect physical data, we get data from the specific trajectory the car will follow. But we can modify that trajectory to cover all possible ways and environments of driving. That’s really powerful.”</p>



<h3 class="wp-block-heading"><strong>Reinforcement learning from scratch</strong></h3>



<p>Traditionally, researchers have trained autonomous vehicles either by following human-defined rules of driving or by trying to imitate human drivers. But these researchers make their controller learn entirely from scratch under an “end-to-end” framework, meaning it takes as input only raw sensor data — such as visual observations of the road — and, from that data, predicts steering commands as outputs.</p>



<p>“We basically say, ‘Here’s an environment. You can do whatever you want. Just don’t crash into vehicles, and stay inside the lanes,’” Amini says.</p>



<p>This requires “reinforcement learning” (RL), a trial-and-error machine-learning technique that provides feedback signals whenever the car makes an error. In the researchers’ simulation engine, the controller begins by knowing nothing about how to drive, what a lane marker is, or even what other vehicles look like, so it starts by executing random steering angles. It gets a feedback signal only when it crashes. At that point, it gets teleported to a new simulated location and has to execute a better set of steering angles to avoid crashing again. Over 10 to 15 hours of training, it uses these sparse feedback signals to learn to travel greater and greater distances without crashing.</p>
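<p>The sparse-feedback loop described above can be illustrated with a toy one-dimensional lane-keeping task, where the agent’s only strong signal is crashing (leaving the lane) and being respawned. This is a minimal sketch under invented dynamics, not the researchers’ actual training setup:</p>

```python
import random

def train(episodes=2000, seed=0):
    """Toy sparse-reward RL: learn a steering offset that keeps a 1-D 'car'
    in the lane; the dominant feedback is the penalty for crashing."""
    rng = random.Random(seed)
    drift = 0.3                                        # unknown disturbance
    q = {a: 0.0 for a in (-0.5, -0.3, 0.0, 0.3, 0.5)}  # value of each steering action
    for _ in range(episodes):
        # epsilon-greedy choice of a constant steering action for the episode
        a = max(q, key=q.get) if rng.random() > 0.2 else rng.choice(list(q))
        pos, crashed, steps = 0.0, False, 0
        while not crashed and steps < 50:
            pos += drift + a + rng.gauss(0, 0.05)
            crashed = abs(pos) > 1.0                   # left the lane: respawn and retry
            steps += 1
        reward = steps - (25 if crashed else 0)        # sparse: big penalty on crash
        q[a] += 0.1 * (reward - q[a])
    return max(q, key=q.get)

best = train()   # the action that cancels the drift survives longest
```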



<p>After the controller successfully drove 10,000 kilometers in simulation, the authors applied it to their full-scale autonomous vehicle in the real world. The researchers say this is the first time a controller trained using end-to-end reinforcement learning in simulation has successfully been deployed onto a full-scale autonomous car. “That was surprising to us. Not only has the controller never been on a real car before, but it’s also never even seen the roads before and has no prior knowledge of how humans drive,” Amini says.</p>



<p>Forcing the controller to run through all types of driving scenarios enabled it to regain control from disorienting positions — such as being half off the road or in another lane — and steer back into the correct lane within several seconds. “And other state-of-the-art controllers all tragically failed at that, because they never saw any data like this in training,” Amini says.</p>



<p>Next, the researchers hope to simulate all types of road conditions from a single driving trajectory, such as night and day, and sunny and rainy weather. They also hope to simulate more complex interactions with other vehicles on the road. “What if other cars start moving and jump in front of the vehicle?” Rus says. “Those are complex, real-world interactions we want to start testing.”

</p>
<p>The post <a href="https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/">Training Driverless Cars Before They Hit the Highway</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/training-driverless-cars-before-they-hit-the-highway/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>STATE OF ARTIFICIAL INTELLIGENCE IN INDIA</title>
		<link>https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/</link>
					<comments>https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 21 Mar 2020 05:32:10 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[State]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7605</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net In June 2018, India’s national think-tank, the NITI Aayog, released a discussion paper on the transformative potential of Artificial Intelligence (AI) in India. This paper <a class="read-more-link" href="https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/">STATE OF ARTIFICIAL INTELLIGENCE IN INDIA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>In June 2018, India’s national think-tank, the NITI Aayog, released a discussion paper on the transformative potential of Artificial Intelligence (AI) in India. This paper said the country could add US$1 trillion to its economy by integrating AI. Since then, some of the biggest moves made by the government to act on this prediction have been the formation of a task force on Artificial Intelligence for India’s Economic Transformation by the Commerce and Industry Department of the Government of India in 2017, and a decision by the Union Cabinet in December 2018.</p>



<p>These bodies approved an INR3,660 crore national mission on cyber-physical system technologies that involves extensive use of AI, machine learning, deep learning, big data analytics, quantum computing, quantum communication, quantum encryption, data science and predictive analytics.</p>



<p>But what has been the progress in the nation since the government undertook these ambitious missions? According to an analysis by research agency Itihaasa, the progress has been appreciable. When the agency used the number of ‘citable documents’, or research publications in peer-reviewed journals, in the field of AI between 2013 and 2017 as a metric, India ranked third in terms of high-quality research publications in Artificial Intelligence.</p>



<p>However, when parsed by another metric (citations, or the number of times an article is referred to), India ranked only fifth, behind the UK, Canada, the US and China, which suggests that India must shift its focus to improving the quality of its research output in AI. The report also revealed that the Indian Institutes of Technology and the Indian Institutes of Information Technology were among the primary research centres for AI.</p>



<p>Currently, most of the traction in India comes in the form of government AI pilot projects in agriculture and healthcare, and the emergence of AI startups in cities like Bangalore and Hyderabad. Though these are indications of grassroots-level AI adoption, the pace of innovation isn’t yet comparable to that of the US or China.</p>



<p>Some challenges facing the progress of AI in India are the limited availability of skilled manpower and of good-quality, clean data, as there is no institutional mechanism to maintain high-quality data. A report published by PwC in 2018 revealed another imminent challenge: even with all the potential benefits of AI, which are envisaged to aid humans, people still have concerns regarding data privacy and are apprehensive about sharing data for a better experience. A near-unanimous majority of participants (93%) reported major concerns about data privacy, and many were hesitant even to share medical results that could provide personalised insights about their health, so data protection still remains a hazy domain hindering the growth of AI.</p>



<p>Another cultural challenge that India faces is the fact that the cost of failure is much higher here than in the West. While failing in an attempt at big innovation and grand goals might be seen as brave in Silicon Valley, failure often implies a loss of face in India and this has historically meant a lack of room for experimentation. All these challenges tell us that even with government funding and industry participation, India is just at the starting point of what seems to be a promising long road.</p>
<p>The post <a href="https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/">STATE OF ARTIFICIAL INTELLIGENCE IN INDIA</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/state-of-artificial-intelligence-in-india/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>HOW AI AND NEUROSCIENCE CAN HELP EACH OTHER PROGRESS?</title>
		<link>https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/</link>
					<comments>https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 06 Mar 2020 06:11:52 +0000</pubDate>
				<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[DeepMind]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=7269</guid>

					<description><![CDATA[<p>Source: analyticsinsight.net Artificial Intelligence has progressed immensely in the past few years. From being just a fiction context to penetrating into the regular lives of people, AI <a class="read-more-link" href="https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/">HOW AI AND NEUROSCIENCE CAN HELP EACH OTHER PROGRESS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: analyticsinsight.net</p>



<p>Artificial Intelligence has progressed immensely in the past few years. From being merely a science-fiction concept to penetrating the daily lives of people, AI has brought transformation in several ways. Such advancements are the output of various factors, including the application of new statistical approaches and enhanced computing power. However, a 2017 report by DeepMind, a Perspective in the journal Neuron, argues that people often discount the contribution of ideas from experimental and theoretical neuroscience.</p>



<p>The DeepMind report’s researchers believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. They said, “Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track.” Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models.</p>



<p>Moreover, in a recent blog post, DeepMind suggests that the human brain and AI learning methods are closely linked when it comes to learning through reward.</p>



<p>Computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.</p>
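<p>A standard formulation of such reward-prediction learning is temporal-difference (TD) learning, in which a value estimate is nudged by the prediction error at every step. The snippet below is a generic textbook sketch, not code from DeepMind:</p>

```python
def td_learn(rewards, episodes=200, alpha=0.1, gamma=1.0):
    """TD(0): update value estimates from reward-prediction errors,
    the signal dopamine neurons are thought to carry."""
    v = [0.0] * (len(rewards) + 1)      # value per state; terminal state = 0
    for _ in range(episodes):
        for s, r in enumerate(rewards):
            delta = r + gamma * v[s + 1] - v[s]   # reward-prediction error
            v[s] += alpha * delta
    return v[:-1]

# Cue -> delay -> reward of 1: every state's value converges toward
# the total expected future reward.
values = td_learn([0.0, 0.0, 1.0])
```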



<p>As noted by the post, a recent development in computer science – which yields significant improvements in performance on reinforcement learning problems – may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain, and opens up new avenues of research into the brain’s dopamine system, with potential implications for learning and motivation disorders.</p>



<p>DeepMind found that dopamine neurons in the brain were each tuned to different levels of pessimism or optimism. If they were a choir, they wouldn’t all be singing the same note, but harmonizing – each with a consistent vocal register, like bass and soprano singers. In artificial reinforcement learning systems, this diverse tuning creates a richer training signal that greatly speeds learning in neural networks, and researchers speculate that the brain might use it for the same reason.</p>
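<p>One way to realise this diverse tuning in code is the quantile-style asymmetric update used in distributional reinforcement learning: each predictor weights upward and downward errors differently, so optimists settle high in the reward distribution and pessimists settle low. This is a minimal illustrative sketch, not DeepMind’s model:</p>

```python
import random

def train_quantiles(taus, samples=20000, lr=0.01, seed=0):
    """Each estimator i tracks the tau-th quantile of the reward
    distribution via the pinball-loss gradient."""
    rng = random.Random(seed)
    est = [0.0] * len(taus)
    for _ in range(samples):
        r = rng.random()                  # reward drawn from uniform(0, 1)
        for i, tau in enumerate(taus):
            # optimists (high tau) step up harder than they step down
            est[i] += lr * (tau - (1.0 if r < est[i] else 0.0))
    return est

# a pessimist, a median tracker and an optimist: the "choir" of tunings
pessimist, median, optimist = train_quantiles([0.1, 0.5, 0.9])
```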



<p>The existence of distributional reinforcement learning in the brain has interesting implications both for AI and neuroscience. Firstly, this discovery validates distributional reinforcement learning – it gives researchers increased confidence that AI research is on the right track since this algorithm is already being used in the most intelligent entity they are aware of: the brain.</p>



<p>Therefore, a shared framework for intelligence spanning artificial intelligence and neuroscience will allow scientists to build smarter machines and enable them to understand humankind better. This collaborative drive to propel both fields could expand human cognitive capabilities while bridging the gap between humans and machines.</p>
<p>The post <a href="https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/">HOW AI AND NEUROSCIENCE CAN HELP EACH OTHER PROGRESS?</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-ai-and-neuroscience-can-help-each-other-progress/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Role Played by Artificial Intelligence and RPA in transformation of the business world</title>
		<link>https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/</link>
					<comments>https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 21 Feb 2020 05:11:34 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Reinforcement Learning]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6939</guid>

					<description><![CDATA[<p>Source: dqindia.com Over the past few decades, technology-based innovations have added a new meaning to business conversations, thus changing the way we live and work in the <a class="read-more-link" href="https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/">Role Played by Artificial Intelligence and RPA in transformation of the business world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: dqindia.com</p>



<p>Over the past few decades, technology-based innovations have added a new meaning to business conversations, changing the way we live and work in the digital age. With an increased focus on improving business agility and performance to sail through the challenging age of digital transformation, it is in the interest of modern businesses to leverage these cutting-edge technologies to maximize operational efficiencies and sustain profitability. Here is a look at the journey so far and at how new technologies will impact key verticals in the coming years.</p>



<h4 class="wp-block-heading"><strong>Emerging Tech Impact</strong></h4>



<p>The age of transformation will witness the emergence of varied possibilities and use cases as new-age technologies like Artificial Intelligence, IoT, automation and analytics intermingle. Examples include smart cities, self-driving cars, and applications in healthcare, manufacturing and retail (automation, preventive maintenance, predictive analytics, R&amp;D, real-time operations and supply chain management).</p>



<p>While Artificial Intelligence has use cases in practically every field, the best end-to-end application of Artificial Intelligence will happen in conjunction with automation. Be it intelligent automation, smart automation or cognitive automation, we are already seeing a blueprint of products and services that combine AI/ML, data models, orchestration and automation to address the end-to-end transformation journey of an enterprise.</p>



<p>The impact of automation in re-inventing work is profound for both business and IT users. Automation started its journey as a cost reduction tool using Robotic Desktop Automation, a human-triggered deterministic approach to process structured data with minimal context awareness and without any ability to learn. RPA expanded the data range to semi-structured data and started running through process orchestrations. Next, autonomics added context awareness to automation’s armory, but only limited to its computing environment.</p>



<p>The current state is a step beyond autonomics, where automation gains cognitive abilities with learning capability and a probabilistic processing approach. The inclusion of Artificial Intelligence has expanded the horizon of automation tools by enabling them to process unstructured data.</p>



<p>Even though most organisations today are only using natural language processing (NLP) as a cognitive tool, as the automation journey progresses towards true value amplification there will be a need to deploy future automation tools with advanced AI capabilities like computer vision and speech recognition. As enterprises start to consume the above levers, we will see more and more industry-specific use cases approach maturity. Automation bots can become experts in their respective industries: they can extract data, query databases, process the data, match it against given criteria and enter it into a system. The workflow, analytics and configuration are tailored to suit the needs of the industry and offer plug-and-play deployment with minimal customization.</p>
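<p>The extract, match and enter loop of such a bot can be sketched in a few lines. The invoice format, field names and approval rule below are invented for illustration:</p>

```python
import re

def process_invoices(raw_lines, approval_limit=10000.0):
    """Toy RPA-style bot: extract semi-structured fields, check them
    against a business rule, and route records to the right queue."""
    approved, escalated = [], []
    pattern = re.compile(r"INV-(\d+)\s+(\w+)\s+\$?([\d.]+)")
    for line in raw_lines:
        m = pattern.search(line)
        if not m:
            continue                      # skip rows the extractor can't parse
        record = {"id": m.group(1), "vendor": m.group(2),
                  "amount": float(m.group(3))}
        # "match against given criteria", then hand off downstream
        (approved if record["amount"] <= approval_limit else escalated).append(record)
    return approved, escalated

ok, flagged = process_invoices([
    "INV-001 Acme $950.00",
    "INV-002 Globex $25000",
    "garbled row",
])
```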



<h4 class="wp-block-heading"><strong>How are new digital technologies changing the business landscape?</strong></h4>



<ul class="wp-block-list"><li><strong>Banking, financial services and insurance (BFSI)</strong>&nbsp;This segment accounts for the largest revenue share and is expected to continue to grow. It is driven by the automation of business tasks ranging from data entry to regulatory compliance, resulting not only in enhanced speed and efficiency but also in comprehensive insights.</li><li><strong>Combination of light detection and ranging (LiDAR), edge computing and computer vision</strong> for automated factory floor monitoring. Autonomous mobile robots (AMRs) use factory maps to dynamically navigate the mapped premises. Use cases have emerged that deploy computer-vision ML techniques for object detection, classification, monitoring, path planning and a range of other factory operations.</li><li><strong>An intelligent automation solution comprising ML-driven text analytics and RPA components</strong>: It enables users to extract data from a huge corpus of legal documents for processing, and allows functional and department-specific classification and tagging of metadata. Reinforcement learning retrains the ML model, eventually improving its accuracy. The final step is a hand-off to RPA for effective and efficient contract management.</li><li><strong>Heat map generation using AI/ML</strong>: This use case applies widely across industries such as retail, transportation and public services (traffic monitoring). It reveals the movement and concentration of people in real time within a confined space, which helps guide the flow of people, improve space utilisation and layout design, and support security and theft monitoring and traffic congestion management. Adding an IoT component to this solution can further open arenas to build a robust smart city solution.</li></ul>
<p>Just as an expert golfer knows which club to use and the power required to send the ball to the hole, achieving success in end-to-end business automation requires applying the right technology and strategy, using a suite of solutions like RPA, AI, NLP and optical character recognition (OCR).</p>
<p>The post <a href="https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/">Role Played by Artificial Intelligence and RPA in transformation of the business world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/role-played-by-artificial-intelligence-and-rpa-in-transformation-of-the-business-world/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Big Data, Big Risks: Addressing the High-Tech &#038; Telecoms Threat Landscape</title>
		<link>https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/</link>
					<comments>https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 25 Jan 2020 09:32:31 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Big data]]></category>
		<category><![CDATA[drives intelligent]]></category>
		<category><![CDATA[High-Tech]]></category>
		<category><![CDATA[telecoms]]></category>
		<category><![CDATA[Threat Landscape]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=6369</guid>

					<description><![CDATA[<p>Source: infosecurity-magazine.com The benefits of industry 4.0 have been well reported and the world of work has been revolutionized. Almost every organization operating today actively utilizes, or <a class="read-more-link" href="https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/">Big Data, Big Risks: Addressing the High-Tech &#038; Telecoms Threat Landscape</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: infosecurity-magazine.com</p>



<p>The benefits of industry 4.0 have been well reported and the world of work has been revolutionized. Almost every organization operating today actively utilizes, or relies on, technologies that are becoming increasingly advanced. Both industry and society have adopted a data-driven culture in which information drives intelligent decision making and previously unheard-of efficiencies.</p>



<p>High-tech and telecoms organizations stand at the forefront of this transformation in a unique yet vulnerable position. In simple terms, technological development is outpacing the ability of many organizations to adequately address the subsequent risks it creates.</p>



<p>A gap has developed between the adoption of sophisticated technologies and protection against advanced threats. It is, therefore, important for organizations to assess the threat landscape to develop effective strategies and systems to minimize risk.</p>



<p>Below are six focus areas that represent significant threats to the high-tech and telecoms sectors.</p>



<p><strong>Privacy and Data Protection</strong></p>



<p>The high-tech and telecoms sectors are data-rich. Processing and storing extremely high volumes of personal information directly correlates to optimum service delivery and revenue generation.</p>



<p>As the regulatory landscape changes, the methods organizations use and how they protect customer data are being scrutinized. As various countries implement new and inherently different privacy regulations, regulatory compliance now depends on an ability to satisfy extensive and varied requirements. Failing to do so can have severe financial and reputational consequences.</p>



<p><strong>Device Threats</strong></p>



<p>In general, the risk appetite of high-tech and telecoms organizations and the people they employ is high. Internally, the collaborative and creative environments often cited when referring to high-tech and telecoms organizations pose a significant risk. For example, the latest mobile devices, apps and technologies celebrated by early-adopting employees are far more likely to have security flaws.</p>



<p>Vulnerable devices connected to the network by employees can introduce any number of malicious threats capable of causing limitless damage.</p>



<p><strong>Cloud Security</strong></p>



<p>Regardless of sector, cloud technology is increasingly relied upon for multiple business operations and is normally managed by external cloud service providers. Organizations have less control of these operations, and adequate threat response relies on effective contractual and service-level agreements, which dictate requirements and expectations.</p>



<p><strong>The Internet of Things (IoT)</strong></p>



<p>The wide-ranging adoption of IoT devices by both consumers and enterprise, as well as the exceptional volume of devices being produced, represents an increasingly high-impact threat. Many IoT-related threats are the result of poorly configured devices developed by manufacturers who, in some cases, may have had little regard for security. Unsecured devices connected to the networks of high-tech and telecoms organizations can make them vulnerable to attack.</p>



<p><strong>The Human Element</strong></p>



<p>When addressing information security, there is often a tendency to be drawn to technological threats and regulatory failures. As with many other sectors, high-tech and telecoms organizations must recognize human threats, which take many forms. Insider threats, social engineering and process failure all pose significant risks, with multiple well-publicized incidents in the last year alone.</p>



<p><strong>Supply Chain</strong></p>



<p>High-tech and telecoms organizations have global supply chains that are extensive and complex. These supply chains inherit the vulnerabilities of their suppliers and are often exploited by attackers to get to their intended target. This threat places a focus on the enforcement of processes and controls designed to minimize the risks associated with third-party suppliers. In the high-risk, high-tech and telecoms environment, organizations must understand who they are doing business with and what needs to be done to minimize the risks they may pose.</p>



<p>For organizations operating in the high-tech and telecoms sectors, an effective information security management strategy and system that evolves with the threat landscape has never been more important.</p>
<p>The post <a href="https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/">Big Data, Big Risks: Addressing the High-Tech &#038; Telecoms Threat Landscape</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/big-data-big-risks-addressing-the-high-tech-telecoms-threat-landscape/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>‘Seeing is believing’ for digital transformation success</title>
		<link>https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/</link>
					<comments>https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 04 Jan 2020 07:11:25 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[applications]]></category>
		<category><![CDATA[digital]]></category>
		<category><![CDATA[Metadata]]></category>
		<category><![CDATA[Transformation]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=5968</guid>

					<description><![CDATA[<p>Source: itproportal.com The 1994 fan-favourite film, The Santa Clause, taught us that seeing isn’t believing, believing is seeing. Well, it turns out little elf Judy was wrong <a class="read-more-link" href="https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/">‘Seeing is believing’ for digital transformation success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Source: itproportal.com</p>



<p>The 1994 fan-favourite film, The Santa Clause, taught us that seeing isn’t believing, believing is seeing. Well, it turns out little elf Judy was wrong on that when it comes to future-proofing a digital enterprise. Faith alone simply doesn’t cut it for NetOps and SecOps pros responsible for making sure their infrastructure is as robust and secure as it needs to be. In fact, harnessing the power of sight couldn’t be more important to digital infrastructure design strategy.</p>



<p>Holiday whimsy aside, this lesson has major significance for the 87 per cent of senior business leaders who believe digitisation is a top priority, a journey arguably as challenging as dropping gifts through chimneys across the globe. Among the goals of digital transformation (DX) are streamlining business processes, cutting costs, improving productivity and introducing new business models that redefine industries — or build new ones. These are ambitious goals, and they’re ultimately realised through new digital applications built on intricate microservices-based architectures.</p>



<p>For examples of successful DX applications look no further than ride sharing apps like Uber and Lyft, or home sharing apps like Airbnb. In the financial world, consider robo-advisors, peer-to-peer lending services, crowdfunding campaigns and cryptocurrencies — these are all DX innovations. Applications built around connected Internet of Things (IoT) devices can also fit into the DX category, as can industry-specific solutions, such as Industrial Control Systems (ICS), Supervisory Control and Data Acquisition (SCADA) and medical systems protocols.</p>



<p>Digital transformation (DX) spending is predicted to reach nearly $2 trillion in 2022, while organisations grapple with public, private, and hybrid cloud infrastructure and multi-tier applications that need to be managed and secured. Seeing – in the form of application-centric visibility – is key to understanding application interaction and usage patterns, in order to accelerate DX.</p>



<p>Like Christmas songs playing on a loop, it bears repeating that seeing to believe the current state of your environment, and its network and application layers, is critical for guiding DX efforts, and reducing risks, costs and complexity along the way. Here are five reasons why seeing is believing for DX success:</p>



<h2 class="wp-block-heading" id="metadata-matters">Metadata matters</h2>



<p>1. At the heart of DX are new digital applications—These applications power the transformation. They are typically built on a microservices architecture, deployed over hybrid infrastructure (physical, virtual, cloud), and consumed on a myriad of mobile devices.</p>



<p>2. Digital applications introduce increased complexity into the infrastructure—A key challenge with this new breed of digital applications is their complexity, which ultimately manifests as difficulty in securing the applications and in ensuring consistent performance and user experience. In fact, a recent Forrester survey of 1,000 CISOs worldwide found that IT complexity is their number one challenge.</p>



<p>3. You need a clear vantage point from which to see these applications—Only with a clear view, and the right kind of data, can you understand the applications’ interactions, performance and security characteristics. The only way to get this view is to look at the data in motion on the network. Deploying agents across a complex hybrid infrastructure is not feasible (for example, you can’t deploy an agent to a third-party microservice or to an industrial control system). And this visibility must stretch across on-premises, private, public and multi-cloud environments because, as the saying goes, ‘if you can’t see it, you can’t secure it.’</p>



<p>4. Isolating communication streams allows for deeper inspection—Once you understand what applications and microservices are running across your hybrid infrastructure, it&#8217;s fair to assume that in deployment, something at some point is bound to go awry. You need to figure out what&#8217;s happening and quickly course-correct, but when you&#8217;re scaling microservices, it&#8217;s hard to troubleshoot through application instrumentation alone. The ability to isolate specific application or microservice communication streams for deeper inspection lets SecOps teams understand access patterns and put effective micro-segmentation strategies in place. Application developers benefit too, gaining a clearer view of communication bottlenecks between applications and microservices and a faster way to troubleshoot them.</p>
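<p>As a rough sketch of the idea, the snippet below isolates one service&#8217;s communication streams from flow records and summarises observed access patterns. The flow schema (src_svc, dst_svc, dst_port) is invented for illustration and is not taken from any particular visibility product:</p>

```python
# Hypothetical sketch: deriving access patterns and isolating one service's
# streams from network flow records. Field names are illustrative only.
from collections import defaultdict

flows = [
    {"src_svc": "checkout",  "dst_svc": "payments",  "dst_port": 443},
    {"src_svc": "checkout",  "dst_svc": "inventory", "dst_port": 8080},
    {"src_svc": "reporting", "dst_svc": "payments",  "dst_port": 443},
]

def observed_access_patterns(flows):
    """Group flows by source service: who talks to whom, on which port."""
    patterns = defaultdict(set)
    for f in flows:
        patterns[f["src_svc"]].add((f["dst_svc"], f["dst_port"]))
    return dict(patterns)

def isolate_stream(flows, service):
    """Pull out one service's communication streams for deeper inspection."""
    return [f for f in flows if service in (f["src_svc"], f["dst_svc"])]

print(observed_access_patterns(flows))
print(isolate_stream(flows, "payments"))
```

<p>The derived (source, destination, port) pairs are exactly the input a micro-segmentation policy needs: a default-deny rule set could be generated from the observed pairs, with everything else blocked.</p>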



<p>5. Application metadata matters—The ability to extract metadata pertaining to those applications or services of interest can then fill gaps from a compliance, risk and performance perspective. Specifically, application metadata intelligence offers contextual data needed to quickly pinpoint potential threats and resolve network or application performance issues that can impact the user experience. </p>
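<p>To make that concrete, here is a minimal, hypothetical example of how extracted application metadata (per-request service name and latency, in this sketch) might be used to pinpoint a performance hotspot; the field names are invented for illustration:</p>

```python
# Hypothetical sketch: ranking services by median latency from extracted
# application metadata. Record fields are illustrative, not product-specific.
from statistics import median

metadata = [
    {"service": "search",   "latency_ms": 42},
    {"service": "search",   "latency_ms": 51},
    {"service": "payments", "latency_ms": 880},
    {"service": "payments", "latency_ms": 910},
]

def slowest_service(records):
    """Return the service with the highest median latency."""
    by_svc = {}
    for r in records:
        by_svc.setdefault(r["service"], []).append(r["latency_ms"])
    return max(by_svc, key=lambda s: median(by_svc[s]))

print(slowest_service(metadata))  # → payments
```

<p>The same aggregation could run over compliance- or risk-relevant fields instead of latency, which is why metadata, rather than full packet capture, is often enough context to triage an issue quickly.</p>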



<p>Accelerating secure deployment of digital applications is equivalent to catching the Christmas spirit for NetOps and SecOps pros tasked with complex infrastructure development, implementation, maintenance and security. While these modern, multi-tiered DX applications bring agility and innovative capabilities, their complexity makes monitoring and securing them difficult, if not impossible, without effective network visibility. That puts the success of these applications in particular, and of digital transformation projects in general, at risk. It&#8217;s Rudolph-grade vision into the organisation&#8217;s network and application layers that enables you to visualise your infrastructure, see what&#8217;s running on it, understand how applications are performing and interacting with each other, and verify that they&#8217;re secure. Once you can truly see the data in motion on your hybrid infrastructure, believe me, the path to digital innovation becomes a whole lot easier.</p>
<p>The post <a href="https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/">‘Seeing is believing’ for digital transformation success</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/seeing-is-believing-for-digital-transformation-success/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
