<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>application development Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/application-development/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/application-development/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 21 Aug 2018 05:58:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Securing microservice environments in a hostile world</title>
		<link>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/</link>
					<comments>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 21 Aug 2018 05:58:51 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[cloud-native]]></category>
		<category><![CDATA[Microservice]]></category>
		<category><![CDATA[microservices deployment]]></category>
		<category><![CDATA[security mechanisms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2768</guid>

					<description><![CDATA[<p>Source &#8211; networkworld.com At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The <a class="read-more-link" href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Securing microservice environments in a hostile world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; networkworld.com</p>
<p>At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The monolith keeps much of the state in memory and replicates between the instances, which makes them hard to split and scale. Scaling up can be expensive and scaling out requires replicating the state and the entire application, rather than the parts that need to be replicated.</p>
<p>Microservices, by contrast, separate the logic from the state. That separation enables the application to be broken apart into smaller, more manageable units, making them easier to scale. A microservices environment therefore consists of multiple services communicating with each other. All communication between services is initiated and carried out with network calls, and services are exposed via application programming interfaces (APIs). Each service has its own purpose and serves a unique business value.</p>
<p>Within a microservices deployment, one must assume that the perimeter is breachable. Traditional security mechanisms provide a layer of security against only a limited number of threats, and they cannot catch the internal bad actors responsible for most compromises. It is therefore recommended to deploy multiple security layers and employ zero trust as the framework. This way, the new perimeter and decision point sit at the microservice.</p>
<p>In this day and age, we must somehow enforce separation along with consistent policy between the services, while avoiding the perils of traditional tight coupling, and not jeopardize security. We need to find a solution so that the policy is managed centrally. However, at the same time, the policy should be enforced in a distributed fashion to ensure the workloads perform as designed and do not get compromised.</p>
<h2>The cost of agility</h2>
<p>The two main drivers for change are agility and scale. Within a microservices environment, each unit can scale independently, driving massively scalable application architectures. Yet, this type of scale was impossible when it was necessary to couple heavy data services along with the application.</p>
<p>The ability to scale and react rapidly increases business velocity, which allows organizations to reap the benefit in terms of cost and resilience and also improved ways of managing and building the application. However, the decentralized nature of agile deployments introduces challenges in terms of governance.</p>
<p>We should keep in mind that we now have a distributed organization with sub-teams responsible for individual microservices. In addition, patching and updates are carried out in real time. This eventually creates a gap that needs to be filled: visibility and the ability to scale policy in a distributed fashion.</p>
<h2>Complexity is the enemy of security</h2>
<p>The cloud-native approach introduces considerable complexity. Besides complexity, the end user is responsible for securing their own environment. With microservices, there are many more moving pieces and paths of communication, introducing complexity that must be managed. We need to manage this complexity while keeping the holistic view of the application and visibility as to how each component is operating.</p>
<p>The attempt to secure a complex deployment using existing tools does not work and leads to a complicated security solution with complex policies. Complexity is the enemy of security. As security solutions become more complex, they become unmanageable and less secure. There is a requirement for a new unified security framework that can adapt to the different microservice environments while still providing full visibility along with simplified policy management.</p>
<p>You really need to know who is talking at any given point, authenticate the source, and authorize the type of transaction the API communication is trying to do. You should be aware of the specific communication, which is going on within these channels and what should be authenticated and authorized to communicate.</p>
<p>This is impossible to do efficiently unless changes are introduced to the microservices architecture. Microservice deployments are susceptible to an array of security threats. API vulnerabilities, logic attacks, lateral movements, and the inadequacy of traditional security tools bring the systems to a halt like a house of cards tumbling down.</p>
<h2>Diverse traffic patterns</h2>
<p>Today’s traffic patterns are different from those of the past. Nowadays, there are a lot of APIs connecting inbound and outbound along with internal communication. The APIs are all public, open and customer facing. The administrators are permitting this type of communication in and out of the public and private data center.</p>
<p>There is typically considerable asymmetry between the front-end ingress API and the backend APIs. Considering the customer environment, there is an initial consumer API call, but that propagates numerous other backend API calls to carry out, for example, user and route lookups.</p>
<p>As the microservice environment develops more components, it is difficult to monitor and make sure everything is secure. The deployment of web application firewalls (WAFs) to secure the public APIs and the use of next-gen firewalls filtering at strategic network points cover only a part of the attack surface. We must still assume that the perimeter is breachable along with the high potential for internal bad actors.</p>
<h2>Traditional security mechanisms fall short</h2>
<p>The network perimeter was born in a different time and traditional security mechanisms based on Internet Protocol (IP) and 5-tuple no longer suffice. The traditional perimeter consists of the virtual or physical appliance such as a firewall, IPS/IDS or API gateway located at strategic network points. Actually, the traditional perimeter with its traditional security mechanisms only provides the first layer of security. Even though it is labeled as defense in depth, still it is far from reaching that status in a microservices environment.</p>
<p>For example, API gateways are meant to manage inbound calls. APIs are registered with the API gateway, which changes the workflow. Gateways don&#8217;t scale in a microservices environment where there could be hundreds of services, each one exposing a number of APIs and each service containing multiple instances.</p>
<p>The API gateway needs to scale not just with external traffic, but also with east-west internal traffic, which holds the significant share of total traffic. Web application firewalls (WAFs) do not change the workflow, but they share some of the challenges of API gateways. It is impossible to create and manage security when policies are not distributed to the workloads. There is a lot of work to be done for just a limited number of APIs, and that work grows exponentially with internal communications. This is clearly not practical for microservices deployments.</p>
<p>Next-gen firewalls are typically the central security resource. They are more suitable for north-south traffic flows than for internal east-west traffic. In a world where everything is HTTP, firewalls do not offer the best visibility and access control. A firewall typically enforces security on source and destination IPs and protocols, but in a microservices environment the common communication port is 80/443, and it is very common for all services to use the same port and protocol.</p>
<p>For this to work, the firewall would need to track the identity behind the source and destination IP addresses and port numbers. It would also have to keep up with an orchestration system that changes those identities all the time.</p>
<p>Enforcement should be done in a distributed fashion, right down at the workload level. If what you are monitoring and protecting is accessible to the application and application behavior, it matters less where the attacks come from. However, security frameworks based on traditional mechanisms can leave many avenues for bad actors to camouflage their attacks.</p>
<h2>The larger attack surface requires a new perimeter</h2>
<p>If security cannot follow the microservices, you need to bring security to the microservice and embed it there. An effective perimeter is a decision point at the microservice itself, not at strategic points within the network. The new perimeter is at the microservice layer and everywhere there is an API. This is the only way to protect, especially when it comes to logic attacks.</p>
<p>Logic attacks become more prevalent in a microservices environment. This type of threat is carried out by a sophisticated attacker, not a script kiddie using a readily accessible tool. Such attackers take their time to penetrate the perimeter, silently explore the internal environment, and go unnoticed while accessing valuable assets.</p>
<p>Cloud-native applications expose their logic in multiple layers, not just one. Each microservice exposes some application logic through an API, and these APIs, if not effectively secured, can be manipulated by a bad actor. A practical example would be an API that is meant to return a single entry from a database. If the bad actor can modify the query slightly, they can pull multiple entries they have no authority to access. This can be attempted against every single API, which presents a much larger attack surface than before and opens the door to an advanced persistent threat (APT).</p>
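<p>To make the logic attack concrete, here is a minimal sketch of the per-entry authorization that closes this hole; the names (DATABASE, get_record, the caller identifiers) are purely hypothetical. The point is that the check travels with the API itself, not with the network perimeter.</p>

```python
# Hypothetical record-lookup handler: even an authenticated caller who
# tampers with the record ID cannot read entries they do not own,
# because authorization is enforced per entry at the API.
DATABASE = {
    "r1": {"owner": "alice", "data": "alice's order"},
    "r2": {"owner": "bob", "data": "bob's order"},
}

def get_record(record_id: str, caller: str) -> dict:
    record = DATABASE.get(record_id)
    if record is None:
        raise KeyError("no such record")
    # Without this per-entry check, any caller could enumerate IDs
    # and pull other users' data -- the logic attack described above.
    if record["owner"] != caller:
        raise PermissionError("caller is not authorized for this record")
    return record
```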
<p>As stated earlier, the distributed architectures give a much larger surface area. Each one of these small components is exposed to threats. The surface area is the sum of all APIs and the interactions both internal and external of the application. If you examine an API that is exposed to the outside, you would see hundreds of API calls. This offers numerous ways to exploit the vulnerabilities of an externally facing API. Within a kill-chain, the API is not just used to gain access but also as a way to perform lateral movements.</p>
<p>We also have challenges with traffic encryption. A large part of security in the new age of east to west traffic is the ability to have everything encrypted. It is the role of the application to perform the encryption.</p>
<p>In a microservices environment, key management is a difficult task. Besides, IPsec has a very coarse granularity. If you are looking for encryption with finer granularity in this environment then you need a new type of solution.</p>
<h2>Solution components: identity</h2>
<p>Workloads can be encapsulated in a number of ways such as a virtual machine (VM), bare metal or container. As a result, what’s required is a mechanism to provide a provable and secure identity to the application, not just to the server or container but also to the actual workload that is running. Ideally, identity can be a list of attributes. Think of them as the key-value pairs that describe an object to the level of detail that you want. Indeed, the more detail you have, the better.</p>
<p>The process of providing identity to a service is called identity bootstrapping. You have to trust something in order to provide application identity i.e. there needs to be an external source of truth. Companies such as AWS, VMware, and Octarine provide this by integrating with the orchestration system.</p>
<p>The orchestration system could be anything from vCenter to AWS ECS to Kubernetes. The system monitors events of new workloads being spawned. After validating that a newly spawned workload is legitimate, it is provided with the credentials it needs to prove its identity. This way, the secrets are never kept in the code, the container image, or Kubernetes.</p>
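<p>A toy illustration of this bootstrapping flow, not any vendor&#8217;s actual protocol; the class and method names are made up. The orchestrator acts as the source of truth: only workloads it spawned itself can obtain a credential, so no secret ever needs to ship in the code or the image.</p>

```python
# Sketch of identity bootstrapping: the orchestrator validates a newly
# spawned workload against its own records before issuing a credential.
import hashlib
import hmac
import secrets

class Orchestrator:
    def __init__(self):
        self._signing_key = secrets.token_bytes(32)  # held only by the orchestrator
        self._spawned = set()                        # workloads it launched itself

    def spawn(self, workload_id: str) -> None:
        self._spawned.add(workload_id)

    def issue_credential(self, workload_id: str) -> str:
        # Validate against the source of truth before handing out identity.
        if workload_id not in self._spawned:
            raise PermissionError("unknown workload")
        return hmac.new(self._signing_key, workload_id.encode(),
                        hashlib.sha256).hexdigest()

    def verify(self, workload_id: str, credential: str) -> bool:
        expected = hmac.new(self._signing_key, workload_id.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential)
```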
<h2>Solution components: visibility</h2>
<p>Once the identity is taken care of, we must create security based on the secured identity. How do you enforce policy and how does it get represented when you communicate it to something else?</p>
<p>Firstly, you need to rely on the application identity and monitoring traffic at layer 7. This is because on every API call the caller identifies oneself. You can add the identity on the client side and server side, validate the identity and log the API call to a central system.</p>
<p>The central system aggregates all API calls in all deployments for the customer&#8217;s environment and provides extensive visibility. This visibility extends over time to include the history of any changes. Such visibility is useful in agile environments.</p>
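<p>The per-call identity check and central logging described above might be sketched as follows; the shared key, service names, and in-memory log are all illustrative stand-ins, not a real protocol.</p>

```python
# Sketch: every API call carries the caller's signed identity, the
# server validates it, and the call is recorded for central visibility.
import hashlib
import hmac

SHARED_KEY = b"demo-key"   # in practice, per-service keys from identity bootstrapping
CENTRAL_LOG = []           # stands in for the central aggregation system

def sign_identity(service: str) -> str:
    return hmac.new(SHARED_KEY, service.encode(), hashlib.sha256).hexdigest()

def handle_call(caller: str, signature: str, api: str) -> str:
    # Validate the caller's identity on every call (layer-7 check).
    if not hmac.compare_digest(sign_identity(caller), signature):
        raise PermissionError("identity validation failed")
    CENTRAL_LOG.append({"caller": caller, "api": api})  # central visibility
    return f"{api}: ok"
```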
<h2>Solution components: anomaly detection</h2>
<p>You must try as much as possible to enforce the policy at the endpoint. However, sometimes in order to detect sophisticated attacks, you have to correlate multiple sequences, such as time of the day, payloads, and geographic access patterns.</p>
<p>Anomaly detection is needed that comprises a component responsible for looking at all the signals at a given time. Further, it should recognize small deviations from the baseline that could not be detected if you are looking at a single endpoint.</p>
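<p>As a deliberately simplified stand-in for such a component, a single-signal baseline check might look like the following; a real system would correlate many signals at once, as noted above.</p>

```python
# Minimal baseline-deviation check: flag an observation whose z-score
# against the recorded baseline exceeds a threshold.
from statistics import mean, stdev

def is_anomalous(baseline: list, observation: float,
                 threshold: float = 3.0) -> bool:
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold
```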
<h2>Solution components: policy</h2>
<p>In the past, policy presented two prevailing themes &#8211; ACL distributed policy solutions and segmentation based on VLANs. You really need to start to think about policy being centrally administered but highly scalable and distributed by way of enforcement.</p>
<p>The policy should be based on workload identity, not network identity. With cloud-native, there is no alignment between the identity of a workload and its network identity, so you cannot enable security enforcement through pre-defined network policies based on traditional mechanisms.</p>
<p>The policy should also be driven by visibility, enabling feedback about policy and information about violations. This would allow the administrators to update the policy as required.</p>
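<p>A sketch of what identity-based policy evaluation could look like; the attribute names and the default-deny behavior are illustrative assumptions, not a particular product&#8217;s model. Note that the rules never mention an IP address, so they survive rescheduling and address churn.</p>

```python
# Policy rules keyed on workload identity attributes, not network identity.
def allowed(policy: list, source: dict, destination: dict) -> bool:
    for rule in policy:
        if (source.get("app") == rule["from_app"]
                and destination.get("app") == rule["to_app"]):
            return rule["action"] == "allow"
    return False  # default-deny, in the zero-trust spirit

POLICY = [{"from_app": "frontend", "to_app": "orders", "action": "allow"}]
```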
<p>The promise of cloud-native applications and agile environments has many benefits for business. Left to the defaults, cloud-native deployments lack proper security tools and methodologies. With a guarded approach, however, you can reach a secure, agile cloud-native environment.</p>
<p>The post <a href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Securing microservice environments in a hostile world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
			</item>
		<item>
		<title>OverOps Brings Machine Learning to DevOps</title>
		<link>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/</link>
					<comments>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Fri, 17 Aug 2018 05:58:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[application programming]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[Machine learning]]></category>
		<category><![CDATA[ML algorithms]]></category>
		<category><![CDATA[OverOps]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2746</guid>

					<description><![CDATA[<p>Source &#8211; devops.com OverOps has launched a namesake platform employing machine learning algorithms to capture data from an IT environment that identify potential issues before a DevOps team <a class="read-more-link" href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">OverOps Brings Machine Learning to DevOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; devops.com</p>
<p>OverOps has launched a namesake platform employing machine learning algorithms to capture data from an IT environment and identify potential issues before a DevOps team decides to promote an application into production.</p>
<p>Company CTO Tal Weiss said the OverOps Platform is unique in that, rather than relying on log data, it combines static and dynamic analysis of code as it executes to detect issues. That data can then be accessed via dashboards or shared with other tools via an open application programming interface (API). The dashboards included with the OverOps Platform are based on the open source Grafana project.</p>
<p>That approach makes it possible to advance usage of artificial intelligence (AI) within IT operations without necessarily requiring that every tool in a DevOps pipeline be upgraded to include support for machine learning algorithms, Weiss said.</p>
<p>OverOps also includes in the platform access to an AWS Lambda-based framework or separate on-premises serverless computing framework to enable DevOps teams to also create their own custom functions and workflows.</p>
<p>Weiss said OverOps is designed to capture machine data about every error and exception at the moment it occurs, including details such as the value of all variables across the execution stack, the frequency and failure rate of each error, the classification of new and reintroduced errors, and the associated release numbers for each event. Log data is, by comparison, relatively shallow, making precise root cause analysis challenging when troubleshooting an issue, he said, noting the OverOps Platform offers visibility into the uncaught and swallowed exceptions that would otherwise be unavailable in log files.</p>
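<p>As a toy illustration of the idea (not OverOps&#8217; implementation), Python itself allows this kind of state capture: walk the traceback of an exception and snapshot every frame&#8217;s local variables, which is exactly the context a plain log line loses. All function names below are hypothetical.</p>

```python
# Capture the value of every local variable in every frame of an
# exception's traceback -- the "machine data" a log message lacks.
def capture_exception_state(exc: BaseException) -> list:
    frames = []
    tb = exc.__traceback__
    while tb is not None:
        frame = tb.tb_frame
        frames.append({
            "function": frame.f_code.co_name,
            "locals": dict(frame.f_locals),  # variable values in this frame
        })
        tb = tb.tb_next
    return frames

def buggy(order_total):
    discount = 0
    return order_total / discount  # raises ZeroDivisionError

try:
    buggy(250)
except ZeroDivisionError as e:
    snapshot = capture_exception_state(e)
```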
<p>DevOps teams spend an inordinate amount of time analyzing log files in the hope of discovering an anomaly. But as IT environments continue to scale out, analyzing millions, possibly even billions, of log entries becomes impractical. OverOps is making the case for employing machine learning algorithms to analyze events before the log file is even created, which eliminates the need to store log files before they can be analyzed.</p>
<p>There’s naturally a lot of trepidation when it comes to using machine learning algorithms and other forms of AI to manage IT. But as the complexity of IT environments continues to increase, it’s clear DevOps teams will need to rely more on AI to manage IT at levels of scale that were once considered unimaginable. For example, while microservices based on containers may accelerate the rate at which applications can be developed and updated, they also can introduce a phenomenal amount of operational complexity. Most DevOps professionals would rather automate as much of the manual labor associated with operations as possible, especially if that leads to more certainty about the quality of the software being promoted into a production environment.</p>
<p>Of course, while making use of machine learning algorithms to analyze code represents a step forward in terms of automation, it’s still a very long way from eliminating the need for DevOps teams altogether.</p>
<p>The post <a href="https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/">OverOps Brings Machine Learning to DevOps</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/overops-brings-machine-learning-to-devops/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Building a data science pipeline: Benefits, cautions</title>
		<link>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/</link>
					<comments>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 30 Jun 2018 05:50:42 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Digital Business]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2543</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline &#8212; a sequence of actions <a class="read-more-link" href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Building a data science pipeline: Benefits, cautions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; techtarget.com</p>
<p>Enterprises are adopting data science pipelines for artificial intelligence, machine learning and plain old statistics. A data science pipeline &#8212; a sequence of actions for processing data &#8212; will help companies be more competitive in a digital, fast-moving economy.</p>
<p>Before CIOs take this approach, however, it&#8217;s important to consider some of the key differences between data science development workflows and traditional application development workflows.</p>
<p>Data science development pipelines used for building predictive and data science models are inherently experimental and don&#8217;t always pan out in the same way as other software development processes, such as Agile and DevOps. Because data science models break and lose accuracy in different ways than traditional IT apps do, a data science pipeline needs to be scrutinized to assure the model reflects what the business is hoping to achieve.</p>
<p>At the recent Rev Data Science Leaders Summit in San Francisco, leading experts explored some of these important distinctions, and elaborated on ways that IT leaders can responsibly implement a data science pipeline. Most significantly, data science development pipelines need accountability, transparency and auditability. In addition, CIOs need to implement mechanisms for addressing the degradation of a model over time, or &#8220;model drift.&#8221; Having the right teams in place in the data science pipeline is also critical: Data science generalists work best in the early stages, while specialists add value to more mature data science processes.</p>
<section class="section main-article-chapter" data-menu-title="Data science at Moody's">
<h3 class="section-title">Data science at Moody&#8217;s</h3>
<div class="imagecaption alignRight">CIOs might want to take note from Moody&#8217;s, the financial analytics giant, which was an early pioneer in using predictive modeling to assess the risks of bonds and investment portfolios. Jacob Grotta, managing director at Moody&#8217;s Analytics, said the company has streamlined the data science pipeline it uses to create models in order to be able to quickly adapt to changing business and economic conditions.</div>
<p>&#8220;As soon as a new model is built, it is at its peak performance, and over time, they get worse,&#8221; Grotta said. Declining model performance can have significant impacts. For example, in the finance industry, a model that doesn&#8217;t accurately predict mortgage default rates puts a bank in jeopardy.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Watch out for assumptions">
<h3 class="section-title">Watch out for assumptions</h3>
<p>Grotta said it is important to keep in mind that data science models are created by and represent the assumptions of the data scientists behind them. Before the 2008 financial crisis, a firm approached Grotta with a new model for predicting the value of mortgage-backed derivatives, he said. When he asked what would happen if the prices of houses went down, the firm responded that the model predicted the market would be fine. But it didn&#8217;t have any data to support this. Mistakes like these cost the economy almost $14 trillion by some estimates.</p>
<p>The expectation among companies often is that someone understands what the model does and its inherent risks. But these unverified assumptions can create blind spots for even the most accurate models. Grotta said it is a good practice to create lines of defense against these sorts of blind spots.</p>
<p>The first line of defense is to encourage the data modelers to be honest about what they do and don&#8217;t know and to be clear on the questions they are being asked to solve. &#8220;It is not an easy thing for people to do,&#8221; Grotta said.</p>
<p>A second line of defense is verification and validation. Model verification involves checking to see that someone implemented the model correctly, and whether mistakes were made while coding it. Model validation, in contrast, is an independent challenge process to help a person developing a model to identify what assumptions went into the data. Ultimately, Grotta said, the only way to know if the modeler&#8217;s assumptions are accurate or not is to wait for the future.</p>
<p>A third line of defense is an internal audit or governance process. This involves making the results of these models explainable to front-line business managers. Grotta said he was working with a bank recently that protested its bank managers would not use a model if they didn&#8217;t understand what was driving its results. But he said the managers were right to do this. Having a governance process and ensuring information flows up and down the organization is extremely important, Grotta said.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Baking in accountability">
<h3 class="section-title">Baking in accountability</h3>
<p>Models degrade or &#8220;drift&#8221; over time, which is part of the reason organizations need to streamline their model development processes. It can take years to craft a new model. &#8220;By that time, you might have to go back and rebuild it,&#8221; Grotta said. Critical models must be revalidated every year.</p>
<p>To address this challenge, CIOs should think about creating a data science pipeline with an auditable, repeatable and transparent process. This promises to allow organizations to bring the same kind of iterative agility to model development that Agile and DevOps have brought to software development.</p>
<p>Transparent means that upstream and downstream people understand the model drivers. It is repeatable in that someone can repeat the process around creating it. It is auditable in the sense that there is a program in place to think about how to manage the process, take in new information, and get the model through the monitoring process. There are varying levels of this kind of agility today, but Grotta believes it is important for organizations to make it easy to update data science models in order to stay competitive.</p>
</section>
<section class="section main-article-chapter" data-menu-title="How to keep up with model drift">
<h3 class="section-title">How to keep up with model drift</h3>
<p>Nick Elprin, CEO and co-founder of Domino Data Lab, a data science platform vendor, agreed that model drift is a problem that must be addressed head on when building a data science development pipeline. In some cases, the drift might be due to changes in the environment, like changing customer preferences or behavior. In other cases, drift could be caused by more adversarial factors. For example, criminals might adopt new strategies for defeating a new fraud detection model.</p>
<p>In order to keep up with this drift, CIOs need a process for monitoring the effectiveness of their data models over time and for establishing thresholds for replacing those models when performance degrades.</p>
<p>With traditional software monitoring, IT service management tracks metrics related to CPU, network and memory usage. With data science, CIOs need to capture metrics related to the accuracy of model results. &#8220;Software for [data science] production models needs to look at the output they are getting from those models, and if drift has occurred, that should raise an alarm to retrain it,&#8221; Elprin said.</p>
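<p>A minimal sketch of the kind of drift check Elprin describes: compare live accuracy against a recorded baseline and raise an alarm past a degradation threshold. The metric, the 0.05 threshold, and the alarm behavior are illustrative assumptions, not a specific product's API:</p>

```python
# Illustrative sketch of production model monitoring: compare live accuracy
# against a baseline and raise an alarm when drift exceeds a threshold.
# The 0.05 threshold and the retrain hook are assumptions, not a standard.

def check_drift(baseline_accuracy, live_accuracy, threshold=0.05):
    """Return True (alarm) when accuracy has degraded past the threshold."""
    return (baseline_accuracy - live_accuracy) > threshold

def monitor(model_name, baseline_accuracy, live_accuracy):
    if check_drift(baseline_accuracy, live_accuracy):
        # In a real pipeline this would page an operator or queue retraining.
        return f"ALARM: {model_name} drifted, schedule retraining"
    return f"OK: {model_name} within tolerance"

print(monitor("fraud-model", 0.92, 0.84))  # degraded by 0.08 -> alarm
print(monitor("fraud-model", 0.92, 0.90))  # within tolerance
```

<p>In practice the same loop would run on every batch of scored production data, with the baseline refreshed at each validated retrain.</p>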
</section>
<section class="section main-article-chapter" data-menu-title="Fashion-forward data science">
<h3 class="section-title">Fashion-forward data science</h3>
<p>At Stitch Fix, a personal shopping service, the company&#8217;s data science pipeline allows it to sell clothes online at full price. Using data science in various ways lets the company find new ways to add value against deep-discount giants like Amazon, said Eric Colson, chief algorithms officer at Stitch Fix.</p>
<div class="imagecaption alignRight">
<section class="section main-article-chapter" data-menu-title="Fashion-forward data science"><p>For example, the data science team has used natural language processing to improve its recommendation engines and buy inventory. Stitch Fix also uses genetic algorithms &#8212; algorithms that are designed to mimic evolution and iteratively select the best results following a set of randomized changes. These are used to streamline the process for designing clothes, coming up with countless iterations: Fashion designers then vet the designs.</p>
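<p>A toy version of the genetic-algorithm loop described above: random mutation plus selection of the fittest "designs" each generation. The bit-string encoding and fitness function are invented for illustration and have nothing to do with Stitch Fix's actual models:</p>

```python
import random

random.seed(0)

# Toy genetic algorithm: each generation, keep the fittest half of the
# population and refill it with randomly mutated copies of the survivors.

def fitness(design):
    return sum(design)  # toy objective: more 1-bits = "better" design

def mutate(design, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in design]

def evolve(pop_size=20, length=16, generations=30):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the best half unchanged (elitism), then refill
        # with mutated copies of randomly chosen survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # selection pressure pushes this toward 16
```

<p>The "designers vet the results" step in the article corresponds to a human replacing or augmenting the fitness function.</p>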
<p>This kind of digital innovation, however, was only possible, he said, because the company created an efficient data science pipeline. He added that it was also critical that the data science team is considered a top-level department at Stitch Fix and reports directly to the CEO.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Specialists or generalists?">
<h3 class="section-title">Specialists or generalists?</h3>
<p>One important consideration for CIOs in constructing the data science development pipeline is whether to recruit data science specialists or generalists. Specialists are good at optimizing one step in a complex data science pipeline. Generalists can execute all the different tasks in a data science pipeline. In the early stages of a data science initiative, generalists can adapt to changes in the workflow more easily, Colson said.</p>
<p>Some of these different tasks include feature engineering, model training, extract, transform and load (ETL) of data, API integration, and application development. It is tempting to staff each of these tasks with specialists to improve individual performance. &#8220;This may be true of assembly lines, but with data science, you don&#8217;t know what you are building, and you need to iterate,&#8221; Colson said. The process of iteration requires fluidity, and if the different roles are staffed with different people, there will be longer wait times when a change is made.</p>
<p>In the beginning, at least, companies will benefit more from generalists. But once data science processes are established after a few years, specialists may be more efficient.</p>
</section>
<section class="section main-article-chapter" data-menu-title="Align data science with business">
<h3 class="section-title">Align data science with business</h3>
<p>Today a lot of data science models are built in silos that are disconnected from normal business operations, Domino&#8217;s Elprin said. To make data science effective, it must be integrated into existing business processes. This comes from aligning data science projects with business initiatives. This might involve things like reducing the cost of fraudulent claims or improving customer engagement.</p>
</section>
<p>In less effective organizations, management tends to start with the data the company has collected and wonders what a data science team can do with it. In more effective organizations, data science is driven by business objectives.</p>
<p>&#8220;Getting to digital transformation requires top down buy-in to say this is important,&#8221; Elprin said. &#8220;The most successful organizations find ways to get quick wins to get political capital. Instead of twelve-month projects, quick wins will demonstrate value, and get more concrete engagement.&#8221;</p>
</div>
</section>
</div>
</section>
</section>
<p>The post <a href="https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/">Building a data science pipeline: Benefits, cautions</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/building-a-data-science-pipeline-benefits-cautions/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Microservices: Streamlining Development by Breaking up Monolithic Applications</title>
		<link>https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/</link>
					<comments>https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/#respond</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 07 Mar 2018 05:52:48 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[continuous deployment]]></category>
		<category><![CDATA[Monolithic Applications]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2048</guid>

					<description><![CDATA[<p>Source &#8211; formtek.com Microservices is an architectural style that builds applications from a collection of loosely coupled services. The protocols used are lightweight and the services are <a class="read-more-link" href="https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/">Microservices: Streamlining Development by Breaking up Monolithic Applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; formtek.com</p>
<p>Microservices is an architectural style that builds applications from a collection of loosely coupled services. The protocols used are lightweight and the services are very fine-grained. Each service stands on its own, and as such makes development, testing, and refactoring of applications easier. Because each service is independent, microservices enable application development to be more easily split up among developers and teams to allow parallel work.</p>
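<p>As a concrete illustration of the style described above, here is a minimal sketch of one fine-grained service owning a single business function behind a lightweight HTTP/JSON interface, using only Python's standard library. The route, port, and catalog are invented for the example:</p>

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of a fine-grained service: one business function (price
# lookup) behind a lightweight HTTP/JSON interface. The catalog and route
# are illustrative, not from the article.

CATALOG = {"sku-1": 19.99, "sku-2": 4.50}

def lookup_price(sku):
    """The one business function this service owns."""
    return CATALOG.get(sku)

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.lstrip("/")
        price = lookup_price(sku)
        status = 200 if price is not None else 404
        body = json.dumps({"sku": sku, "price": price}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Each service deploys and runs on its own: one process, one port, e.g.
#   HTTPServer(("localhost", 8080), PriceHandler).serve_forever()
```

<p>Because the service owns only this one function, another team can rewrite or redeploy it without touching the rest of the application, which is the parallel-work benefit the article describes.</p>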
<p>A recent report on microservices, based on the results of a Red Hat survey taken at the end of 2017, found the following:</p>
<ul>
<li>Developers are using microservices for both new application design and when interacting with legacy systems</li>
<li>Microservice benefits include: Continuous Integration (CI), Continuous Deployment (CD), agility, scalability, higher developer productivity, and easier debugging and maintenance</li>
<li>Microservice challenges include management, diagnostics and monitoring</li>
<li>Current microservice developers said that they prefer a best-of-breed approach that is multi-runtime, multi-technology, and multi-framework.</li>
</ul>
<p>Matt Miller, partner at Sequoia, told Forbes that “if you think about it from a technology point of view, as we have gone from things like the mainframe to client-server to cloud infrastructure to virtualization, each time we have successfully inserted a new layer of abstraction, our quality has gone up meaningfully, the time it takes to develop applications has come down, and so have our costs… That is why we are excited about microservices and think it is so disruptive. For companies looking to adopt it, it is disruptive because it is a huge paradigm step forward in the efficiency that can be gained in building your applications. You can operate the technology aspects of your business, which is more of your business, at a much faster rate than you could before.”</p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/">Microservices: Streamlining Development by Breaking up Monolithic Applications</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microservices-streamlining-development-by-breaking-up-monolithic-applications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microservices and cloud-native development versus traditional development</title>
		<link>https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/</link>
					<comments>https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 27 Sep 2017 07:11:13 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[cloud-native]]></category>
		<category><![CDATA[software development]]></category>
		<category><![CDATA[traditional development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1288</guid>

					<description><![CDATA[<p>Source &#8211; ibm.com We’ve had a very good run for the last 20 years or so with distributed systems development in the enterprise, but time has started to <a class="read-more-link" href="https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/">Microservices and cloud-native development versus traditional development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211;<strong> ibm.com</strong></p>
<p>We’ve had a very good run for the last 20 years or so with distributed systems development in the enterprise, but time has started to show some of the downsides of traditional development styles. First of all, distributed systems have grown enormous – it’s very common to see corporate websites for retailing or customer service with hundreds or thousands of discrete functions. Likewise, I’ve seen many Java EAR files at those customers whose sizes are measured in gigabytes. So when you have a site that large, and one that may have been originally built 15 or more years ago, there are going to be parts of it that need to be updated to today’s business realities. The second business challenge is that the pace of business change is much more rapid now than it was in the 90s and 2000s. Now that the cellphone replacement cycle is down to a year or less, and customers are constantly updating their apps on those phones, the idea that a corporate website can remain static for months at a time is simply not in touch with the times.</p>
<p>These two trends combined together create a challenge for traditional top-down enterprise development styles. It requires an approach that is both more customer-centric and able to react more quickly, and it also requires an architecture that is able to adapt to and facilitate these rapid changes.</p>
<p>Also, in the past, when development cycles were longer, waterfall-based methods were appropriate, or at least not as much of a hindrance as they are now. If you have the luxury of time, then the downsides of top-down approaches are less apparent. As a side effect, that led to the predominance of outsourcing since if you were going to define everything up front anyway, then you might as well perform the programming work where the labor was cheapest. All of those trends are now being called into question.</p>
<p><strong>What do you see that suggests that microservices and cloud-native development can change this situation or help address these barriers?</strong></p>
<p>Let’s start with a definition of what microservices are. The microservices approach is a way of decomposing an application into modules with well-defined interfaces that each perform one and only one business function. These modules (or microservices) are independently deployed and operated by a small team who owns the entire lifecycle of the service. The reason this is important is that it goes back to a very old principle in computer science discovered by Fred Brooks – that adding people to a late project only makes it later by increasing the number of communication paths within the team.</p>
<p>Instead, microservices accelerate delivery by minimizing communication and coordination between people while reducing the scope and risk of change. Why is this important? Because in order to meet the rapidly changing pace of development we have to be able to limit the scope of what we are doing and increase the speed at which applications can be developed. Microservices help with that. Another critically important factor that you can’t ignore is the importance of the cloud for deployment. The cloud is rapidly becoming the de facto standard for deployment of new and modified applications. That has led to the rise of “cloud-native” application development approaches that take advantage of all of the facilities provided by the cloud, like elastic scaling, immutable deployments and disposable instances. When you write an application following the microservices architecture, it is automatically cloud-native. That is another factor that is accelerating the adoption of the architectural approach.</p>
<p>Now, as great as microservices are, there are some downsides, or at least some places where they’re not always appropriate. We’ve found that while the microservice approach is perfect for what we at IBM call Systems of Interaction, it may not be the best approach for Systems of Record, especially those that already exist, change slowly, and offer no maintenance gains from refactoring or rewriting those systems into microservices.</p>
<p><strong>What does IBM provide to help enterprises transform their systems to a microservices and cloud-native approach?</strong></p>
<p>We bring several things to the table that help our customers adopt the cloud-native and microservices approach. First and foremost is our open-standards-based cloud platform, IBM Cloud. You can’t underestimate the importance of open standards when choosing a cloud platform, and IBM’s embrace of standards such as Cloud Foundry, Docker and Kubernetes makes it possible for you to develop not only for our cloud, but for on-premises private clouds and other vendors’ clouds as well, giving you unprecedented portability.</p>
<p>Second, we have the comprehensive IBM Cloud Garage Method. You can only be successful with cloud-native and microservices architectures if you build them within a methodological framework that includes practices such as small, autonomous, co-located teams; test-driven development; and continuous integration and continuous delivery – the practices that make the approach viable. Finally, we have our people, particularly in the IBM Cloud Garage. The Garage is our secret weapon in helping customers rapidly move to the cloud and microservices by showing them how to apply the method to build systems on Bluemix using all of the latest technologies, practices and approaches. You gain experience with those approaches and technologies at the very same time that you’re building a minimum viable product – the first step toward adopting the approach on all of your systems.</p>
<p><strong>There are still many enterprises that have concerns about cloud migration from either a security or performance point of view. How do you address these concerns?</strong></p>
<p>I’ve heard these concerns many times and it comes down to the fact that neither security nor performance should be a driving factor. It’s possible to build systems on the cloud that are more secure and more performant than current on-premise systems! The key is that you don’t build them in the same way as you do on-premise systems, and it’s that change that is actually what is difficult for teams to understand. For instance, with security, you have to implement security at every layer; you can’t be satisfied with only securing the front-end of your applications thinking that anything behind your firewall is safe by default. The extra attention makes the overall system much more secure. Likewise with performance, in the cloud you have to build systems that are horizontally scalable – that means you have to develop algorithms that work with horizontally scalable systems – which end up not only being better able to scale, and thus perform, but also being more resilient.</p>
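<p>The horizontal-scaling point can be made concrete with a small sketch: partition work by a stable hash of the key so any number of identical workers can process their shard independently, then combine partial results associatively. The hashing and aggregation scheme below is an illustrative assumption, not IBM's method:</p>

```python
import zlib

# Sketch of a horizontally scalable algorithm: state is partitioned so
# identical workers can each process a shard independently, and the partial
# results combine associatively (here, by summing).

def shard_for(key, num_workers):
    """Stable partitioning: the same key always lands on the same worker."""
    return zlib.crc32(key.encode()) % num_workers

def process_shard(items):
    return sum(items)  # each worker's partial result

def scatter_gather(data, num_workers=3):
    shards = [[] for _ in range(num_workers)]
    for key, value in data.items():
        shards[shard_for(key, num_workers)].append(value)
    # Partial results combine associatively, so in production the shards
    # could run on separate machines and be summed at the end.
    return sum(process_shard(shard) for shard in shards)

totals = {"a": 1, "b": 2, "c": 3, "d": 4}
print(scatter_gather(totals))  # same answer as a single-machine sum: 10
```

<p>Adding capacity then means adding workers, not building a bigger machine, and the loss of one shard's worker only delays that shard, which is the resilience benefit mentioned above.</p>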
<p><strong>What advice can you give to enterprises who are outsourcing and waterfall-oriented and want to adopt agile processes and cloud-native development?</strong></p>
<p>The important thing is that you have to begin by changing your mindset. In many countries we’re starting to see a backlash against outsourcing as firms realize that software assets are an important category of intellectual property – one a firm should be responsible for creating and maintaining on its own, just as it creates intellectual capital of other types as part of its core competency.</p>
<p>Software is everywhere now – the IoT and the Cloud now pervade every part of our lives, and any firm that thinks that writing software is outside of what they should be doing will find themselves quickly replaced in the market by more innovative firms that realize that the software is the critical factor – witness the demise of traditional taxicabs in the face of Uber and Lyft.</p>
<p>Once that first shift is made, then the second shift comes more easily. If you realize that building software is critical to your productivity and growth, then you want to build it as quickly as possible and to be able to try new things without having to wait months for a result. That leads directly to the Agile approach and away from a waterfall-based mindset that views software projects as large, multi-year capital expenditures. If you want to fully embrace Agile methods, then you need a technology base that facilitates that, and cloud-native approaches and microservices architectures give you that platform.</p>
<p>The post <a href="https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/">Microservices and cloud-native development versus traditional development</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/microservices-and-cloud-native-development-versus-traditional-development/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Aqua Security CTO reveals how to secure microservices</title>
		<link>https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/</link>
					<comments>https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Wed, 20 Sep 2017 07:24:15 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[Aqua Security]]></category>
		<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[Microservices Security]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=1197</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com he emergence of microservices boosts business agility, enabling rapid application development, deployment and modification. The challenge is baking in microservices security processes. Traditional security processes <a class="read-more-link" href="https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/">Aqua Security CTO reveals how to secure microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>techtarget.com</strong></p>
<p><i>The emergence of microservices boosts business agility, enabling rapid application development, deployment and modification. The challenge is baking in microservices security processes. Traditional security processes can&#8217;t secure microservices, because the latter work in and communicate between both internal and external environments, according to Amir Jerbi, CTO of Aqua Security, a container security platform provider.</i></p>
<p><i>Using </i><i>microservices</i><i> makes security easier for developers or architects in some ways and harder in others. In this Q&amp;A, </i><i>Aqua Security</i><i> Co-founder Jerbi offers advice on avoiding mistakes in setting up microservices security and balancing the often-conflicting needs for steel-trap security and rapid deployment of and communication between microservices.</i></p>
<p><b>What are the security pros and cons of microservices?</b></p>
<p>Amir Jerbi: Before cloud, you deployed your software on premises. You had to use the on-prem mechanism in order to secure your application. You had to use a firewall, host-based intrusion protection and maybe a code analysis tool and/or a tool to test for insecure coding. However, when deploying that app in the cloud, you had to use different tool sets and methodologies and build and deploy on one or separate cloud bases. With the shift to using microservices and containers, you can actually use the same tool set and methodologies to deploy on prem or in the cloud and even on every kind of cloud platform.</p>
<p>Microservices make it easier to develop an app that can run on multiple cloud platforms, because you&#8217;re using the same packaging and the same artifacts, and you can actually secure them exactly the same way. So, microservices can enable greater consistency in the way that you build, deploy and secure software.</p>
<p><b>What are some mistakes that could be made in developing and deploying microservices and building a microservices architecture that could lead to security issues?</b></p>
<p>Jerbi: With the monolithic architecture and application, all of the communications between different parts of apps would be internal. Now, to secure microservices, it&#8217;s more complex than that. You have multiple microservices running either on the same machine or distributed host, and there is a lot of communication between those microservices. Some of the communication can be on the same machine, some between different machines and some between different data centers.</p>
<p>When there are so many communications done between those microservices, it means that the perimeter is not something that is well-defined. You can&#8217;t put everything inside of a box and just protect that box. It&#8217;s not enough just to put a firewall in place, because all those communications must be governed and authenticated.</p>
<p><b>Easier app updating is a benefit of microservices. A developer can just redeploy a single microservice while letting the application run consistently. Doesn&#8217;t that require </b><b>change management</b><b> to be done differently than before in order to secure microservices and apps?</b></p>
<p>Jerbi: Yes. You need to make sure that the change that you&#8217;re adding to your system is controlled, and the software that is added doesn&#8217;t impact overall security of your application. Now, you need to manage so many small pieces, and that will require different mechanisms for authentication.</p>
<p>Microservice systems, many times, are open, allowing communication between the different services without any security control. So, it&#8217;s important to find ways to do strong authentication between microservices.</p>
<p>One way to do microservices authentication well is adopting a well-established authentication framework, like TLS [Transport Layer Security], and implementing two-factor authentication between all of the microservices. But to do that, you need to maintain new methods in order to publish security certificates and maintain those certificates. This is very different from traditional authentication, where they only needed to maintain TLS for the web server, which is much easier.</p>
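<p>The certificate-based mutual authentication Jerbi describes can be sketched with Python's standard <code>ssl</code> module: each side presents a certificate and verifies the peer's against a shared internal CA. The file paths in the comments are placeholders for credentials issued by that CA, not a specific product's layout:</p>

```python
import ssl

# Sketch of mutual TLS between microservices: the serving side *requires*
# a client certificate, and both sides trust only an internal CA.

def server_context(ca_file=None):
    """TLS context for a service that requires client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED      # reject peers without a cert
    if ca_file:
        ctx.load_verify_locations(ca_file)   # trust only the internal CA
        # ctx.load_cert_chain("svc.crt", "svc.key")  # this service's identity
    return ctx

def client_context(ca_file=None):
    """TLS context for a calling service that proves its own identity."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = True                # verify the callee's name
    if ca_file:
        ctx.load_verify_locations(ca_file)
        # ctx.load_cert_chain("caller.crt", "caller.key")
    return ctx

print(server_context().verify_mode == ssl.CERT_REQUIRED)  # True
```

<p>The operational burden Jerbi mentions lives outside this snippet: issuing, distributing and rotating the certificates that <code>load_cert_chain</code> would consume.</p>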
<p><b>Are there other security perils that come with microservices?</b></p>
<p>Jerbi: There is friction between the need for diligence in security and the need to make and deploy changes very, very fast because, well, you can, and your competitors can, too.</p>
<p>Cloud deployment runs so fast today that Netflix and Google can roll out updates many times a day. Others want to do the same but often proceed without thinking through the security piece. The result can be pushing new changes into production apps without understanding the security impact of the change.</p>
<p>Traditionally, a security process ran slowly. It required multiple intervals. It was a process that touched every stage of the pipeline. Now, the people in charge of testing software and pushing it very fast to production are also tasked with security &#8212; largely, securing the code. Wisely, many are taking the DevSecOps approach, wherein a group takes charge of security with a standard set of processes, such as two-factor authentication, RASP [runtime application self-protection], threat modeling, etc.</p>
<p><b>Which other approaches for securing microservices do you find useful?</b></p>
<p>Jerbi: Shift-left testing is more focused on developers, and it allows you to do security analysis of your code, your microservices or your packages. Shift left allows you to fail fast, failing the build when security issues are found. It might slow down development processes a bit, but it helps developers understand what&#8217;s needed from them in order to build applications with better security. Over time, the knowledge that comes from doing shift left will increase the level of overall security in the organization and allow faster deployment times. That’s because the code will be secure before deployment, and there won’t be after-deployment bottlenecks.</p>
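<p>A shift-left gate of the kind described above can be as simple as a check that runs before the build proceeds and fails it on findings. This sketch uses two toy patterns in place of a real SAST or dependency scanner; the patterns and return codes are invented for illustration:</p>

```python
import re

# Illustrative shift-left gate: scan source strings for obviously insecure
# patterns before the build proceeds. A real pipeline would run a proper
# scanner (SAST, dependency audit); these patterns are toy examples.

INSECURE_PATTERNS = [
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
    (re.compile(r"password\s*=\s*['\"]"), "hard-coded credential"),
]

def scan(source):
    """Return a list of findings; an empty list means the gate passes."""
    findings = []
    for pattern, message in INSECURE_PATTERNS:
        if pattern.search(source):
            findings.append(message)
    return findings

def gate(source):
    """Fail fast: non-zero exit code stops the build on any finding."""
    findings = scan(source)
    if findings:
        print("build failed:", "; ".join(findings))
        return 1
    return 0
```

<p>Wired into CI, the developer sees the failure minutes after the commit rather than after deployment, which is the bottleneck shift-left removes.</p>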
<p>The post <a href="https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/">Aqua Security CTO reveals how to secure microservices</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/aqua-security-cto-reveals-how-to-secure-microservices/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Why microservices are the foundation to a digital future</title>
		<link>https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/</link>
					<comments>https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 29 Aug 2017 10:34:07 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[digital future]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[DX strategies]]></category>
		<category><![CDATA[IT]]></category>
		<category><![CDATA[software development]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=816</guid>

					<description><![CDATA[<p>Source &#8211; networkworld.com There’s no doubt that digital transformation (DX) is revolutionizing the way we do business, and cloud computing serves as a key cog in the DX <a class="read-more-link" href="https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/">Why microservices are the foundation to a digital future</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; <strong>networkworld.com</strong></p>
<p>There’s no doubt that digital transformation (DX) is revolutionizing the way we do business, and cloud computing serves as a key cog in the DX machine. Cloud’s elasticity can indeed help digital businesses communicate more rapidly and increase innovation. But to extract full value from the cloud, companies must make sure that they aren’t bringing the equivalent of a cutlass to a gun fight when it comes to migrating existing applications and accelerating software development.</p>
<p>Here is what I mean: many businesses start their migration journeys by lifting and shifting existing on-premises applications into the cloud, making few to no changes to the application itself. But running the same old monolithic application architectures in the cloud means that your applications aren’t built to maximize cloud benefits. Just the opposite: they often present scalability issues, increase cost and require time-consuming application support. Ultimately, this will erode DX strategies, which depend on modernizing, rapidly iterating, and scaling applications.</p>
<p>To fully maximize the cloud, companies need to change application models to suit this new environment. At the same time, this model must also work with existing virtualized infrastructures, as cloud and on-premises IT infrastructure must co-exist for some years.</p>
<h2>Apps built for DX</h2>
<p>So, what to do? Lift and shift can work as a viable first step, if you know that the application already performs well on premises. From there, companies can lift and extend by refactoring the application, making significant adjustments to make its architecture compatible with a cloud environment. They can also opt for a full redesign and re-write it as a cloud-native application, a much more work-intensive option reserved for high-value apps that require optimal performance and agility. This is a space where enterprises take a much bigger leap ahead than their service-operator compatriots, streamlining their own networks of their own accord and liberating themselves from vendor lock-in.</p>
<p>How can the enterprise go about this? The answer lies in microservices and containers, two high-growth technologies that are powering DX strategies at companies such as Saks Fifth Avenue and BNY Mellon, according to the Forrester Research report, “Why The CIO Must Care About Containers.”</p>
<p>With a microservice approach to application development, large applications are broken down into small, independently deployable, modular services, each representing a specific business process and communicating over lightweight interfaces such as application programming interfaces (APIs).</p>
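<p>To make the idea concrete, here is a minimal sketch of such a service in Python, using only the standard library. The &#8220;inventory&#8221; service, its route and its stock data are invented for illustration; a real deployment would sit behind a framework and a service registry, but the shape is the same: one small, focused business capability behind a lightweight JSON-over-HTTP API.</p>

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "inventory" microservice: one focused business
# capability exposed over a lightweight HTTP/JSON API.
STOCK = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.lstrip("/")
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or a test) consumes it over the wire.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/sku-1") as resp:
    reply = json.loads(resp.read())
print(reply)  # {'sku': 'sku-1', 'in_stock': 12}
server.shutdown()
```

<p>Because the service owns nothing but its own small contract, it can be deployed, scaled and replaced independently of the rest of the application.</p>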
<p>This approach supports DX activities in several ways. Microservices are easily deployed, scale well, and require less production time, while individual services can be reused across projects. Developers can therefore work more quickly and update applications more rapidly. There are a couple of drawbacks, however. Frequently accessed microservices require a larger number of API calls, which can increase latency and degrade application response time. Moreover, the need to have multiple microservices operate in concert at any given moment creates a multitude of interdependencies within the application, making it more challenging to monitor application performance and quickly identify the root cause of degradations.</p>
<p>Containerization is a virtualization method that helps solve some of the latency and efficiency problems of microservices. A container bundles an application together with the pieces it depends on, such as files, environment variables, and libraries. Unlike traditional virtual machines, however, containers share the host operating system kernel; without the overhead of hypervisor processing, many more microservices can run on each server, significantly boosting application performance.</p>
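<p>A hypothetical Dockerfile for such a service makes the bundling concrete: the application ships with its runtime and libraries, but with no guest operating system, because the container shares the host kernel. The file names and base image here are purely illustrative.</p>

```dockerfile
# Illustrative container image for a small Python microservice.
FROM python:3.12-slim
WORKDIR /app
# Bundle the dependencies the service relies on...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# ...and the service code itself.
COPY inventory_service.py .
EXPOSE 8080
CMD ["python", "inventory_service.py"]
```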
<p>Code-independent service assurance helps address the second challenge with microservices: monitoring the multitude of interdependencies. It provides visibility into the communication and transactions across microservices without the need to instrument the bytecode. This methodology is the equivalent of monitoring wire data across traditional networks, adapted to virtualized and containerized environments. It is not only application agnostic, but also capable of providing insights at every layer of the service and application stack.</p>
<p>Empowered with this visibility, the enterprise gains greater clarity on what is going on across the physical and virtual wires of its infrastructure, applications and services. In a world where data is currency and application and service assurance are the basis for investment, this method of ensuring visibility and performance is crucial. Add the ability to detect anomalies that may indicate security breaches, and the resulting solution becomes an integral part of a successful DX and business assurance strategy.</p>
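<p>The two ideas above, wire-level visibility and anomaly detection, can be sketched in a few lines of Python. Assume passive probes have captured timestamped request/response pairs on the virtual wire between containers; the service names, timestamps and five-times-median threshold are all invented for illustration. Per-service latency and a crude anomaly flag then fall out without touching the services&#8217; own code:</p>

```python
from collections import defaultdict
from statistics import median

# Hypothetical wire-data observations captured passively between
# containers: (service, request_ts_ms, response_ts_ms). No bytecode
# instrumentation inside the services themselves.
events = [
    ("inventory", 0, 14), ("inventory", 100, 116), ("inventory", 200, 215),
    ("pricing", 0, 30), ("pricing", 100, 131), ("pricing", 200, 620),
]

# Group observed latencies per service.
latencies = defaultdict(list)
for service, req_ts, resp_ts in events:
    latencies[service].append(resp_ts - req_ts)

# Crude stand-in for anomaly detection: flag a service whose worst
# latency is far above its own typical (median) latency.
anomalies = [
    service for service, samples in latencies.items()
    if max(samples) > 5 * median(samples)
]
print(anomalies)  # ['pricing']
```

<p>A real assurance platform applies far richer statistics, but the principle is the same: the wire data alone is enough to localize which service is degrading.</p>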
<h2>Agility and other benefits</h2>
<p>While monitoring and assuring microservices performance may be challenging, doing so is highly advantageous and drives innovation and business agility. With microservices and containers, services can be created and altered with ease and speed. Adopting microservices allows enterprises to refactor their applications effectively, either before migration or after lifting and shifting them to the cloud, and to develop from scratch applications optimized for private and public cloud environments.</p>
<p>Of course, a cultural change that promotes experimenting, adapting and implementing at a quicker rate will need to take hold. Moving from a fail-safe to a safe-to-fail environment with microservices and containers provides the perfect opportunity for this, if robust service assurance is in place. Along with encouraging a culture of innovation, it will allow the enterprise to move far more rapidly when implementing new services and fixing problems.</p>
<p>This microservices-led architecture, combined with robust service assurance, will be crucial for bringing the full benefits of agile service delivery and cloud elasticity into play at reduced cost, and for helping the enterprise dominate the game.</p>
<p>The post <a href="https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/">Why microservices are the foundation to a digital future</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/why-microservices-are-the-foundation-to-a-digital-future/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
