<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>microservices deployment Archives - Artificial Intelligence</title>
	<atom:link href="https://www.aiuniverse.xyz/tag/microservices-deployment/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aiuniverse.xyz/tag/microservices-deployment/</link>
	<description>Exploring the universe of Intelligence</description>
	<lastBuildDate>Tue, 21 Aug 2018 05:58:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Securing microservice environments in a hostile world</title>
		<link>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/</link>
					<comments>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Tue, 21 Aug 2018 05:58:51 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[application development]]></category>
		<category><![CDATA[cloud-native]]></category>
		<category><![CDATA[Microservice]]></category>
		<category><![CDATA[microservices deployment]]></category>
		<category><![CDATA[security mechanisms]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2768</guid>

					<description><![CDATA[<p>Source &#8211; networkworld.com At the present time, there is a remarkable trend for application modularization that splits the large hard-to-change monolith into a focused microservices cloud-native architecture. The <a class="read-more-link" href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Securing microservice environments in a hostile world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; networkworld.com</p>
<p>At present, there is a pronounced trend toward application modularization: splitting the large, hard-to-change monolith into a focused, cloud-native microservices architecture. The monolith keeps much of its state in memory and replicates it between instances, which makes it hard to split and scale. Scaling up is expensive, and scaling out requires replicating the state and the entire application, rather than only the parts that need to be replicated.</p>
<p>Microservices, by contrast, separate the logic from the state. That separation lets the application be broken apart into a number of smaller, more manageable units, which are easier to scale. A microservices environment therefore consists of multiple services communicating with each other. All communication between services is initiated and carried out over network calls, and services are exposed via application programming interfaces (APIs). Each service has its own purpose and delivers a distinct piece of business value.</p>
<p>Within a microservices deployment, one must assume that the perimeter is breachable. Traditional security mechanisms provide a layer of security against only a limited number of threats. Such old-fashioned mechanisms cannot catch internal bad actors, which is where most compromises occur. It is therefore recommended to deploy multiple security layers and adopt zero trust as the framework. This way, the new perimeter and decision point sit at the microservice itself.</p>
<p>We must now enforce separation, along with consistent policy between the services, while avoiding the perils of traditional tight coupling and without jeopardizing security. We need a solution in which policy is managed centrally but enforced in a distributed fashion, to ensure the workloads perform as designed and do not get compromised.</p>
<h2>The cost of agility</h2>
<p>The two main drivers for change are agility and scale. Within a microservices environment, each unit can scale independently, driving massively scalable application architectures. Yet, this type of scale was impossible when it was necessary to couple heavy data services along with the application.</p>
<p>The ability to scale and react rapidly increases business velocity, which lets organizations reap benefits in cost and resilience, as well as improved ways of building and managing the application. However, the decentralized nature of agile deployments introduces challenges in terms of governance.</p>
<p>We should keep in mind that we now have a distributed organization with sub-teams responsible for individual microservices. In addition, patching and updates are carried out in real time. This creates a gap that needs to be filled: visibility, and the ability to scale policy in a distributed fashion.</p>
<h2>Complexity is the enemy of security</h2>
<p>The cloud-native approach introduces considerable complexity. Besides complexity, the end user is responsible for securing their own environment. With microservices, there are many more moving pieces and paths of communication, introducing complexity that must be managed. We need to manage this complexity while keeping the holistic view of the application and visibility as to how each component is operating.</p>
<p>The attempt to secure a complex deployment using existing tools does not work and leads to a complicated security solution with complex policies. Complexity is the enemy of security. As security solutions become more complex, they become unmanageable and less secure. There is a requirement for a new unified security framework that can adapt to the different microservice environments while still providing full visibility along with simplified policy management.</p>
<p>You really need to know who is talking at any given point, authenticate the source, and authorize the type of transaction the API call is trying to perform. You should know what specific communication is going on within these channels, and what should be authenticated and authorized to communicate.</p>
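<p>As a minimal sketch of that idea (not from the original article), the following Python snippet signs each call with an HMAC-based service token, authenticates the caller by verifying the signature, and authorizes the specific action against an allow-list. The service names, action names, and shared-key setup are all illustrative; a real deployment would use per-service keys from a secret store and a standard token format such as JWT.</p>

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative; in practice, per-service keys from a secret store

def sign_call(service: str, action: str) -> str:
    """Client side: attach a verifiable identity to the API call."""
    claims = json.dumps({"svc": service, "act": action}).encode()
    mac = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + mac

# Server side: which callers may perform which transaction types (authorization).
ALLOWED = {("orders", "read:invoice"), ("billing", "write:invoice")}

def authorize(token: str) -> bool:
    payload, _, mac = token.partition(".")
    claims = base64.b64decode(payload)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):  # authenticate the source
        return False
    c = json.loads(claims)
    return (c["svc"], c["act"]) in ALLOWED      # authorize the transaction
```

<p>The decision point here lives with the service itself, not at a network choke point, which is the zero-trust posture the article argues for.</p>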
<p>This is impossible to do efficiently unless changes are introduced to the microservices architecture. Microservice deployments are susceptible to an array of security threats: API vulnerabilities, logic attacks, lateral movement, and the inadequacy of traditional security tools can bring a system down like a house of cards.</p>
<h2>Diverse traffic patterns</h2>
<p>Today’s traffic patterns are different from those of the past. Nowadays, there are a lot of APIs connecting inbound and outbound along with internal communication. The APIs are all public, open and customer facing. The administrators are permitting this type of communication in and out of the public and private data center.</p>
<p>There is typically considerable asymmetry between the front-end ingress API and the backend APIs. Considering the customer environment, there is an initial consumer API call, but that propagates numerous other backend API calls to carry out, for example, user and route lookups.</p>
<p>As the microservice environment develops more components, it is difficult to monitor and make sure everything is secure. The deployment of web application firewalls (WAFs) to secure the public APIs and the use of next-gen firewalls filtering at strategic network points cover only a part of the attack surface. We must still assume that the perimeter is breachable along with the high potential for internal bad actors.</p>
<h2>Traditional security mechanisms fall short</h2>
<p>The network perimeter was born in a different time, and traditional security mechanisms based on Internet Protocol (IP) and 5-tuple no longer suffice. The traditional perimeter consists of virtual or physical appliances such as a firewall, IPS/IDS or API gateway located at strategic network points. In reality, the traditional perimeter and its mechanisms provide only a first layer of security. Even when labeled defense in depth, it falls far short of that status in a microservices environment.</p>
<p>For example, API gateways are meant to manage inbound calls. APIs are registered with the API gateway, which changes the workflow. They don&#8217;t scale in a microservices environment where there could be hundreds of services, each exposing a number of APIs and each containing multiple instances.</p>
<p>The API gateway needs to scale not just with external traffic, but also with east-west internal traffic, which typically accounts for the largest share of total traffic. Web application firewalls (WAFs) do not change the workflow, but they share some of the challenges of API gateways. It is impossible to create and manage security when policies are not distributed to the workloads. Even a limited number of public APIs means a lot of work, and the effort grows exponentially with internal communications. This is clearly not practical for microservices deployments.</p>
<p>Next-gen firewalls are typically the central security resource. They are better suited to north-south traffic flows than to internal east-west traffic. In a world where everything is HTTP, firewalls do not offer the best visibility and access control. A firewall typically enforces security on source and destination IP addresses and protocols, but in a microservices environment the common ports are 80/443, and it is typical for all services to use the same port and protocol.</p>
<p>For this to work, the firewall would need to follow the identity behind the source and destination IP addresses and port numbers. It would also have to keep up with an orchestration system that changes those identities all the time.</p>
<p>Enforcement should be done in a distributed fashion, right down at the workload level. If what you are monitoring and protecting is accessible to the application and application behavior, it matters less where the attacks come from. However, security frameworks based on traditional mechanisms can leave many avenues for bad actors to camouflage their attacks.</p>
<h2>The larger attack surface requires a new perimeter</h2>
<p>If security cannot follow the microservices, you need to bring security to the microservice and embed it there. The effective perimeter is a decision point at the microservice, not at strategic points within the network. The new perimeter is at the microservice layer and everywhere there is an API. This is the only way to protect against, in particular, logic attacks.</p>
<p>Logic attacks become more prevalent in a microservices environment. This type of threat is carried out by a sophisticated attacker, not a script kiddie using a readily accessible tool. They take their time to penetrate the perimeter, silently explore the internal environment, and go unnoticed while accessing valuable assets.</p>
<p>Cloud-native applications expose their logic in multiple layers, not just one. Each microservice exposes some application logic through an API, and these APIs, if not properly secured, can be manipulated by a bad actor. A practical example is an API meant to return a single entry from a database. If a bad actor can modify the query slightly, they can pull multiple entries they have no authority to access. Since every single API can be exploited this way, the attack surface is much larger than before, opening the door to an advanced persistent threat (APT).</p>
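<p>To make the single-entry example concrete, here is a hedged Python sketch (the schema and names are invented for illustration) of the defensive pattern: bind the identifier as a query parameter and enforce record ownership inside the query itself, so a manipulated identifier cannot widen the result set beyond the caller&#8217;s own records.</p>

```python
import sqlite3

# Illustrative schema and data for the single-entry API example.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entries (id INTEGER, owner TEXT, data TEXT)")
db.executemany("INSERT INTO entries VALUES (?, ?, ?)",
               [(1, "alice", "a-data"), (2, "bob", "b-data")])

def get_entry(caller: str, entry_id: int):
    """Return one entry, but only if the caller owns it.

    The id is bound as a parameter (no string concatenation), and the
    ownership predicate lives in the query, so a manipulated id cannot
    pull entries the caller has no authority to access.
    """
    row = db.execute(
        "SELECT data FROM entries WHERE id = ? AND owner = ?",
        (entry_id, caller),
    ).fetchone()
    return row[0] if row else None
```

<p>Parameterization stops query manipulation; the per-record authorization check stops the logic attack the article describes, where a syntactically valid request reaches data it should not.</p>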
<p>As stated earlier, the distributed architectures give a much larger surface area. Each one of these small components is exposed to threats. The surface area is the sum of all APIs and the interactions both internal and external of the application. If you examine an API that is exposed to the outside, you would see hundreds of API calls. This offers numerous ways to exploit the vulnerabilities of an externally facing API. Within a kill-chain, the API is not just used to gain access but also as a way to perform lateral movements.</p>
<p>We also have challenges with traffic encryption. A large part of security in the new age of east to west traffic is the ability to have everything encrypted. It is the role of the application to perform the encryption.</p>
<p>In a microservices environment, key management is a difficult task. Moreover, IPsec has very coarse granularity. If you need finer-grained encryption in this environment, you need a new type of solution.</p>
<h2>Solution components: identity</h2>
<p>Workloads can be encapsulated in a number of ways such as a virtual machine (VM), bare metal or container. As a result, what’s required is a mechanism to provide a provable and secure identity to the application, not just to the server or container but also to the actual workload that is running. Ideally, identity can be a list of attributes. Think of them as the key-value pairs that describe an object to the level of detail that you want. Indeed, the more detail you have, the better.</p>
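<p>A minimal illustration of identity as key-value attributes might look like the following Python sketch; the attribute names (service, env, image, team) are hypothetical, not prescribed by any particular product.</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadIdentity:
    """Identity as a set of key-value attributes describing the workload
    itself, not the server or container it happens to run on."""
    attributes: frozenset

def make_identity(**attrs) -> WorkloadIdentity:
    return WorkloadIdentity(frozenset(attrs.items()))

def matches(identity: WorkloadIdentity, **required) -> bool:
    """True when the identity carries every required attribute."""
    return frozenset(required.items()) <= identity.attributes

# The more attributes, the more precisely policy can target the workload.
svc = make_identity(service="payments", env="prod",
                    image="payments:1.4", team="billing")
```
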
<p>The process of providing identity to a service is called identity bootstrapping. You have to trust something in order to provide application identity; that is, there needs to be an external source of truth. Companies such as AWS, VMware, and Octarine provide this by integrating with the orchestration system.</p>
<p>The orchestration system could be anything from vCenter to AWS ECS to Kubernetes. It monitors events for newly spawned workloads. After validating that a newly spawned workload is legitimate, the system provides it with the credentials it needs to prove its identity. This way, secrets are never kept in the code, the container image, or Kubernetes.</p>
<h2>Solution components: visibility</h2>
<p>Once the identity is taken care of, we must create security based on the secured identity. How do you enforce policy and how does it get represented when you communicate it to something else?</p>
<p>Firstly, you need to rely on the application identity and monitor traffic at layer 7, because on every API call the caller identifies itself. You can add the identity on the client and server side, validate it, and log the API call to a central system.</p>
<p>The central system aggregates all API calls in all deployments for the customer&#8217;s environment and provides extensive visibility. This visibility extends over time to include the history of any changes. Such visibility is useful in agile environments.</p>
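<p>The aggregation idea can be sketched in a few lines of Python. This is a toy in-memory collector (all names illustrative, not a real product&#8217;s API): each service reports every API call with the caller&#8217;s identity and a timestamp, and operators derive a who-talks-to-whom view from the log.</p>

```python
from collections import Counter
from datetime import datetime, timezone

class CallLog:
    """Toy central aggregator for API calls across a deployment."""
    def __init__(self):
        self.calls = []

    def record(self, caller: str, callee: str, endpoint: str):
        # Timestamping every call preserves history, so changes in
        # traffic patterns over time remain visible.
        self.calls.append({
            "caller": caller, "callee": callee, "endpoint": endpoint,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def edges(self) -> Counter:
        """Aggregate view: call counts per (caller, callee) pair."""
        return Counter((c["caller"], c["callee"]) for c in self.calls)

log = CallLog()
log.record("web", "orders", "/orders/42")
log.record("orders", "billing", "/invoices")
log.record("web", "orders", "/orders/43")
```
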
<h2>Solution components: anomaly detection</h2>
<p>You must try as much as possible to enforce the policy at the endpoint. However, to detect sophisticated attacks, you sometimes have to correlate multiple signals, such as time of day, payloads, and geographic access patterns.</p>
<p>You therefore need anomaly detection: a component that looks at all the signals at a given time and recognizes small deviations from the baseline that could not be detected by looking at a single endpoint.</p>
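<p>As one simplified example of baseline deviation, assuming a single request-rate signal (real systems correlate many signals at once), a z-score check in Python:</p>

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation that deviates from the baseline by more
    than `threshold` standard deviations. A deliberately small sketch:
    production detectors correlate many signals, not one."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Illustrative baseline: requests per minute for one service endpoint.
requests_per_min = [100, 104, 98, 101, 99, 103, 97, 100]
```
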
<h2>Solution components: policy</h2>
<p>In the past, policy had two prevailing themes: distributed ACL-based policy solutions and segmentation based on VLANs. You need to start thinking of policy as centrally administered, but highly scalable and distributed in its enforcement.</p>
<p>The policy should be based on workload identity, not network identity. With cloud-native, there is no alignment between the identity of a workload and its network identity, so you cannot enforce security through pre-defined network policies built on traditional constructs.</p>
<p>The policy should also be driven by visibility, enabling feedback about policy and information about violations. This would allow the administrators to update the policy as required.</p>
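<p>A toy sketch of such a policy loop in Python, with invented service names: the allow-list is defined centrally on workload identity, each workload enforces it locally, and denied calls are recorded as violations that feed back to administrators.</p>

```python
class Policy:
    """Centrally administered allow-list keyed on workload identity
    (service names, not IP addresses). Enforcement happens at each
    workload; violations are reported back so administrators can
    refine the policy. All edges are illustrative."""
    def __init__(self, allowed_edges):
        self.allowed = set(allowed_edges)
        self.violations = []

    def permit(self, src: str, dst: str) -> bool:
        if (src, dst) in self.allowed:
            return True
        self.violations.append((src, dst))  # feedback for administrators
        return False

policy = Policy([("web", "orders"), ("orders", "billing")])
```
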
<p>The promise of cloud-native applications and agile environments holds many benefits for business. Cloud-native deployments left to their defaults lack proper security tools and methodologies. With a guarded approach, however, you can achieve a secure, agile cloud-native environment.</p>
<p>The post <a href="https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/">Securing microservice environments in a hostile world</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/securing-microservice-environments-in-a-hostile-world/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
			</item>
		<item>
		<title>How a SaaS provider made microservices deployment safely chaotic</title>
		<link>https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/</link>
					<comments>https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/#comments</comments>
		
		<dc:creator><![CDATA[aiuniverse]]></dc:creator>
		<pubDate>Sat, 12 May 2018 05:29:19 +0000</pubDate>
				<category><![CDATA[Microservices]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[microservices deployment]]></category>
		<category><![CDATA[SaaS]]></category>
		<guid isPermaLink="false">http://www.aiuniverse.xyz/?p=2355</guid>

					<description><![CDATA[<p>Source &#8211; techtarget.com Chaos engineering helps enterprises expect the unexpected and reasonably predict how microservices will perform in production. One education SaaS provider embraced the chaos for its <a class="read-more-link" href="https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/">Read More</a></p>
<p>The post <a href="https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/">How a SaaS provider made microservices deployment safely chaotic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Source &#8211; techtarget.com</p>
<p>Chaos engineering helps enterprises expect the unexpected and reasonably predict how microservices will perform in production. One education SaaS provider embraced the chaos for its microservices deployment, but it didn&#8217;t jump in blindly.</p>
<p>San Francisco-based Remind, which makes a communication tool for educators, school administrators, parents and students, faced a predictability problem with its SaaS offering built on microservices. While traffic is steady during most of the year, back-to-school season is extraordinarily busy, said Peter Hamilton, software engineer at Remind.</p>
<p>&#8220;We [were] the No. 1 app in the Apple App Store for two weeks,&#8221; he said.</p>
<p>Unforeseen microservices dependencies caused performance degradations and volatile traffic patterns in production. Remind determined that a microservices architecture on cloud resources was not enough alone to scale and maintain availability.</p>
<section class="section main-article-chapter" data-menu-title="The trade-off with microservices">
<h3 class="section-title">The trade-off with microservices</h3>
<p>Agility and cloud-native deployments rely on microservices, along with a DevOps culture and CI/CD processes.</p>
<p>&#8220;Using a microservices architecture is about decoupling the application teams to better achieve the benefit of iteration [from CI/CD] using the agility that cloud provides,&#8221; said Rhett Dillingham, senior analyst at Moor Insights &amp; Strategy. The fewer developer dependencies, the faster projects move. But speed is only half of the picture, as Remind discovered; microservices add deployment complexities.</p>
<p>&#8220;Once you&#8217;re at scale with multiple apps using an array of microservices as dependencies, you&#8217;re into a many-to-many relationship,&#8221; Dillingham said. The payoff is development flexibility and easier scaling than monolithic apps. The downside is significant debugging and tracing complexity, as well as complicated incident response and root cause analysis.</p>
<section class="section main-article-chapter" data-menu-title="Expect the unexpected">
<h3 class="section-title">Expect the unexpected</h3>
<p>Remind overhauled its microservices deployment approach with chaos engineering across the predeployment staging step, planning and development. Developers use Gremlin, a chaos engineering SaaS tool, for QA and to adjust microservices code before it&#8217;s launched on AWS. Developers from email, SMS, iOS and Android platforms run Gremlin in staging against hypothetical scenarios.</p>
<p>Remind&#8217;s product teams average one major release per month. Hamilton noted that requests take tens of microservices to complete. Remind uses unit and functional tests, user acceptance testing and partial release with tracking, but chaos engineering was the missing piece to simulate attacks and expose the chokepoints in the app design.</p>
<p>Remind&#8217;s main focus with chaos engineering is to interfere with network requests, Hamilton said. The path requests take through multiple microservices is hard to determine and plan for without heuristic testing, and Remind&#8217;s microservices deployments ran into cascading issues because any increased latency downstream causes problems. Database slowdowns overload web servers, requests queue up and the product sends out error messages everywhere.</p>
<p>&#8220;We&#8217;re still learning how chaos affects how you do development,&#8221; he said. Gremlin recommends development teams run the smallest experiment that yields usable information. &#8220;People assume the only way to do chaos engineering is to break things randomly, but it&#8217;s much more effective to do targeted experiments where you test a hypothesis,&#8221; said Kolton Andrus, CEO of Gremlin.</p>
<p>Remind&#8217;s goal is to ensure its product degrades gracefully, which takes a combination of development skills and culture. Designers now think about error states, not just green operations, a mindset that emphasizes a clean user experience even as problems occur, Hamilton said.</p>
<p>Remind explored several options for chaos engineering, including Toxiproxy and Netflix&#8217;s Chaos Monkey. It selected Gremlin because it did not want to build out a chaos engineering setup in-house, and it wanted a tool that fit with its 12-factor app dev model.</p>
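<p>The latency experiments described above can be approximated in a few lines of Python. This is a toy illustration of the pattern, not Gremlin&#8217;s actual mechanism: one wrapper injects artificial delay into a downstream call, and the client substitutes a fallback when the call exceeds its deadline, the graceful degradation Remind aims for. All names are invented.</p>

```python
import time

def call_downstream() -> str:
    """Stand-in for a real downstream service request."""
    return "ok"

def with_latency(fn, extra_seconds: float):
    """Chaos wrapper: degrade a call path with injected delay,
    the way a chaos tool interferes with network requests."""
    def wrapped():
        time.sleep(extra_seconds)
        return fn()
    return wrapped

def call_with_deadline(fn, deadline: float):
    """Degrade gracefully: if the call took longer than its deadline,
    discard the late result and serve a fallback. (A real client would
    enforce the timeout pre-emptively rather than after the fact.)"""
    start = time.monotonic()
    result = fn()
    if time.monotonic() - start > deadline:
        return "fallback"
    return result

slow = with_latency(call_downstream, extra_seconds=0.05)
```
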
<section class="section main-article-chapter" data-menu-title="Chaos vs. conventional testing">
<h3 class="section-title">Chaos vs. conventional testing</h3>
<p>Chaos is about finding the unexpected, Andrus said, up a level from functional testing, which ensures the expected occurs.</p>
<p>Unit testing of interconnections breaks down once an application is composed of microservices, because so many individual pieces talk to each other, Andrus said. Chaos engineering tests internal and external dependencies.</p>
<p>Chaos engineering is the deployment and operations complement to load tests, which help tune the deployment environment with the best configurations to prevent memory issues, latency and CPU overconsumption, said Henrik Rexed, performance testing advocate at Neotys, a software test and monitoring tool vendor. In particular, load tests help teams tailor the deployment of microservices on a cloud platform to take advantage of elastic, pay-as-you-go infrastructure and cloud&#8217;s performance and cost-saving services.</p>
<p>Remind is particularly aware of the cost dangers from degraded performance and uses chaos engineering to model its resource consumption on AWS in failure modes. &#8220;You don&#8217;t want bad code to impact infrastructure demand,&#8221; Rexed said. And microservices are particularly vulnerable to outrageous cloud bills, because each developer&#8217;s or project team&#8217;s code is just one difficult-to-size piece of a massive puzzle.</p>
<p>Of course, if you really want to test end-user experience on a microservices deployment, you can do it in production. &#8220;As the more advanced development teams running microservices take more operational ownership of the availability of their apps, it is expected that they are proactively surfacing bottlenecks and failure modes,&#8221; Dillingham said. But is it worth risking end users&#8217; experience to test resiliency and high availability?</p>
<p>Some say yes. &#8220;No matter how much testing you do, it&#8217;s going to blow up on you in production,&#8221; said Christian Beedgen, CTO of Sumo Logic, which provides log management and analytics tools. &#8220;If nothing is quite like production, why don&#8217;t we test [there]?&#8221;</p>
<p>Testing can leave teams looking everywhere for the wrong problem, or looking for the right problem and simply missing it, Beedgen said. QA and unit tests are necessary but don&#8217;t ensure flawless deployment. The goal is to put code in production with blue/green or canary deployment to limit the blast radius, monitor for deviations from known behavior and roll back as needed.</p>
<p>Remind is not ready to bring chaos into its live microservices deployment. &#8220;We&#8217;re conservative about the things we expose production to, but that&#8217;s the goal,&#8221; Hamilton said. Chaos in production should have limits: &#8220;You don&#8217;t want a developer to come along and trigger a CPU attack,&#8221; he said. A nightmare scenario for the SaaS provider is to huddle up all the senior engineers to troubleshoot a problem that is actually just an unplanned Gremlin attack.</p>
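<p>The canary approach Beedgen describes reduces, at its simplest, to comparing the canary&#8217;s health metrics against the stable fleet and rolling back on regression. A hedged Python sketch with illustrative thresholds (real rollouts also compare latency, saturation, and error budgets):</p>

```python
def canary_decision(baseline_error_rate: float,
                    canary_error_rate: float,
                    tolerance: float = 0.01) -> str:
    """Compare the canary's error rate with the stable fleet's and
    decide whether to promote or roll back. The tolerance value is
    illustrative, not a recommended default."""
    if canary_error_rate > baseline_error_rate + tolerance:
        return "rollback"   # deviation from known behavior: limit the blast radius
    return "promote"
```
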
<section class="section main-article-chapter" data-menu-title="Monitor microservices deployments">
<h3 class="section-title">Monitor microservices deployments</h3>
<p>While Remind prefers to blow up deployments in staging rather than production, it uses the same monitoring tools to analyze attack results. Remind is rolling out Datadog&#8217;s application performance management (APM) across all services. This upgrade to Datadog&#8217;s Pro and Enterprise APM packages includes distributed tracing, which Hamilton said is crucial to determine what&#8217;s broken in a microservices deployment.</p>
<p>Generally, application teams depend much more on tooling to understand complex architectures, Beedgen said. Microservices deployment typically is more ephemeral than monolithic apps, hosted on containers with ever-evolving runtimes, so log and other metrics collection must be aware of the deployment environment. Instead of three defined app tiers, there is a farm of containers and conceptual abstractions, with no notion of where operational data comes from &#8212; the OS, container, cloud provider, runtime or elsewhere &#8212; until the administrator implements a way to annotate it.</p>
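<p>The annotation step Beedgen mentions can be sketched simply: enrich each raw log record with deployment metadata before shipping it, so the record stays interpretable after its ephemeral container is gone. The metadata keys below (container, pod, region) are illustrative, not a standard schema.</p>

```python
import json

def annotate(record: dict, env: dict) -> str:
    """Attach deployment context (container, pod, cloud region) to a
    raw log record so operational data carries its origin with it."""
    enriched = dict(record)
    enriched["meta"] = {k: env[k]
                        for k in ("container", "pod", "region") if k in env}
    return json.dumps(enriched, sort_keys=True)

line = annotate({"msg": "payment failed", "level": "error"},
                {"container": "payments-7f9c", "pod": "payments-7f9c-x2",
                 "region": "us-east-1"})
```
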
<p>Microservices monitoring is also about known relationships, Beedgen said. For example, an alert on one service could indicate that the culprit causing performance degradation is actually upstream or downstream of that service. The &#8220;loosely coupled&#8221; tagline for microservices is usually aspirational, and the mess of dependencies is apparent to anyone once enough troubleshooting is performed, Beedgen said.</p>
<p>Chaos engineering is one way to home in on these surprising relationships in independent microservices, rather than shy away from them. &#8220;That was one of the original goals of testing: to surprise people,&#8221; said Hans Buwalda, CTO of software test outsourcing provider LogiGear. In an Agile environment, he said, it&#8217;s harder to surprise people and generate the all-important unexpected conditions for the application to handle.</p>
</section>
</section>
</section>
</section>
<p>The post <a href="https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/">How a SaaS provider made microservices deployment safely chaotic</a> appeared first on <a href="https://www.aiuniverse.xyz">Artificial Intelligence</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.aiuniverse.xyz/how-a-saas-provider-made-microservices-deployment-safely-chaotic/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
