Recent trends in cloud computing fuel the need for DevOps methods

Source – techtarget.com

Cloud services have transformed IT infrastructure, but the most recent trends in cloud computing signal a more fundamental shift that’s reshaping jobs. Newer cloud services and application design principles, such as microservices, serverless computing and function as a service, have important implications for both IT operations staff and developers.

However, understanding the difference between these services and how they affect application deployment can be confusing, especially since most cloud providers will simply tell you their service is best. Let’s review the characteristics that define each service and how they fit in with DevOps methods.

The rise of microservices

In 2011, the concept of microservice architecture was just beginning. By 2015, every developer was talking about it. Large companies were all in on microservices, touting the benefits of code reusability, reduced risk from upgrades and the speed at which teams could deploy new features. Microservices made it easy for developers to work in small teams while still contributing to a large-scale product that could tolerate an outage of any single microservice.

There are three key ingredients to a microservice:

  • It is independently scalable and deployable.
  • Each service is responsible for the smallest possible task.
  • Services may work better together, but they will fail gracefully if one dies.

For example, Netflix employs several microservices in its overall product, including one for recommendations on videos to watch next. If that recommendation service goes down, the rest of the streaming platform continues on as if nothing happened.
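
A minimal sketch of that kind of graceful degradation, assuming a hypothetical recommendations service reachable over HTTP: if the call fails or times out, the caller falls back to a generic list instead of failing the whole request.

```python
import requests  # third-party HTTP client

# Hypothetical internal endpoint and fallback list, for illustration only.
RECOMMENDATION_URL = "http://recommendations.internal/api/v1/recommend"
DEFAULT_RECOMMENDATIONS = ["popular-title-1", "popular-title-2"]

def get_recommendations(user_id: str) -> list:
    """Ask the recommendations microservice; degrade gracefully if it is down."""
    try:
        resp = requests.get(RECOMMENDATION_URL, params={"user": user_id}, timeout=0.5)
        resp.raise_for_status()
        return resp.json()["titles"]
    except requests.RequestException:
        # The recommendations service is unavailable or slow; the rest of the
        # product keeps working with a generic list instead of an error page.
        return DEFAULT_RECOMMENDATIONS
```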

Microservices helped fuel the rise of Docker, which allows developers to further segregate their individual components via containerization. Docker helps developers deploy applications more quickly and in multiple parts, without having to worry about underlying hardware or even the OS.

The case for serverless computing

Among other recent trends in cloud computing is serverless, which stands on the premise that developers should not have to worry at all about underlying hardware. Google App Engine made serverless computing available before Amazon Web Services’ (AWS) Lambda made it popular. While Google App Engine was an impressive technology, it arrived before most developers were ready to give up control of the underlying hardware, and it never saw wide adoption.

There are three key elements that make something serverless:

  • There are no idle charges, meaning there is no cost for time that isn’t used.
  • There is no provisioning required. Infrastructure scales automatically.
  • You do not need to manage any OS, hardware or unrelated software.

Some providers may put safeguards or limits on how much capacity you can use without manually requesting more. The point is to ensure that, as scaling happens automatically, you don’t end up with an unexpectedly high bill.
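
As one illustration of such a safeguard, assuming an AWS Lambda deployment, a team can cap how far a single function is allowed to scale with its reserved-concurrency setting (the function name below is hypothetical):

```python
import boto3  # AWS SDK for Python

lambda_client = boto3.client("lambda")

# Cap the hypothetical "report-generator" function at 50 concurrent executions,
# so automatic scaling cannot run away and produce a surprise bill.
lambda_client.put_function_concurrency(
    FunctionName="report-generator",
    ReservedConcurrentExecutions=50,
)
```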

Some, but not all, serverless computing environments are also function as a service (FaaS). For example, AWS Lambda and Auth0’s Webtasks are both serverless FaaS. AWS CodeBuild and Google App Engine are serverless, but not FaaS.

Go on demand with function as a service

Amazon introduced AWS Lambda in late 2014, and it reached general availability in 2015. Lambda thrust users into serverless computing, and it also introduced the concept of function as a service. AWS Lambda is both serverless (no provisioning to manage, no idle charges and no hardware to maintain) and a FaaS.

There are three key factors that define FaaS:

  • It executes code on demand (no idle executions).
  • It scales automatically.
  • It runs one specific function without worrying about OS, hardware, etc.

With FaaS, users can run on-demand code blocks that are lightweight and easily created and torn down. Functions running in this environment need to have a short runtime, typically less than five minutes, and are often best suited for applications that respond directly to user interactions. For example, a developer could write code for a FaaS that serves up a dynamic website or checks a user’s permissions for a given API. FaaS is often used as middleware to apply business logic rules to user interactions with a database. It is also commonly used for webhooks or other event-based triggers.
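
A minimal sketch of such an on-demand function, written against the AWS Lambda Python handler convention; the permission table here is a hypothetical stand-in for whatever business logic the function would really apply.

```python
import json

# Hypothetical permission map; in practice this might live in a database or identity service.
ALLOWED_ACTIONS = {"alice": ["read"], "bob": ["read", "write"]}

def handler(event, context):
    """Invoked per request (for example, via API Gateway); it runs, returns and is torn down."""
    params = event.get("queryStringParameters") or {}
    user = params.get("user", "")
    action = params.get("action", "read")
    allowed = action in ALLOWED_ACTIONS.get(user, [])
    return {
        "statusCode": 200 if allowed else 403,
        "body": json.dumps({"user": user, "action": action, "allowed": allowed}),
    }
```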

FaaS does not imply serverless. For example, Docker Functions requires you to run servers (or VMs) with Docker installed, but it lets you quickly trigger a single container with a bit of code. FaaS simply means the code is executed only in response to an event; it does not rule out underlying infrastructure that sits idle while waiting for a customer’s code to run.

Where platform as a service fits

Platform as a service (PaaS) is an older concept. While similar to FaaS in that it requires no manual provisioning, PaaS often involves idle runtimes and isn’t truly serverless. PaaS providers include Google App Engine and Heroku. These providers typically support a framework, such as Express.js, or supply a custom Python framework, as Google App Engine does, and automatically scale the infrastructure, adding servers based on application need.

There are two key conditions that define PaaS:

  • It’s a single, end-to-end platform on which to build an entire application.
  • It requires no provisioning, no hardware, no OS and no other software.
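
To give a sense of the developer experience, here is a minimal sketch of the kind of application such a platform typically expects: a small web app in a supported framework (Flask is used here purely for illustration), with provisioning and scaling left entirely to the provider.

```python
from flask import Flask, jsonify  # Flask stands in for whatever framework the platform supports

app = Flask(__name__)

@app.route("/")
def index():
    # The platform routes requests here and adds instances as traffic grows;
    # the application code never touches servers, the OS or scaling logic.
    return jsonify({"message": "Hello from a PaaS-hosted app"})
```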

Many developers have switched away from PaaS offerings in favor of FaaS, as the latter offers a higher level of abstraction without as much vendor lock-in. Google’s Firebase, however, is a PaaS that’s becoming more popular. Google Firebase started off as a simple database as a service, but has since morphed into a broader platform offering lots of connected parts. Firebase is unique in that it’s a fully fledged PaaS that provides FaaS as one of its offerings.

The many uses of software as a service

The highest level of abstraction, with no management requirements left to the user, is software as a service (SaaS). Database as a service, for example, is a type of SaaS that lets developers build better applications without managing databases themselves.

In order to be SaaS, the software needs to meet the following criteria:

  • run without any installation on your part;
  • require no coding to get started;
  • be accessible from anywhere that has internet connectivity; and
  • automatically scale to your needs.

There are many types of SaaS, including products such as Salesforce and Gmail. Developers and IT operations professionals use SaaS-based tools for application performance monitoring, databases and security.

It’s important to note that developers need more than SaaS to create an application, and SaaS offerings cannot simply be joined together to make a new one; anything that requires coding to connect the pieces is not SaaS. A typical serverless application instead uses something like a FaaS to connect multiple SaaS offerings so that no servers have to run at all, such as functions on AWS Lambda connecting to Amazon DynamoDB, a database-as-a-service offering.
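
A minimal sketch of that pattern, assuming a Lambda function that reads a user profile from a DynamoDB table; the table and attribute names are hypothetical.

```python
import json
import boto3  # AWS SDK for Python

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-profiles")  # hypothetical table name

def handler(event, context):
    """FaaS handler that talks only to managed services, so no servers sit idle on our behalf."""
    params = event.get("queryStringParameters") or {}
    user_id = params.get("user_id", "")
    item = table.get_item(Key={"user_id": user_id}).get("Item")
    return {
        "statusCode": 200 if item else 404,
        "body": json.dumps(item or {"error": "user not found"}, default=str),
    }
```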

The DevOps connection

All of the as-a-service offerings can be considered cloud services. Any service that doesn’t require local hardware or even software installations is a cloud offering. Cloud services generally remove a lot of requirements for operations workers, who can focus on cloud service management rather than hardware.

As cloud computing became more popular, organizations required operational IT staff to move beyond managing physical hardware and learn development skills in order to manage virtualized hardware. This is where the term DevOps started. Operations staff needed to learn additional skills to survive in the new cloud age. In some organizations, operations teams transformed into DevOps teams, which required operations professionals to learn some coding in order to keep up and stay relevant.

Recent trends in cloud computing, such as microservices, serverless computing and FaaS, however, have introduced a new fundamental shift. Now that we’re moving more applications to serverless platforms, it’s the developers’ turn to learn a little more about operations. We can’t rely on operations staff to manage cloud resources if the cloud providers automatically handle everything for us. That doesn’t mean serverless functions will scale indefinitely on their own. With traditional architecture, operations teams could simply add virtual instances or request more capacity; with serverless, it’s important to know where the bottlenecks exist. Developers need to be aware of these limitations before they design services for web-scale architecture.

For example, DynamoDB is a service that developers can provision to nearly infinite scale. In practice, however, scale is limited by the hash (partition) key, which determines how requests are distributed. Hash keys are hard to change after an application is built, so developers need to understand these limitations before starting development. Otherwise, they end up rewriting code later.
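
As a concrete illustration, assuming a boto3 table definition, the hash (partition) key is fixed when the table is created, which is why a poor choice is so expensive to undo later. The table and key names below are hypothetical.

```python
import boto3  # AWS SDK for Python

dynamodb = boto3.client("dynamodb")

# The hash (partition) key chosen here determines how evenly traffic spreads across
# partitions; changing it later effectively means migrating to a brand-new table.
dynamodb.create_table(
    TableName="user-events",
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},      # partition key: choose for even access
        {"AttributeName": "event_time", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```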
