What Is the Cloud & Cloud Computing?
Cloud computing is widespread nowadays. Why would any business buy bare metal servers in-house or rent them in dedicated data centers, paying for additional equipment and staff (ventilation, cooling and dehumidification systems, on-site system administrators), when it can rent cloud virtual machines and pay only for the resources consumed? Using the cloud saves money, time and effort. But what is the cloud, anyway?
Generally speaking, the cloud is a way of organizing computing resources to ensure maximum scalability through virtualization. Where dedicated data centers operate in terms of individual bare metal servers and racks, cloud data centers use clusters of virtualized instances spanning multiple data centers in various geographical locations.
Why is this better? Let’s take a closer look at how a traditional server-based business operates.
Building on-premise and dedicated infrastructure
A company buys several servers and creates a server room on-premises. Even if you need just one server for testing, one for storing code and one for hosting the production environment, you will also need to store backups somewhere and run a replica server and a load balancer.
In practice, you can easily end up buying up to 10 servers for an on-prem infrastructure and maintaining them (which means also paying for various server equipment and professional maintenance services). Most inconveniently, server workloads are hard to predict, so some of the server resources remain unused most of the time. The cost-efficiency of this approach is very low: these servers will work 8-10 hours a day and idle at night, on weekends, holidays and vacations, yet you will have to pay for them all the same.
The route most businesses used to take was renting these servers from dedicated hosting providers, who in turn rented them from dedicated data centers. This greatly cut maintenance costs, yet the efficiency issues remained: the remote servers still consumed electricity while idling most of the time, and shutting them down for the weekend was clearly not an option. In addition, these servers were still not running at full capacity most of the time.
Therefore, the following issues limit the efficiency of bare metal server architecture:
- to ensure stable operations, you must run a replica of every server, essentially doubling your CAPEX
- you can never load bare metal servers to full capacity, so some resources are always idle, bloating your OPEX with no real benefit for your business
- even the resources you do use can sit idle 12-16 hours a day, as well as on weekends, holidays and during vacations
- provisioning and configuring new servers takes quite a long time, whether in-house or remotely
- remote servers offer good latency only in the vicinity of their location and can lag noticeably in other parts of the globe, depending on the uplink.
These limitations led to the introduction of various virtualization systems and, with them, virtual machines. This way, a bare metal server's resources can be sliced into multiple smaller parts, and many virtual servers can run atop a single piece of hardware. This dramatically increased the resource efficiency of hosting, yet the volume of resources required to run the virtualization systems themselves remained quite high.
Cloud computing introduction
However, the virtualization of resources helped change the pattern of thinking and remove the constraint of allocating resources within the physical capacity of individual servers. It also helped solve another important problem: overspending on unused resources. The reason is that customers don't rent cloud servers 24/7; they rent certain amounts of resources for certain periods of time.
The cloud computing provider purchases hardware (server boxes, racks, network switches, etc.) en masse and fills its data centers with resources for rent. The customers purchase cloud instances: not the actual hardware, but the guarantee that certain quotas of computing resources (disk space, I/O, bandwidth, CPU time, RAM) will be available to them on request.
None of the cloud customers actually owns any computing resources; they rent them for some time, and when the resources are no longer needed, they are allocated to another customer. This is the basis of high availability and scalability: every customer uses resources only for the time needed and pays only for the resources actually consumed. This is called the Pay-As-You-Go (PAYG) billing model.
Such an approach helped solve all the issues of the dedicated server infrastructure:
- you rent exactly the amount of resources you need
- you pay only for the resources you actually use
- if you need more resources, they are provisioned almost instantly
- when cloud computing resources are no longer used, they are returned to the cloud provider's pool and allocated to other customers
- cloud virtualization architecture allows combining the resources of entire data centers into so-called Availability Zones (AZs), grouped by geographical region
- your products or services can be easily moved between AZs to ensure minimal latency for the majority of your customers at any point in time, or can span several AZs, should the need arise
In short, cloud computing infrastructure is much more cost-efficient, can be scaled up and down quickly, is configured fast and is equally accessible across the globe. These benefits are why the majority of startups, small-to-medium enterprises and global corporations are already using cloud computing services or are in the process of transitioning to the cloud.
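The PAYG economics described above can be illustrated with a back-of-the-envelope calculation. All prices and usage figures below are hypothetical, chosen purely for illustration; real cloud and dedicated-server pricing varies widely by vendor and instance type.

```python
# Illustrative cost comparison: a flat-rate dedicated server vs. a
# pay-as-you-go (PAYG) cloud instance billed only for the hours it runs.
# All rates and usage figures are hypothetical.

def dedicated_monthly_cost(flat_rate: float) -> float:
    """A dedicated server is billed for the full month, idle or not."""
    return flat_rate

def payg_monthly_cost(hourly_rate: float, busy_hours_per_day: float,
                      days_used: int = 22) -> float:
    """A PAYG instance is billed only for the hours it actually runs."""
    return hourly_rate * busy_hours_per_day * days_used

dedicated = dedicated_monthly_cost(flat_rate=200.0)
# A workload busy ~10 hours a day on ~22 working days a month:
cloud = payg_monthly_cost(hourly_rate=0.50, busy_hours_per_day=10)

print(f"dedicated: ${dedicated:.2f}/month")  # dedicated: $200.00/month
print(f"cloud PAYG: ${cloud:.2f}/month")     # cloud PAYG: $110.00/month
```

The gap widens further for the bursty workloads mentioned above: a server that is busy only a few hours a day costs the same flat rate when dedicated, while its PAYG bill shrinks proportionally.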
Types of the cloud
A business can utilize one or several of the cloud computing models:
- Public cloud. The most popular cloud computing approach, offered by vendors like Amazon Web Services, Google Cloud Platform, Microsoft Azure, DigitalOcean, etc. It works exactly as described above, with each customer getting a small slice of the common pool under the same terms. Most of the apps and websites you use are hosted in a public cloud.
- Private cloud. This can be either a segment of a public cloud provisioned for a single customer or an on-prem cloud solution like OpenShift or OpenStack. It serves a single customer and cannot be accessed by anyone from outside. This level of security is highly valued by banks, financial institutions, and scientific or governmental organizations.
- Hybrid cloud. A combination of the previous two models, where sensitive data is securely stored on-prem or in a private cloud, while customer-facing systems are deployed to the public cloud, often behind a CDN like Cloudflare or CloudFront, for maximum scalability and security.
- Multi-cloud. A powerful tool for skilled DevOps engineers: an approach of building modular systems with interchangeable components, so that Google Cloud services can be replaced relatively quickly with Amazon Web Services infrastructure, and so on. This can be very useful for building complex infrastructures for workload-intensive projects but requires top-notch configuration to be efficient.
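The "interchangeable components" idea behind multi-cloud can be sketched in code: the application depends on a vendor-neutral interface, and each provider gets its own implementation behind it. The two backends below are in-memory stand-ins, not real vendor SDK calls; in practice each would wrap the provider's SDK (e.g. boto3 for AWS).

```python
# A minimal sketch of the multi-cloud pattern: code against a
# vendor-neutral interface so a component can be swapped between
# providers. Both backends here are hypothetical in-memory stand-ins.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral interface the application depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FakeS3Store(ObjectStore):
    """Stand-in for an AWS-backed implementation."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

class FakeGCSStore(ObjectStore):
    """Stand-in for a Google Cloud Storage-backed implementation."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code sees only the interface, never the vendor SDK,
    # so switching providers is a one-line change at wiring time.
    store.put("reports/latest", report)

store = FakeS3Store()          # swap to FakeGCSStore() to change vendor
archive_report(store, b"q3 numbers")
print(store.get("reports/latest"))  # b'q3 numbers'
```

The "top-notch configuration" caveat above shows up here too: the abstraction must be designed before the first provider is wired in, or vendor-specific details leak into the application code.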
Regardless of the cloud computing model you choose, there is also the question of the cloud layer to consider.
Cloud computing layers
There are three commonly depicted layers of cloud computing:
- Infrastructure-as-a-Service or IaaS. This layer covers the use of virtual machines, operating systems, libraries, networking, DNS, load balancers, etc. This is the layer where the DevOps engineers from the cloud platform, IT service provider or your IT department work when they provision, configure and manage the infrastructure. This level grants the most capabilities but requires the most expertise to run properly.
- Platform-as-a-Service or PaaS. This layer is used by developers and system administrators who use the platform services to deliver new code or manage the existing infrastructure without looking under the hood. Google App Engine, AWS Lambda serverless computing and AWS CodePipeline are examples of PaaS. Customers can configure these services through a dashboard and have limited control, such as restarting them, but must operate within the preset parameters and limits.
- Software-as-a-Service or SaaS. This layer covers pretty much all consumer software and web apps currently in use: Facebook, Instagram, WhatsApp, Viber, Snapchat are all delivered under the SaaS model. The customer has minimal control over the software configuration, available through in-app settings only.
Some people see this as a progression from “most server management” to “least server management”, and they are partially right. The pinnacle, in that case, would be serverless computing: a service from AWS, GCP or Microsoft Azure where the customers never manage servers at all. They simply upload code to AWS Lambda or Google Cloud Functions and configure when it should run; the cloud vendor does the rest. This is useful for functions that run irregularly, for short periods of time and/or with spiky resource demands, which makes renting standard cloud instances impractical. However, users must understand the specifics of the serverless functions they are going to use and be able to configure them correctly. In this sense, AWS Lambda and its analogs are actually PaaS tools.
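To make the serverless model concrete, here is a minimal Lambda-style function. The `handler(event, context)` signature matches the standard AWS Lambda Python runtime; the event shape and the greeting logic are hypothetical, for illustration only.

```python
# A minimal AWS Lambda-style handler sketch. You deploy only this
# function; the vendor provisions and scales the servers behind it,
# and you pay per invocation rather than per server-hour.
import json

def handler(event, context):
    """Runs only when triggered (HTTP request, queue message, upload...)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally you can invoke it directly; in the cloud, the vendor calls it
# in response to the configured trigger.
response = handler({"name": "cloud"}, None)
print(response["statusCode"])
```

Note what is missing compared to the earlier sections: no server provisioning, no OS, no load balancer; those concerns move entirely to the vendor.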
DevOps services for managed cloud computing
We have mentioned several times already that deep DevOps expertise is required to provision, configure and manage cloud computing cost-effectively. Where can you find such a skill set? A business can take one of three main approaches:
- Find and hire DevOps talent in-house. This approach seems best to many businesses, but it often isn't. Hiring such a specialist carries all the risks of any other recruitment process, and highly skilled DevOps engineers are scarce on the market. You can try to use the cloud yourself, relying on the extensive knowledge base every platform offers, but mastering it without previous experience will take too much of your valuable time.
- Hire specialists from the cloud service provider. Every cloud platform has a large staff of talented DevOps specialists who can help you build and run any infrastructure you need. However, these specialists know their vendor-specific features and tools best and will try to use them wherever possible, which can result in vendor lock-in. The only way to avoid it is to design the cloud infrastructure yourself, which is, once again, hard if you don't have the expertise at hand.
- Hire a team from a Managed Service Provider. There are IT outsourcing companies that provide managed DevOps services. This is the most cost-effective approach to cloud management, as these companies employ top-notch DevOps talent, drawn in by the rich variety of projects and the opportunity to learn the latest technologies and best practices quickly.
Such teams provide affordable IT services, have plenty of ready-made solutions for typical issues and can begin working on your project from day one. The only question is how to find a reliable Managed Service Provider. Obviously, the team you want to work with should be experienced, have positive reviews from previous customers, rank well in international business ratings, etc.
Most importantly, they must be willing and able to provide the services you seek, on time and on budget. Should you find such a company, you will be able to use cloud computing to the fullest. Good luck with this endeavor!