2.2 Summarize virtualization and cloud computing concepts

  • Cloud Models
    • Infrastructure as a Service (IaaS)
    • Platform as a Service (PaaS)
    • Software as a Service (SaaS)
    • Anything as a Service (XaaS)
    • Public
    • Community
    • Private
    • Hybrid
  • Cloud Service Providers
  • Managed Service Provider (MSP) / Managed Security Service Provider (MSSP)
  • On-Premises vs Off-Premises
  • Fog Computing
  • Edge Computing
  • Thin Client
  • Containers
  • Microservices/API
  • Infrastructure as Code
    • Software Defined Network (SDN)
    • Software Defined Visibility (SDV)
  • Serverless Architecture
  • Services Integration
  • Resource Policies
  • Transit Gateway
  • Virtualization
    • Virtual Machine (VM) Sprawl Avoidance
    • VM Escape Protection


Types of Service

What is the cloud?  It’s a concept where we outsource our computing resources to a third party and then connect to it remotely.  Somebody else manages our infrastructure and we don’t have to worry about it (in theory).  How do we pay for the cloud?  How do we receive services?  There are a few general models.

SaaS or Software as a Service is a concept where we pay for the right to use a software application.  Somebody else takes care of writing the software, hosting the software, and backing up the data.  Our only responsibility is to use the software.  The software might be entirely web based or include components that are installed on our computers/phones.  We don’t have to worry about the physical hardware.  Examples of SaaS include Salesforce, Microsoft Exchange Online, and Office 365.  SaaS is typically billed on a per user per month basis.

IaaS or Infrastructure as a Service is a concept where we pay for the right to use different hardware components.  For example, we can rent different server types from Amazon Web Services’ EC2, or we could rent DNS services from Route 53.  IaaS is usually charged on a per device per hour (or per month) basis.  For example, a server might cost $0.35 per hour.  If I rent an EC2 server, it may come with a license for an operating system, such as Windows Server 2019 (for a higher hourly rate), or I could rent it “bare metal” and install my own operating system. 

BYOL (Bring Your Own License) is a concept where we can transfer our operating system licenses to the cloud infrastructure.  This saves us from paying a monthly rental fee for operating system licenses that we already own.  A Windows Server license can cost thousands of dollars per server.

The cloud allows us to mix and match different hardware components so that we can build the type of infrastructure that we require, but we usually have to choose from the hardware combinations that the cloud service provider has.  IaaS allows us to pay for only what we use.  If we use a server for five hours, we pay for five hours (unless the service provider has minimum charges).

Increasingly, vendors of proprietary network equipment such as Cisco and Bomgar sell virtual images of their equipment that can be loaded into the cloud.  Thus, you can build an entire virtual LAN in the cloud complete with servers, routers, firewalls, and load balancers, and pay for what you use, without having to touch any physical infrastructure.

PaaS or Platform as a Service is a hybrid between SaaS and IaaS.  In PaaS, we don’t have to worry about the hardware.  We simply upload the applications we want, and the cloud provisions the necessary hardware to run them.  We are still responsible for configuring the applications and backing up their data.  An example is Amazon EMR, a managed Hadoop service.  PaaS is typically billed on a per hour per resource basis.  For example, we could be billed for each GB of data we store each month, or we could be billed for the processing capacity we use.  We can reduce our costs by using or writing more efficient applications.

DaaS or Desktop as a Service is an offering where an office’s computing infrastructure is stored in the cloud.  It is also known as Desktop Virtualization.  You can think of it like having a monitor, mouse, and keyboard at your desk but no computer.  Your actual computer is in the cloud.  In reality, you have a computer or thin client (a stripped-down computer that runs little more than the remote desktop software).  But your files, software applications, and desktop are located on a cloud server.  You remotely connect to the cloud server via RDP, Citrix, or another type of application.

The main benefits of DaaS

  • Centralized hardware – our computing infrastructure is stored in the cloud.  One server might host desktops for twenty to fifty people.  This reduces the amount of hardware we need to purchase and maintain. 

  • Standardized hardware – since each user needs only a thin client or basic computer, monitor, mouse and keyboard, we can use cheaper standardized hardware.  We don’t need to stock multiple types of devices or spend money on expensive desktops.

  • Security – the devices that users use to connect to the remote desktop won’t store any data, so we don’t have to worry about data leaks if they get lost or stolen.

  • Flexibility – you can log in to your desktop from multiple computers and not lose any data or program sessions.  You will resume work exactly where you left off.

  • Disaster recovery – we can move our workforce to another location and quickly have them back to work.  Users can also work from home.  The computing infrastructure is centralized, and cloud technology allows us to replicate it to multiple zones.

The main disadvantages

  • Standardized hardware – users are forced to use a standard type of computer hardware

  • Internet access – users cannot use DaaS when internet access is unavailable or interrupted, such as when they are on the road.  When latency is too high (poor internet connection quality), the user experience suffers.

The cloud is defined by three concepts

  • Multitenancy – multiple users and customers have access to the same physical infrastructure or software.  When you rent a server from AWS, you are renting a portion of the physical infrastructure in the AWS data center.  You might be renting a virtual server that is hosted on a physical server, and that physical server might have several other virtual servers belonging to other clients.

  • Scalability – we can increase the workload without affecting performance of the application.  We can do this by ensuring that we have enough capacity in our hardware (virtual hardware) for our application to grow.  Scalability relates to the software layer in that the software layer can grow without issues either using the existing hardware or with additional hardware.  A scalable system guarantees that the software will continue to function at a peak load (there is enough hardware available).

  • Elasticity – elasticity ensures that we fit the amount of resources to the demand posed by the software.  Elasticity grows or shrinks the underlying hardware in response to demand from the application. 

Elasticity is more cost effective than scalability because we only pay for what we use, whereas scalability requires us to pay for the maximum amount of resources that we will eventually require.  Elasticity can be difficult to implement if we are not able to predict the amount of resources that will be required or if we are unable to add resources in real time. 

For example, if our application requires 100 servers during regular operation and 200 servers at peak capacity, we can always ensure scalability by having 200 servers.  If we need 101 servers, then we already have capacity.  If we need 102 servers, then we still have capacity.  As the needs of the application grow, the underlying hardware is already available to keep it operating.  The problem is that we are paying for 200 servers even when we are only using 100 of them.

With elasticity, we keep 100 servers until we need 101 servers.  Then we add another server.  When we need 102 servers, we add another server, and so on.  The problem is that the time between when we realize that we need 101 servers and the time that we buy another server is time that the application will perform poorly. 

The best way to implement elasticity is to maintain a buffer zone.  We should think about how rapidly the needs of the application will change and build a buffer zone based on that time.  For example, if we have a buffer of two servers, then when we need 100 servers, we rent 102 servers.  When we need 101 servers, we add another server and now we have 103 servers, and so on.  When the application demand drops, and we only need 100 servers, we shut down one server, and have only 102 servers.
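
Here is a minimal sketch of that buffer-zone logic in Python.  The two-server buffer is just the example value from above; a real system would also smooth out short demand spikes before acting.

  def adjust_capacity(running, demand, buffer=2):
      """Return how many servers to add (positive) or remove (negative)
      so that we always keep a fixed buffer above current demand."""
      target = demand + buffer
      return target - running

  # Demand rises from 100 to 101 servers while we are running 102
  print(adjust_capacity(running=102, demand=101))   # 1  -> rent one more server
  # Demand falls back to 100 while we are running 103
  print(adjust_capacity(running=103, demand=100))   # -1 -> shut one server down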

Cloud Delivery Models

There are different cloud models.

A public cloud is available to the public.  The hardware resources inside a public cloud are shared amongst all customers, which improves efficiency and reduces cost.  Multiple customers may be provided access to the same physical server without realizing it (cloud software should prevent data leaks).

A private cloud is built by one organization for its internal use.  A large organization can use a private cloud to share resources amongst different departments.  For example, a large city can merge the computing resources of its engineering, fire, police, and road repair departments.  Instead of having each department purchase and maintain its own hardware, all the departments pool their resources, resulting in reduced costs.  Each department can rent a portion of the cloud and be charged accordingly.

A community cloud is shared by several organizations that have common requirements, such as a group of hospitals or government agencies that must meet the same security and compliance standards.  The participating organizations share the cost of the infrastructure.

A hybrid cloud is a mix of a public cloud and a private cloud.  A company may decide that some applications are too sensitive to host on a public cloud, or that some applications will not run properly when they are off site but would like to take advantage of the public cloud.  Applications/infrastructure that can run on the public cloud are placed there, and remaining applications/infrastructure are placed on a private cloud.

Infrastructure as Code (IaC) is a concept where we can deploy servers and other infrastructure through software code instead of manually setting them up.  The cloud computing provider physically installs hardware including large servers and storage appliances.  Then they make available virtual “instances” of server types.  We can then write code to deploy the specific instances that we need. 

IaC has some advantages

  • We can deploy infrastructure quickly and automatically

  • We can deploy infrastructure in a standardized manner – that means that there is less room for human error

  • If we are building an application (such as a website) that must scale up and down frequently, we can write code to deploy more infrastructure when we need it and shut it down when we don’t (see the sketch below).  This allows us to pay for only the infrastructure that we need at the time that we need it.
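
As a simple illustration, here is a sketch of Infrastructure as Code using the AWS SDK for Python (boto3).  In practice most teams use a declarative tool such as Terraform or CloudFormation; the AMI ID, region, and instance type below are placeholders, not recommendations.

  import boto3

  # Connect to the EC2 service in one region (placeholder region)
  ec2 = boto3.resource("ec2", region_name="us-east-1")

  # Deploy one small server instance entirely from code
  instances = ec2.create_instances(
      ImageId="ami-0123456789abcdef0",   # placeholder AMI ID - substitute a real one
      InstanceType="t3.micro",
      MinCount=1,
      MaxCount=1,
      TagSpecifications=[{
          "ResourceType": "instance",
          "Tags": [{"Key": "Name", "Value": "iac-demo-server"}],
      }],
  )
  print("Launched:", instances[0].id)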


Connectivity Methods / Relationship Between Local and Cloud Resources

How do we connect to the cloud?  Devices in the cloud might have their own public IP addresses.  An internet connection is the easiest way.  We could connect to a Windows cloud server via Remote Desktop Protocol, or to a Linux server or database host via SSH.  Other types of applications may have web-based interfaces.
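
For example, here is a short sketch of connecting to a Linux cloud server over SSH using the Python paramiko library.  The IP address, username, and key file name are placeholders.

  import paramiko

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

  # Placeholder address, username, and key file
  client.connect("203.0.113.10", username="admin", key_filename="my-cloud-key.pem")

  # Run a command on the cloud server and print its output
  stdin, stdout, stderr = client.exec_command("uptime")
  print(stdout.read().decode())
  client.close()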

What if my cloud resources are vital to the organization or what if I need to move large amounts of data?  I could create a direct connection between the cloud and the local network via a WAN or VPN.  The cloud service provider would need to set up a WAN or VPN connector on their own network so that the two networks can communicate.  With a WAN or VPN, devices in the cloud behave like they are on the local network.  This is the best approach for a corporate cloud.

If you need to move lots of data into the cloud, you can physically ship your storage appliances (or hard drives) to the cloud where they can be copied.  AWS (Amazon Web Services) offers a semi-trailer called the Snowmobile that is full of storage appliances.  They drive it to your office.  You connect it to your network and fill it with data.  Then AWS takes the semi-trailer back to their data center and unloads the data into your account.  The Snowmobile can store up to 100PB of data.

The top cloud providers at the time that I wrote this

  • Amazon Web Services – AWS has by far the most products and services available out of any cloud provider.  It also has the most users and the most data centers.

  • Microsoft Azure – Microsoft Azure is second to AWS but is catching up.  Microsoft Azure integrates well with other Microsoft products such as Azure Active Directory.

The next four cloud providers have some unique features but don’t even come close to the first two.

  • Google Cloud – Google Cloud has some unique APIs and artificial intelligence features built on top of existing Google products

  • Oracle Cloud – provides you with access to Oracle databases

  • IBM Cloud – has access to some different IBM applications including Watson

  • Alibaba Cloud – not very popular but I predict that it will gain users in the coming years

When you choose a cloud service provider, think about

  • Their commitment to security.  How will they keep your data safe?  How will they keep the physical hardware safe?

  • The types of services that they offer.  Do they offer all the services that you require?

  • Does the service provider have the necessary certifications to store your data?  For example, storing healthcare data may require HIPAA compliance.

  • Does the service provider have data centers in all the regions where you need service?

  • What is the service going to cost?

There are three options for where we can put our “cloud”

  • On-Premises – We can build a data center in our office.  It can be a separate room or a separate building.  A good data center has multiple internet connections to manage incoming and outgoing traffic, battery backup for power, and redundant power supplies.  It may also have physical security.  Before we build a data center, we must consider

    • Whether we have enough equipment to justify the cost of the construction

    • The cost of cooling the data center

    • The cost of powering the data center

    • Whether we have dedicated staff to operate the data center

    • Whether we have adequate internet connections to support the data center

    • Whether the function of the infrastructure and the data is too sensitive to outsource to a third party

  • Colocation – If we can’t justify the cost of an on-premises data center, we might use a colocation facility.  A colocation facility is a data center built by another organization that rents out portions of it to other customers.  The colocation provider may charge a flat rate per square foot or per rack unit.  It may provide internet connectivity or may require us to provide our own.  We are responsible for supplying, installing, and maintaining all of our equipment at the colocation facility.

  • Cloud – The cloud is where we outsource our infrastructure to a third party.  We don’t have to worry about the infrastructure, internet, electricity, or physical devices.

Transit Gateway

When we have infrastructure both in the cloud and on premises, we connect them through a device known as a transit gateway.  We can also connect virtual clouds in different regions to each other through the transit gateway.

On the cloud side, we simply need to configure some settings in our account.

On the local side, we configure VPN or SD-WAN settings inside our router.  We also add some routes so that our router knows to send traffic to the cloud.
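
On AWS, for example, the cloud-side configuration can be scripted.  Here is a boto3 sketch that creates a transit gateway and attaches one VPC to it; the VPC and subnet IDs are placeholders, and a real deployment would also attach a VPN or Direct Connect gateway and add the routes.

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Create the transit gateway itself
  tgw = ec2.create_transit_gateway(Description="Hub for our VPCs and on-premises network")
  tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

  # Attach one of our virtual private clouds (placeholder IDs)
  ec2.create_transit_gateway_vpc_attachment(
      TransitGatewayId=tgw_id,
      VpcId="vpc-0123456789abcdef0",
      SubnetIds=["subnet-0123456789abcdef0"],
  )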

Managed Service Provider (MSP) / Managed Security Service Provider (MSSP)

A Managed Service Provider is a third party that manages your IT infrastructure.  Some areas that an MSP can help you with

  • Network infrastructure installation, monitoring, and configuration

  • Server hardware

  • Server software and licensing

  • Phone system installation and configuration

  • Help desk

  • Data backups

  • Cloud computing

  • Cybersecurity

When the goals of the internal IT department stop aligning with the goals of the organization, you might be tempted to outsource some or all your IT functions to a third party (or to multiple third parties).  An MSP can be helpful when your organization is small and cannot afford to hire a subject matter expert in each area. 

For example, your organization can only afford to have one IT person, who must do everything (help desk, new hardware installation, network management, cyber security, data backups, telephone, etc.), whereas the MSP may have dozens of experts in each field.  Obviously, they won’t all be working for you full time, but you are able to leverage their expertise.  An MSP that only provides security services might be called a Managed Security Service Provider or MSSP.

When negotiating a contract with an MSP

  • Define a clear scope of work that covers all your needs.  The difficulty with outsourcing is that a good in-house IT person is willing to fix any problem that comes up.  An MSP won’t do something that isn’t in the scope of work.  If you run into an issue that the MSP won’t fix, then you will have to find a second vendor.

  • Define a clear Service Level Agreement with penalties for violating it.  The Service Level Agreement ensures that the MSP will respond to and resolve critical incidents within a short period or pay a severe financial penalty.  The response time depends on the needs of your organization and the severity of the incident.

  • The MSP should have a good understanding of your business and your industry.

  • You should have oversight and visibility into everything that the MSP does.  For example, if the MSP manages your network, you should continue to maintain full admin rights to the network devices.  You should also continue to maintain up-to-date configurations and logical diagrams of all network devices. 

  • If you must fire the MSP, you should be able to immediately take back control of your entire infrastructure.  Your infrastructure and processes should be so well documented that a new in-house IT person or MSP can quickly take over.

The best model is when there is an in-house IT person who is supplemented by the expertise of an MSP.  The MSP can handle

  • Cyber security

  • WAN and network management (including purchasing new internet connections)

  • Cloud services

  • Purchasing new hardware (the MSP may be able to get a better rate than you)

The local IT person is there to provide oversight, report issues to the MSP, and fix local issues that the MSP can’t.

Fog Computing & Edge Computing

We talked about the cloud and how it is centralized.  If your office is in Atlanta, but your data center is in New York, then data passing from your computer to the server must travel a long distance.  The time it takes data to travel between the two locations is called latency.  It can be detrimental to the user experience, and it can cause issues with some applications.

It is more efficient to build a server that is really good at one task than a server that is okay at many.  Generally, servers are built in one of two configurations: servers that can do a lot of thinking, and servers that can store a lot of data.  The latter might be called storage appliances.

When we design our infrastructure, and when we are thinking about where we want to put our storage and servers, we need to think about how our data will flow through the system.

Consider a large retail store like Home Depot.  Every night after the store closes, each store uploads its sales data to the cloud.  The servers process all the data and generate reports, which management looks at the next morning.  It makes sense to have all the processing power in the central location and not in the store.

Now consider an engineering firm.  The engineering firm creates drawings for a building and then renders them in three dimensions.  They must upload each drawing to the central server, have the central server render it, and then download the completed file.  It might make more sense to keep the processing power in the office.

Here is a more extreme example.  Say you have a smart thermostat.  The thermostat measures the room temperature and sends it to a cloud server.  The cloud server decides whether to raise or lower the temperature and sends a signal back to the thermostat.  The thermostat uses the information to raise or lower the temperature.  It makes more sense to have the decision-making technology inside the local thermostat.  Connecting the thermostat to the cloud is still necessary so that the user can monitor and control it.

This brings us to a new concept called Edge Computing.  It applies more to Internet of Things devices than to standard computers.  The idea is that since these devices are gathering a substantial amount of raw data, instead of uploading it to the cloud, each local device can process it locally and only upload the results.

Edge computing reduces the latency between the input and the output.  It also reduces the load on the main internet connection.
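
To make the thermostat example concrete, here is a small Python sketch of the edge-computing idea: the decision is made on the device, and only an hourly summary is sent to the cloud.  The temperature thresholds and the summary format are invented for illustration.

  readings = []

  def on_temperature_reading(temp_c, setpoint_c=21.0):
      """Runs on the thermostat itself - no round trip to the cloud."""
      readings.append(temp_c)
      if temp_c < setpoint_c - 0.5:
          return "HEAT_ON"
      if temp_c > setpoint_c + 0.5:
          return "HEAT_OFF"
      return "NO_CHANGE"

  def hourly_summary():
      """Only this aggregate is uploaded, not every raw reading."""
      return {"samples": len(readings), "average": sum(readings) / len(readings)}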

When the data is processed on a LAN device that is not the actual sensor (for example, we might install a gateway in our LAN that connects to each of our smart thermostats), we might call this Fog Computing, also known as Fogging or Fog Networking.

The military is taking fog computing one step further and developing a Disruption Tolerant Mesh Network.  The idea is that each device can talk directly to devices nearby without having to go through a central server.  Since there is no central location, even if part of the network is damaged, the rest of it can continue operating.

For more information on Fog Computing, consult NIST Special Publication 500-325.

Containers

I mentioned DaaS or Desktop as a Service, where you can store your entire desktop in the cloud and connect to it remotely.  We are going to take this idea one step further.  Let’s say that instead of us giving each user access to an entire remote desktop, we give them access to just one remote application.

We can virtualize just the application.  We can make it so that a user can open Microsoft Word from their start menu for example, and Microsoft Word opens like normal, but in reality, that application is running on a cloud server.  And we can do that for every application that the user has.  So, each application is running on a remote server, but it appears to be running on the local machine. 

What’s the point?  Why bother?  Why not install the applications on each user machine?  Well, if the applications are central, it is easy to update them.  We don’t need to give users powerful computers.  And we can store the data somewhere safe.

To accomplish this, we create a container for each application.  A container is a user space that contains the application, external libraries, and other configuration files – basically, everything that the application needs to run.

Multiple containers can run on the same machine and operating system.  So, we don’t need to create a separate virtual machine for each user.

The most popular containerization application is called Docker.  Docker can take an application and package it into a container that can run on Windows, Linux, or macOS with very little overhead.  Docker Desktop was free for all users until August 2021, when larger businesses were required to purchase a subscription.
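
Here is a minimal sketch using the Docker SDK for Python (the docker package), assuming Docker is installed and running on the machine.  It pulls a public image and runs a throwaway container.

  import docker

  client = docker.from_env()

  # Run a short-lived container and capture its output
  output = client.containers.run(
      "python:3.12-slim",
      ["python", "-c", "print('hello from inside a container')"],
      remove=True,   # delete the container when it exits
  )
  print(output.decode())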

Microservices/API

We can take the containerization one step further.  Let’s say that our application is large, with many functions, and each user only needs a few of them.  We can containerize each function separately.  This is known as microservices.  For example, if we have an accounting program, it could have the following functions: accounts payable, invoicing, accounts receivable, payroll, report generation, and inventory.  We don’t need to containerize the whole application if each user needs access to only a few functions.

To successfully implement microservices, we need support from the software development team.  Microservices are helpful because each function in the application can be updated separately and can scale separately.  If one function becomes more popular, we can scale its hardware efficiently.  We don’t have to worry about scaling the entire application.  And we can write each function in a different language.

We might talk to a microservices application through an API.
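
Here is a sketch of what one microservice might look like: the invoicing function from the accounting example, exposed as a small HTTP API using Python and Flask.  The route and the sample data are invented for illustration.

  from flask import Flask, jsonify

  app = Flask(__name__)

  # In a real system this data would live in a database owned by this service
  INVOICES = {"1001": {"customer": "Acme", "amount": 250.00}}

  @app.route("/invoices/<invoice_id>")
  def get_invoice(invoice_id):
      invoice = INVOICES.get(invoice_id)
      if invoice is None:
          return jsonify({"error": "not found"}), 404
      return jsonify(invoice)

  if __name__ == "__main__":
      app.run(port=5000)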

Serverless Architecture

So, this whole containerization idea brings us back to the cloud delivery model.  If we look at AWS for example, we can see an offering called Lambda.  What you can do with Lambda is take your code or your container and upload it to the “cloud”.  The code just runs, and you are billed for the resources that it uses.  You don’t have to worry about setting up servers or storage.  The “cloud” scales your infrastructure up and down as required.  We can call this serverless architecture.

Now don’t get it wrong – there are still servers.  The physical hardware will never go away.  It’s just that we don’t have to think about it anymore.  We write the code and the software layer figures out how to deliver it.

We can integrate Lambda with many other services.  An action taken by a user can trigger a piece of code on Lambda to execute automatically.
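
A Lambda function in Python is just a handler with a specific signature.  Here is a minimal sketch; the event contents depend on whichever service triggers the function.

  def lambda_handler(event, context):
      # Lambda passes in the triggering event and some runtime context
      name = event.get("name", "world")
      return {
          "statusCode": 200,
          "body": f"Hello, {name}!",
      }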

When there are many services, having to log in to a separate control panel to manage each one becomes cumbersome.  We can integrate most cloud services with an approach called Service Integration and Management, or SIAM.  You will see many application providers boast about controlling the application through a “single pane of glass”.  This is what they mean.  You can log in to one place and see everything.

Software Defined Network (SDN)

Within our local office or within our cloud, we can implement an SDN or Software Defined Network, which is a concept that allows a network to become virtualized. 

In a software-defined network, we don’t have to worry as much about the physical infrastructure.  In a traditional network, each network device must be programmed separately, and each network device makes independent decisions about how to forward traffic.  In an SDN, control of the network is separate from the physical infrastructure.

We create a set of rules that the software then implements across the entire network.

We can think of the SDN as a set of layers

  • Application Layer – the application layer contains the rules that manage the network and forward traffic.  We create rules in the application layer.

  • Control Layer – the control layer connects the application layer to the infrastructure layer.  The connection between the controller and the application is called the Northbound interface.  The connection between the controller and the infrastructure layer is called the Southbound interface.

    The controller takes information from the application layer and translates it into the actual commands that the infrastructure layer will use to forward traffic.

  • Infrastructure Layer – the infrastructure layer contains the physical devices that are connected.  These devices forward traffic based on information given to them by the control layer.  The network’s actual capacity is limited to what the infrastructure layer can provide. 

  • Management Plane – the management plane contains the configuration information for the network.  It is separate from the plane that contains the data being forwarded.

  • Data Plane – the data plane contains the data that the network is forwarding.

Traffic moving up from the infrastructure layer to the application layer is considered moving “north” while traffic moving from the application layer down to the infrastructure layer is moving “south”.  Traffic moving between devices is considered moving East-West (i.e. from server to server).


Configuration of network devices and routing/traffic decisions happen on a software layer instead of a hardware layer. 

Software Defined Networking is supposed to operate regardless of the device manufacturer.  SDN allows an organization to purchase and configure devices regardless of the vendor.  The SDN software may translate the virtual (software-based) configuration into hardware configuration for each device.

In reality

  • All networks require physical components

  • All physical devices, such as servers, must physically connect to network devices

  • Dedicated network hardware is required (a computer server cannot operate as a switch)

  • The software defined network only operates as well as the underlying hardware.  An administrator could create a 10Gbps switch port in the software, but if the physical network switch supports only 10/100 Mbps, then that will be the maximum speed.

The configurations created on the software layer must be passed to the hardware layer to be executed.

When we create a cloud environment, we don’t have to worry about where each server is physically located.  We simply deploy the servers we need, configure them, set IP addresses and subnets, and create firewall rules.  The cloud software takes care of the rest. 
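
To give a feel for what programming the network looks like, here is a hypothetical sketch of pushing a rule to an SDN controller through its northbound REST API using Python.  The URL, credentials, and JSON fields are invented for illustration; every controller has its own interface, so consult its documentation for the real one.

  import requests

  rule = {
      "name": "block-telnet",
      "match": {"protocol": "tcp", "dst_port": 23},
      "action": "drop",
  }

  response = requests.post(
      "https://sdn-controller.example.local/api/flows",   # placeholder endpoint
      json=rule,
      auth=("admin", "admin"),        # placeholder credentials
      verify=False,                   # lab only - use proper certificates in production
  )
  print(response.status_code)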

Software Defined Visibility (SDV)

Let’s keep in mind that in a traditional network, we can put a firewall at the network perimeter and filter all the traffic passing through it.  Once we’ve built this Software Defined Network, we need a way to monitor it, even though much of the traffic is virtualized or encapsulated inside containers.

This is where Software Defined Visibility or SDV comes in.  We need special software running within the Software Defined Network to capture and monitor the traffic.  Essentially, this is a virtual next-generation firewall, monitoring tool, or SIEM.

Virtualization

Last, we are going to look at virtualization.  A cloud service provider builds out their infrastructure using large servers and storage appliances.  They run an application on each server that splits it into multiple virtual servers, which they rent to each customer. 

In our organization, we can do the same thing.  Instead of having many little physical servers, we can buy one big server and split it into many virtual servers.  Each virtual server can do the same job as a small physical server, but we can reduce our hardware by up to 90%.  We do this through the use of a hypervisor.

A hypervisor allows a user to run multiple virtual servers on a single physical server.  The hypervisor allocates hardware resources to each virtual server.  There are two types of hypervisors

  • Type I – runs directly on the system hardware.  This is known as a bare metal hypervisor.  Examples include Microsoft Hyper-V and VMware ESXi.  We start off with a server that has no operating system.  We first install the hypervisor, then we create the virtual servers that we want, and then we install the desired operating system on each one.

  • Type II – runs on top of a host operating system.  Examples include Oracle VirtualBox and VMware Workstation.

VM Sprawl Avoidance

Since it is easy to create new virtual machines without physically installing any hardware, we could end up with a problem.  Virtual Machine Sprawl is a concept where too many virtual machines are created, and nobody is keeping track of them.  An organization should keep track of each virtual machine, and its purpose.  It should verify – on a regular basis – that each virtual machine is still in use and terminate those that aren’t.
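
One way to keep track of virtual machines is to inventory them automatically.  Here is a sketch that lists the VMs on a single host, assuming a Linux host running KVM with the libvirt Python bindings installed; a real inventory would also record the owner and purpose of each machine.

  import libvirt

  conn = libvirt.open("qemu:///system")
  for dom in conn.listAllDomains():
      state = "running" if dom.isActive() else "stopped"
      print(f"{dom.name():30} {state}")
  conn.close()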

VM Escape Protection

The other problem that virtual machines give us is that a malicious program running inside a virtual machine could try to escape from its virtual machine and attack the hypervisor or another virtual machine on the same host.  In theory, an application or operating system running inside a virtual machine thinks that it is running on real hardware.  It doesn’t know anything about the virtualization.

Every hypervisor application is prone to software bugs that could allow a malicious person to escape.  It is much worse in the cloud because one customer could try to access the files of another customer.  Cloud service providers constantly monitor and patch their systems to mitigate this type of attack, and the risk is low.  If it is a risk that you cannot tolerate, then you must use dedicated hardware.