Cloud computing is the **on-demand availability of computer system resources**, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet.
Large clouds often have functions distributed over multiple locations, each of which is a data center. If a server's connection to the user is relatively close, it may be designated an edge server.
Clouds may be limited to a single organization (enterprise clouds) or be available to many organizations (public clouds).
Cloud computing architecture refers to the components and subcomponents required for cloud computing. These components typically consist of a front end platform (fat client, thin client, mobile device), back end platforms (servers, storage), a cloud-based delivery model, and a network (Internet, Intranet, Intercloud). Combined, these components make up a cloud computing architecture.
=== Virtualization
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources.
=== Containerization
Containerization has become a major trend in software development as an alternative or companion to virtualization. It involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure. The technology is quickly maturing, resulting in measurable benefits for developers and operations teams as well as overall software infrastructure.
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.
Managing the lifecycle of containers with orchestration also supports DevOps teams who integrate it into CI/CD workflows. Along with application programming interfaces (APIs) and DevOps teams, containerized microservices are the foundation for cloud-native applications.
Container orchestration is used for the following tasks (see the example after this list):
- Provisioning and deployment
- Configuration and scheduling
- Resource allocation
- Container availability
- Scaling or removing containers based on balancing workloads across your infrastructure
- Load balancing and traffic routing
- Monitoring container health
- Configuring applications based on the container in which they will run
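As an illustration of how such tasks are expressed declaratively, here is a minimal Kubernetes Deployment manifest sketch; the application name, image, and port are placeholder assumptions, not tied to anything in this document:

.deployment.yaml (illustrative sketch)
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical application name
spec:
  replicas: 3                  # scaling: the orchestrator keeps 3 replicas running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # placeholder image
        ports:
        - containerPort: 80
        resources:             # resource allocation
          requests:
            cpu: 100m
            memory: 128Mi
        livenessProbe:         # health monitoring: restart the container if this check fails
          httpGet:
            path: /
            port: 80
----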
When comparing Docker Swarm vs Kubernetes, it becomes apparent that the origins of both platforms have played key roles in shaping their features and communities today.
Docker, realizing the strength of its container technology, decided to build a platform that made it simple for Docker users to begin orchestrating their container workloads across multiple nodes. However, the desire to preserve this tight coupling with Docker arguably limited the platform's extensibility.
Kubernetes, on the other hand, took key concepts from Google's Borg and, from a high-level perspective, made containerization fit into Borg's existing workload orchestration model. This resulted in Kubernetes' emphasis on reliability, sometimes at the cost of simplicity and performance.
* Docker
** Docker is the container technology that allows you to *containerize your applications.*
** Docker is *the core of using other technologies.*
* Docker Compose
** Docker Compose allows configuring and starting *multiple Docker containers.*
** Docker Compose is mostly used as a helper when you want to start multiple Docker containers and don't want to start each one separately with `docker run ...` (see the example after this list).
** Docker Compose is used for *starting containers on the SAME host.*
** Docker Compose can also be used instead of passing all the optional parameters on the command line when building and running a single Docker container.
* Docker Swarm
** Docker Swarm is for *running* and connecting containers on *multiple hosts.*
** Docker Swarm is a *container cluster management and orchestration tool.*
*** It manages containers running on multiple hosts and handles things like scaling, starting a new container when one crashes, networking containers, and so on.
** Docker Swarm is Docker in *production.*
** It is the *native Docker orchestration tool* that is embedded in the Docker Engine.
** The Docker Swarm file, called a stack file, is very similar to a Docker Compose file.
* Kubernetes
** Kubernetes is a *container orchestration tool* developed by Google.
** Kubernetes' goal is *very similar* to that of Docker Swarm.
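To make the Compose/Swarm distinction concrete, below is a minimal, hypothetical docker-compose.yml that starts two containers on the same host; the service names and images are assumptions chosen for illustration:

.docker-compose.yml (illustrative sketch)
[source,yaml]
----
version: "3.8"
services:
  web:                          # hypothetical web front end
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db
  db:                           # hypothetical database used by the web service
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
----

With Docker Compose this file is started on a single host with `docker compose up -d`; essentially the same file, used as a stack file, can be deployed across a Swarm cluster with `docker stack deploy -c docker-compose.yml mystack`.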
In Docker, everything is based on Images. An image is a combination of a file system and parameters.
==== Dockerfile
A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It's a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands, so you don't really have to learn new syntax to create your own Dockerfiles.
.Dockerfile
[source,sh]
----
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -y && \
    apt-get -y install gcc && \
    rm -rf /var/lib/apt/lists/*
----
==== docker build
.docker build
[source,sh]
----
docker build -t ImageName:TagName dir

# Options
# -t        : assigns a tag to the image
# ImageName : the name you want to give to your image
# TagName   : the tag you want to give to your image
# dir       : the directory where the Dockerfile is present
----
.docker build example
[source,sh]
----
docker build -t myimage:0.1 .
----
==== Displaying Docker Images
To see the list of Docker images on the system, you can issue the following command.
.docker images
[source,sh]
----
docker images
----
This command is used to display all the images currently installed on the system.
**Output:**
- TAG: used to logically tag the image.
- IMAGE ID: uniquely identifies the image.
- CREATED: the time elapsed since the image was created.
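A typical listing looks like the following (the values shown are illustrative, not real output):

[source,text]
----
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
myimage      0.1      7a86f8ffcb25   2 hours ago    285MB
ubuntu       16.04    b6f507652425   3 weeks ago    135MB
----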
Docker Hub is a cloud-based registry service that allows you to download Docker images built by other communities. You can also upload your own Docker images to Docker Hub.
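For example, images can be pulled from and pushed to Docker Hub with the following commands (the account name myuser is a placeholder):

.docker pull / docker push
[source,sh]
----
# Download an image from Docker Hub
docker pull ubuntu:16.04

# Tag a local image under your Docker Hub account (myuser is a placeholder)
docker tag myimage:0.1 myuser/myimage:0.1

# Log in and upload the image to Docker Hub
docker login
docker push myuser/myimage:0.1
----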
==== docker logs
To see the logs of a container, use the docker logs command.

.docker logs
[source,sh]
----
docker logs containerID

# containerID : the ID of the container for which you need to see the logs
----
=== Volumes
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
.docker volumes
[source,sh]
----
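# Bind-mounts the host directory /var/www/html into the container at /var/html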
docker run -d --name mycontainer -v /var/www/html:/var/html nginx:latest
----
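The `-v host_path:container_path` form above is a bind mount; a named volume, managed entirely by Docker, can be created and used like this (the volume and container names are placeholders):

.named volume (illustrative)
[source,sh]
----
# Create a named volume managed by Docker
docker volume create mydata

# Mount the named volume into the container at /var/html
docker run -d --name mycontainer2 -v mydata:/var/html nginx:latest

# List and inspect volumes
docker volume ls
docker volume inspect mydata
----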
=== Private registries
You might need to host your own private repositories rather than hosting them on Docker Hub. For this, Docker provides a registry container. Let's see how we can download and run the registry container.
.docker registry
[source,sh]
----
docker run -d -p 5000:5000 --name registry registry:2
----
The following points need to be noted about the above command:
- Registry is the container provided by Docker which can be used to host private repositories.
- The port exposed by the container is 5000. With the -p option, we map it to port 5000 on our localhost.
- The registry:2 tag specifies version 2 of the registry image.
- The -d option runs the container in detached mode, so that the container runs in the background.
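Once the registry container is running, a locally built image can be tagged with the registry address and pushed to it; here the image name and tag reuse the ones from the build example above:

.pushing to the private registry
[source,sh]
----
# Tag the image with the address of the local registry
docker tag myimage:0.1 localhost:5000/myimage:0.1

# Push the image to the private registry
docker push localhost:5000/myimage:0.1

# Pull it back from the registry to verify
docker pull localhost:5000/myimage:0.1
----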