= Cloud !
Apostolos rootApostolos@swarmlab.io
// Metadata:
:description: IoT Introduction to the Cloud
:keywords: Cloud, swarm
:data-uri:
:toc: right
:toc-title: Table of contents
:toclevels: 4
:source-highlighter: highlight
:icons: font
:sectnums:
{empty} +
== Cloud - Intro
Cloud computing is the **on-demand availability of computer system resources**, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet.
Large clouds, predominant today, often have functions distributed over multiple locations from central servers; a server that is relatively close to the user may be designated an edge server.
Clouds may be limited to a single organization (enterprise clouds), or be available to many organizations (public cloud).
Cloud computing relies on sharing of resources to achieve coherence and economies of scale.
image:./Cloud_computing.svg.png[alt="Cloud_computing"] +
From Wikipedia, the free encyclopedia +
https://en.wikipedia.org/wiki/Cloud_computing[^]
=== Cloud Computing Tutorial for Beginners
* Cloud Computing Tutorial for Beginners
+
video::RWgW-CgdIk0[youtube]
== Cloud computing architecture
Cloud computing architecture refers to the components and subcomponents required for cloud computing. These components typically consist of a front-end platform (fat client, thin client, mobile device), back-end platforms (servers, storage), a cloud-based delivery model, and a network (Internet, Intranet, Intercloud). Combined, these components make up cloud computing architecture.
=== Virtualization
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources.
=== Containerization
Containerization has become a major trend in software development as an alternative or companion to virtualization. It involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure. The technology is quickly maturing, resulting in measurable benefits for developers and operations teams as well as overall software infrastructure.
=== Virtual Machines vs Docker Containers
- A container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it.
Docker is the service that runs multiple containers on a machine (node), which can be a virtual machine or a physical machine.
- A virtual machine is an entire operating system (which normally is not lightweight).
* Virtual Machines vs Docker Containers
+
video::TvnZTi_gaNc[youtube]
==== Docker Containers
image:./container-what-is-container.png[alt="Container",align="center",width=550,height=550]
==== Virtual Machines
image:./container-vm-whatcontainer_2.png[alt="VirtualMachine",align="center",width=550,height=550]
== Orchestration
Container orchestration automates the deployment, management, scaling, and networking of containers. Enterprises that need to deploy and manage hundreds or thousands of Linux® containers and hosts can benefit from container orchestration.
Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it. And microservices in containers make it easier to orchestrate services, including storage, networking, and security.
Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment. They make it possible to run multiple parts of an app independently in microservices, on the same hardware, with much greater control over individual pieces and life cycles.
Managing the lifecycle of containers with orchestration also supports DevOps teams who integrate it into CI/CD workflows. Along with application programming interfaces (APIs) and DevOps teams, containerized microservices are the foundation for cloud-native applications.
Container orchestration is used for the following tasks (a short Docker Swarm sketch follows this list):
- Provisioning and deployment
- Configuration and scheduling
- Resource allocation
- Container availability
- Scaling or removing containers based on balancing workloads across your infrastructure
- Load balancing and traffic routing
- Monitoring container health
- Configuring applications based on the container in which they will run
- Keeping interactions between containers secure
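
As a concrete illustration of several of these tasks (deployment, scaling, load balancing), here is a minimal Docker Swarm sketch; the service name web and the nginx image are only examples:
.orchestration sketch (Docker Swarm)
[source,sh]
----
# initialise a single-node swarm on this machine (the manager)
docker swarm init
# deploy a service with 3 replicas; the swarm schedules the containers
# across the available nodes and load-balances port 8080
docker service create --name web --replicas 3 -p 8080:80 nginx
# scale the service up or down depending on the workload
docker service scale web=5
# check where the replicas run and whether they are healthy
docker service ps web
----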
=== Kubernetes vs Docker Swarm
* Kubernetes vs Docker Swarm
+
video::FmrAGliHvzQ[youtube]
=== Technical Comparisons
image:./1-2-683x1024.png[alt="dockerSwarmVsKubernetes"] +
image:./2-1-683x1024.png[alt="dockerSwarmVsKubernetes"] +
=== Conclusion
When comparing Docker Swarm vs Kubernetes, it becomes apparent that the origins of both platforms have played key roles in shaping their features and communities today.
Docker, realizing the strength of its container technology, decided to build a platform that made it simple for Docker users to begin orchestrating their container workloads across multiple nodes. However, their desire to preserve this tight coupling can be said to have limited the extensibility of the platform.
Kubernetes, on the other hand, took key concepts from Google's Borg and, from a high-level perspective, made containerization fit into that platform's existing workload orchestration model. This resulted in Kubernetes' emphasis on reliability, sometimes at the cost of simplicity and performance.
=== Popularity of searches for each platform
image:./Screen-Shot-2018-01-09-at-5.59.05-PM-700x410.png[alt="dockerSwarmVsKubernetes"] +
Origin:
https://www.nirmata.com/2018/01/15/orchestration-platforms-in-the-ring-kubernetes-vs-docker-swarm
=== Short answer
.Info
[NOTE]
====
* Dockerfootnote:disclaimer[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes[origin info]]
** Docker is the container technology that allows you to *containerize your applications.*
** Docker is *the core of using other technologies.*
* Docker Compose
** Docker Compose allows configuring and starting *multiple Docker containers.*
** Docker Compose is mostly used as a helper when you want to start multiple Docker containers and don't want to start each one separately using docker run ....
** Docker Compose is used for *starting containers on the SAME host.*
** Docker Compose can also replace the long list of optional parameters when building and running a single Docker container.
* Docker Swarm
** Docker swarm is for *running* and connecting containers on *multiple hosts.*
** Docker swarm is a *container cluster management and orchestration tool.*
*** It manages containers running on multiple hosts and does things like scaling, starting a new container when one crashes, networking containers ...
** Docker swarm is docker in *production.*
** It is the *native docker orchestration tool* that is embedded in the Docker Engine.
** The Docker Swarm file, called a stack file, is very similar to a Docker Compose file.
* Kubernetes
** Kubernetes is a *container orchestration tool* developed by Google.
** Kubernetes' goal is *very similar* to that of Docker Swarm (a short command-level comparison follows this note).
====
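A minimal command-level sketch of the differences above; the stack name mystack and the Kubernetes manifest app.yml are placeholders:
.docker run vs docker-compose vs docker stack vs kubectl
[source,sh]
----
# a single container on a single host
docker run -d -p 8080:80 nginx
# multiple containers on the SAME host, described in docker-compose.yml
docker-compose up -d
# multiple containers across MULTIPLE hosts (Docker Swarm),
# reusing a Compose-style file as a stack file
docker stack deploy -c docker-compose.yml mystack
# roughly the equivalent step on a Kubernetes cluster,
# using a Kubernetes manifest instead of a Compose file
kubectl apply -f app.yml
----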
.Update
[NOTE]
====
Docker supports https://github.com/docker/compose-on-kubernetes[docker stack deploy --orchestrator=kubernetes]; see the https://docs.docker.com/engine/reference/commandline/stack_deploy/#options[options].
====
== Docker
=== Images
In Docker, everything is based on Images. An image is a combination of a file system and parameters.
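
For example, pulling an image and looking at its layers and stored parameters makes this concrete; ubuntu:16.04 here simply matches the Dockerfile example below:
.inspect an image
[source,sh]
----
# download an image from Docker Hub
docker pull ubuntu:16.04
# list the filesystem layers the image is built from
docker history ubuntu:16.04
# show the stored parameters (environment, default command, ...)
docker image inspect ubuntu:16.04
----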
==== Dockerfile
A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It's a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands. This means you don't really have to learn new syntax to create your own Dockerfiles.
.Dockerfile
[source,dockerfile]
----
FROM ubuntu:16.04
# avoid interactive prompts from apt during the build
ENV DEBIAN_FRONTEND noninteractive
# install gcc and clean the apt cache to keep the image small
RUN apt-get update -y && \
    apt-get -y install gcc && \
    rm -rf /var/lib/apt/lists/*
----
==== docker build
.docker build
[source,sh]
----
# Syntax
docker build -t ImageName:TagName dir
# Options
#   -t        : tag to give to the image
#   ImageName : the name you want to give to your image
#   TagName   : the tag you want to give to your image
#   dir       : the directory where the Dockerfile is located
----
.docker build example
[source,sh]
----
docker build -t myimage:0.1 .
----
==== Displaying Docker Images
To see the list of Docker images on the system, you can issue the following command.
.docker images
[source,sh]
----
docker images
----
This command is used to display all the images currently installed on the system.
**Output columns:**
- REPOSITORY − The repository the image belongs to.
- TAG − This is used to logically tag images.
- IMAGE ID − This is used to uniquely identify the image.
- CREATED − The time since the image was created.
- SIZE − The size of the image.
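
If you only need some of these columns, the listing can also be formatted or filtered; the template below is just one possible example:
.docker images --format
[source,sh]
----
# print repository, tag, ID and size, one image per line
docker images --format "{{.Repository}}:{{.Tag}}  {{.ID}}  {{.Size}}"
# list only the images of a single repository
docker images ubuntu
----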
==== Removing Docker Images
The Docker images on the system can be removed via the docker rmi command.
.docker rmi
[source,sh]
----
# Syntax
docker rmi ImageID
# Options
#   ImageID : the ID of the image to remove
----
==== Docker Hub
Docker Hub is a registry service on the cloud that allows you to download Docker images that are built by other communities. You can also upload your own Docker-built images to Docker Hub.
To run the Apache web server (the official httpd image), you need to run the following command:
.run docker image from Docker Hub
[source,sh]
----
docker run -p 8080:80 httpd
# Notes on the command above:
#   httpd : the image we want to download from Docker Hub and run on our machine
#   -p    : maps port 80 inside the container to port 8080 on the host,
#           so that we can reach the containerized web server
----
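To upload one of your own images to Docker Hub, the usual sequence is to log in, tag the image with your Docker Hub username and push it; youruser below is a placeholder for your own account name:
.push your own image to Docker Hub
[source,sh]
----
# authenticate against Docker Hub
docker login
# tag a local image with your Docker Hub namespace
docker tag myimage:0.1 youruser/myimage:0.1
# upload the tagged image
docker push youruser/myimage:0.1
----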
=== Containers
Containers are instances of Docker images that can be run using the Docker run command. The basic purpose of Docker is to run containers.
==== Running a Container
Running of containers is managed with the docker run command. To run a container in interactive mode, use the -it flags as in the example below.
.run docker image
[source,sh]
----
docker run -it myimage /bin/bash
----
==== Listing of Containers
One can list the containers on the machine via the docker ps command. By default this command returns only the currently running containers.
.docker ps
[source,sh]
----
# list running containers; add -a to also include stopped ones
docker ps
----
==== Display the running processes of a container
With this command, you can see the top processes running within a container.
.docker top
[source,sh]
----
# Syntax
docker top ContainerID
# Options
#   ContainerID : the ID of the container whose processes you want to see
----
==== Stop a running container
This command is used to stop a running container.
.docker stop
[source,sh]
----
# Syntax
docker stop ContainerID
# Options
#   ContainerID : the ID of the container to stop
----
==== Attach to a running container
This command is used to attach to a running container.
.docker
[source,sh]
----
# Syntax
docker attach ContainerID
# Options
#   ContainerID : the ID of the container to attach to
----
==== Delete container
This command is used to delete a container.
.docker rm
[source,sh]
----
# Syntax
docker rm ContainerID
# Options
#   ContainerID : the ID of the container to remove
----
==== Container Logging
Logging is also available at the container level.
.docker logs
[source,sh]
----
# Syntax
docker logs ContainerID
# Parameters
#   ContainerID : the ID of the container whose logs you want to see
#   (add -f to follow the log output continuously)
----
=== Volumes
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
.docker volumes
[source,sh]
----
# -v <host path>:<container path> bind-mounts the host directory into the container
docker run -d --name mycontainer -v /var/www/html:/var/html nginx:latest
----
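The example above bind-mounts a host directory into the container. A named volume, which Docker creates and manages itself, would look like the following sketch; the names myvolume and mycontainer2 are illustrative:
.docker named volume
[source,sh]
----
# create a named volume managed by Docker
docker volume create myvolume
# mount the named volume instead of a host path
docker run -d --name mycontainer2 -v myvolume:/usr/share/nginx/html nginx:latest
# inspect where Docker stores the volume's data
docker volume inspect myvolume
----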
=== Repositories
You might need your own private repositories that you do not want to host on Docker Hub. For this, Docker provides a registry container. Let's see how we can download and run it.
==== Create
.docker registry
[source,sh]
----
docker run -d -p 5000:5000 --name registry registry:2
# Notes on the command above:
#   registry:2 : the official registry image, version 2, used to host
#                private repositories
#   -p         : the registry listens on port 5000 inside the container,
#                so we map it to port 5000 on localhost
#   --name     : gives the container the name "registry"
#   -d         : runs the container in detached mode, i.e. in the background
----
==== Push
Tag the image with the address of the private registry, then use the docker push command to upload it.
.docker push
[source,sh]
----
# tag the local image with the registry address, then push it
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
----
==== Pull
Use the following docker pull command to pull the image from our private repository.
.docker pull
[source,sh]
----
docker pull localhost:5000/myimage
----
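To verify that the image really is stored in the private registry, you can query the registry's HTTP API; both endpoints below are part of the standard Docker Registry v2 API:
.check the private registry
[source,sh]
----
# list the repositories stored in the local registry
curl http://localhost:5000/v2/_catalog
# list the tags available for myimage
curl http://localhost:5000/v2/myimage/tags/list
----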
:hardbreaks:
{empty} +
{empty} +
{empty}
:!hardbreaks:
'''
.Reminder
[NOTE]
====
:hardbreaks:
Caminante, no hay camino,
se hace camino al andar.
Wanderer, there is no path,
the path is made by walking.
*Antonio Machado* Campos de Castilla
====