### Relevant Info

For information regarding the tools used for this virtual lab, with useful relevant links, please read [README.md](https://git.swarmlab.io:3000/cs171027/galera-swarm-lxc-ansible/src/branch/master/README.md)

For information regarding the initial installation of all the necessary components, please read [INSTALL.md](https://git.swarmlab.io:3000/cs171027/galera-swarm-lxc-ansible/src/branch/master/INSTALL.md)

### General command-line information for virtual lab usage

Command for checking the docker node status (run by default from inside the manager node, which is the same as the host machine): ```docker node ls```

Command for attaching to an LXC container (e.g. to 'worker1'): ```lxc-attach --name worker1```

One possible limitation of the deployment process with docker swarm is that static IP assignments [are missing in docker swarm](https://forums.docker.com/t/docker-swarm-1-13-static-ips-for-containers/28060/13). The deployment is therefore based on overlay networks, whose inclusion in the service deployment adds a degree of environmental complexity to the Galera cluster setup, especially since we use a [custom stack deployment](https://git.swarmlab.io:3000/cs171027/galera-swarm-lxc-ansible/src/branch/master/stack.yaml) for the ```mariadb-galera-swarm``` image.

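Since static IPs are unavailable, the stack file has to declare a swarm-scoped overlay network that the Galera services attach to. A minimal sketch of what such a section looks like in compose format (the network name ```galera_net``` and the service layout are illustrative assumptions, not the actual contents of ```stack.yaml```):

```yaml
# Hypothetical sketch, not the actual stack.yaml: the Galera services attach
# to a user-defined overlay network instead of relying on static IPs.
version: "3.7"
services:
  anode:
    image: colinmollenhour/mariadb-galera-swarm:latest
    networks:
      - galera_net        # illustrative network name
networks:
  galera_net:
    driver: overlay       # swarm-scoped virtual network spanning all nodes
```
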
For checking the service status, the following command is suitable; example output is shown below:

```docker service ls```

```
ID             NAME          MODE         REPLICAS   IMAGE                                          PORTS
zwueb8s59c0j   stack_anode   replicated   1/1        colinmollenhour/mariadb-galera-swarm:latest
cl7qlf0hmntp   stack_bnode   replicated   1/1        colinmollenhour/mariadb-galera-swarm:latest
```

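A small sketch of how the REPLICAS column can be checked from a script; the sample below reuses the captured output above, and on the manager node the live command (```docker service ls```) would be piped into the same awk program instead:

```shell
# Parse `docker service ls`-style output and flag services whose running
# replica count does not match the desired count (REPLICAS column "x/y").
# The sample text stands in for the live command output.
sample='ID             NAME          MODE        REPLICAS  IMAGE
zwueb8s59c0j   stack_anode   replicated  1/1       colinmollenhour/mariadb-galera-swarm:latest
cl7qlf0hmntp   stack_bnode   replicated  1/1       colinmollenhour/mariadb-galera-swarm:latest'

report=$(printf '%s\n' "$sample" | awk 'NR > 1 {
  split($4, r, "/")                      # r[1] = running, r[2] = desired
  status = (r[1] == r[2]) ? "OK" : "DEGRADED"
  printf "%s: %s (%s)\n", $2, status, $4
}')
echo "$report"
```
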
For checking the status of the Linux Containers, use the following command: ```lxc-ls --fancy```

```
NAME     STATE    AUTOSTART  GROUPS  IPV4                                  IPV6
worker1  RUNNING  0          -       10.0.3.100, 172.17.0.1, 172.18.0.1    -
worker2  RUNNING  0          -       10.0.3.101, 172.17.0.1, 172.18.0.1    -
```

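In the same spirit, the STATE column of ```lxc-ls --fancy``` can be checked in a script; the sample stands in for the live output (on the host, pipe ```lxc-ls --fancy``` into the same awk program):

```shell
# Flag any LXC container whose STATE column is not RUNNING.
# The sample text stands in for the live `lxc-ls --fancy` output.
sample='NAME     STATE    AUTOSTART  GROUPS  IPV4                                 IPV6
worker1  RUNNING  0          -       10.0.3.100, 172.17.0.1, 172.18.0.1   -
worker2  RUNNING  0          -       10.0.3.101, 172.17.0.1, 172.18.0.1   -'

down=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 != "RUNNING" {print $1}')
if [ -z "$down" ]; then
  echo "all containers running"
else
  echo "not running: $down"
fi
```
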
### Galera cluster debugging

For checking the current size of the Galera cluster, meaning the number of nodes currently connected to the cluster, there are some useful commands, which should be executed inside the docker containers of the cluster nodes (```docker exec -it {container_id} sh```), e.g. inside the first MariaDB server node:

```
mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size%';"
```

If the result of this command is 1, only one MariaDB server is running inside the cluster, namely the server from which the cluster is bootstrapping (to confirm this, check whether the line ```safe_to_bootstrap: 1``` exists inside the file ```/var/lib/mysql/grastate.dat```).

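The interpretation above can be sketched as a small script; the sample text stands in for the live output of the mysql command and for ```grastate.dat```, so the values shown here are illustrative:

```shell
# Interpret wsrep_cluster_size; on a live node replace the sample with:
#   mysql -u root -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
sample='Variable_name Value
wsrep_cluster_size 2'

size=$(printf '%s\n' "$sample" | awk '$1 == "wsrep_cluster_size" {print $2}')
if [ "$size" -gt 1 ]; then
  echo "cluster formed: $size nodes connected"
else
  echo "only the bootstrap node is up"
fi

# Confirm which node may bootstrap the cluster: on a real node read
# /var/lib/mysql/grastate.dat instead of this fabricated sample.
grastate='# GALERA saved state
version: 2.1
safe_to_bootstrap: 1'

if printf '%s\n' "$grastate" | grep -q '^safe_to_bootstrap: 1'; then
  echo "this node is safe to bootstrap the cluster"
fi
```
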
* **Manual Healthcheck (inside LXC containers)**: ```docker exec {container_id} healthcheck.sh```

Also, a new database, named e.g. "database" and defined inside the "stack.yaml" deployment file, should initially appear on all the MariaDB nodes of the Galera cluster:

```
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| database           |
+--------------------+
```
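
This check can be scripted per node; the sample below stands in for the real output, which on a live node would come from ```docker exec {container_id} mysql -u root -e "SHOW DATABASES;"``` (the extra entries are the usual MariaDB system schemas, included for illustration):

```shell
# Check that the expected database name appears in a node's database list.
expected="database"
sample='Database
information_schema
database
mysql
performance_schema'

if printf '%s\n' "$sample" | grep -qx "$expected"; then
  echo "database '$expected' present on this node"
else
  echo "database '$expected' MISSING on this node"
fi
```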