
Friday, September 6, 2019

Setting up a Docker 19.03.2 Swarm cluster inside LXD 3

A while back I tried to simulate Docker Swarm nodes with LXC/LXD and ran into a setup that simply would not take effect... so my earlier notes only covered basic Docker functionality.
Now I have come back to it: it turns out I had just missed a few reference tutorials.
The key to the solution is the linux.kernel_modules configuration key supported by LXD. This post fills in that record, so from now on Docker Swarm can also be practiced inside LXD.

[Update 2020/04/15] My notes on running a Minikube test environment on LXD are here; with that, the nested-container test environments are complete.

For completeness, this walkthrough uses the stock Ubuntu that most people are familiar with, rather than some special edition.
The first half records the complete procedure; the second half covers troubleshooting.

To show that the environment is genuinely clean, I am using the Ubuntu provided by a Google Cloud VM. At the time of writing, the latest Ubuntu was the 19.04 release.

Before starting, install the tools for checking whether ports are open; we will need them for testing later:
thatsme@swarm-in-lxd:~$ sudo apt-get install -y nmap telnet

Next, a quick LXD installation. As in my earlier notes, this test environment does not use any special file system; containers are simply placed in an ordinary directory.
thatsme@swarm-in-lxd:~$ sudo apt update
Get:1 http://us-central1.gce.archive.ubuntu.com/ubuntu disco InRelease [257 kB]
Get:2 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates InRelease [97.5 kB]   
Get:3 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-backports InRelease [88.8 kB] 
Get:4 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/main amd64 Packages [995 kB]  
Get:5 http://archive.canonical.com/ubuntu disco InRelease [10.9 kB]                        
Get:6 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/main Translation-en [509 kB]  
Get:7 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/restricted amd64 Packages [14.0 kB]
Get:8 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/restricted Translation-en [4960 B]
Get:9 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/universe amd64 Packages [9065 kB]
Get:10 http://security.ubuntu.com/ubuntu disco-security InRelease [97.5 kB]
Get:11 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/universe Translation-en [5251 kB]
Get:12 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/multiverse amd64 Packages [157 kB]
Get:13 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/multiverse Translation-en [112 kB]
Get:14 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/main amd64 Packages [260 kB]
Get:15 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/main Translation-en [99.6 kB]
Get:16 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/restricted amd64 Packages [3816 B]
Get:17 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/restricted Translation-en [928 B]
Get:18 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/universe amd64 Packages [301 kB]
Get:19 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/universe Translation-en [100 kB]
Get:20 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/multiverse amd64 Packages [1172 B]
Get:21 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-updates/multiverse Translation-en [632 B]
Get:22 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-backports/main amd64 Packages [1220 B]
Get:23 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-backports/main Translation-en [684 B]
Get:24 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-backports/universe amd64 Packages [3420 B]
Get:25 http://us-central1.gce.archive.ubuntu.com/ubuntu disco-backports/universe Translation-en [1532 B]
Get:26 http://archive.canonical.com/ubuntu disco/partner amd64 Packages [1616 B]
Get:27 http://archive.canonical.com/ubuntu disco/partner Translation-en [712 B]
Get:28 http://security.ubuntu.com/ubuntu disco-security/main amd64 Packages [197 kB]
Get:29 http://security.ubuntu.com/ubuntu disco-security/main Translation-en [71.5 kB]
Get:30 http://security.ubuntu.com/ubuntu disco-security/restricted amd64 Packages [3424 B]
Get:31 http://security.ubuntu.com/ubuntu disco-security/restricted Translation-en [888 B]
Get:32 http://security.ubuntu.com/ubuntu disco-security/universe amd64 Packages [248 kB]
Get:33 http://security.ubuntu.com/ubuntu disco-security/universe Translation-en [70.2 kB]
Get:34 http://security.ubuntu.com/ubuntu disco-security/multiverse amd64 Packages [1172 B]
Get:35 http://security.ubuntu.com/ubuntu disco-security/multiverse Translation-en [632 B]
Fetched 18.0 MB in 4s (4022 kB/s)                      
Reading package lists... Done
Building dependency tree       
Reading state information... Done
7 packages can be upgraded. Run 'apt list --upgradable' to see them.
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ sudo apt-get install -y lxd
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  lxd-installer
The following NEW packages will be installed:
  lxd
0 upgraded, 1 newly installed, 1 to remove and 7 not upgraded.
Need to get 9108 B of archives.
After this operation, 56.3 kB of additional disk space will be used.
Get:1 http://us-central1.gce.archive.ubuntu.com/ubuntu disco/main amd64 lxd all 1:0.7 [9108 B]
Fetched 9108 B in 0s (51.3 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 44548 files and directories currently installed.)
Removing lxd-installer (1) ...
Selecting previously unselected package lxd.
(Reading database ... 44539 files and directories currently installed.)
Preparing to unpack .../archives/lxd_1%3a0.7_all.deb ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
=> Installing the LXD snap
==> Checking connectivity with the snap store
Configuring lxd
---------------

The LXD project puts out monthly feature releases which while backward compatible at an API
and CLI level, will contain some behavior change and potentially require manual
intervention during an upgrade.

In addition to those, every 2 years a LTS release is made which comes with 5 years of
support through frequent bugfix-only releases.

The LXD team recommends you pick "3.0" for production environments and use "latest" if
you're interested in getting the latest LXD features.

  1. latest  2. 3.0
LXD snap track 2

==> Installing the LXD snap from the 3.0 track for ubuntu-19.04
Warning: /snap/bin was not found in your $PATH. If you've not restarted your session since
         you installed snapd, try doing that. Please see https://forum.snapcraft.io/t/9469
         for more details.

lxd (3.0/stable) 3.0.4 from Canonical✓ installed
=> Snap installation complete
==> Cleaning up leftovers
Failed to stop lxd.socket: Unit lxd.socket not loaded.
Failed to stop lxd.service: Unit lxd.service not loaded.
Failed to stop lxd-containers.service: Unit lxd-containers.service not loaded.
Failed to disable unit: Unit file lxd.socket does not exist.

Unpacking lxd (1:0.7) ...
Setting up lxd (1:0.7) ...
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
thatsme@swarm-in-lxd:~$ 

Now we can add the test environments; it is quick.
Here we prepare one Docker Swarm manager and two Docker Swarm workers. Without further ado, create the CentOS 7 environments:
thatsme@swarm-in-lxd:~$ lxc launch images:centos/7/amd64 swarm1
To start your first container, try: lxc launch ubuntu:18.04

Creating swarm1
Starting swarm1                             
thatsme@swarm-in-lxd:~$ lxc launch images:centos/7/amd64 swarm2
Creating swarm2
Starting swarm2
thatsme@swarm-in-lxd:~$ lxc launch images:centos/7/amd64 swarm3
Creating swarm3
Starting swarm3
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc list
+--------+---------+---------------------+-----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+--------+---------+---------------------+-----------------------------------------------+------------+-----------+
| swarm1 | RUNNING | 10.55.87.247 (eth0) | fd42:aa42:2723:a7ba:216:3eff:fe95:2bd (eth0)  | PERSISTENT | 0         |
+--------+---------+---------------------+-----------------------------------------------+------------+-----------+
| swarm2 | RUNNING | 10.55.87.205 (eth0) | fd42:aa42:2723:a7ba:216:3eff:feaa:17e8 (eth0) | PERSISTENT | 0         |
+--------+---------+---------------------+-----------------------------------------------+------------+-----------+
| swarm3 | RUNNING | 10.55.87.197 (eth0) | fd42:aa42:2723:a7ba:216:3eff:fead:376b (eth0) | PERSISTENT | 0         |
+--------+---------+---------------------+-----------------------------------------------+------------+-----------+
thatsme@swarm-in-lxd:~$ 

Next comes the important container configuration. This step is the key!
The commands here mainly do the following:

  • Make each one a privileged container (the default is an unprivileged container)
  • Enable the nested-container feature
  • Load a set of kernel modules into the container environment

As for the kernel modules to load, besides the ones mentioned in the main reference, a few more issues surfaced during testing, so a couple of extra modules were added (xt_conntrack and ip_vs). The next part of the post records the problems that were hit and how they were resolved.
thatsme@swarm-in-lxd:~$ lxc config set swarm1 security.privileged true
thatsme@swarm-in-lxd:~$ lxc config set swarm1 security.nesting true
thatsme@swarm-in-lxd:~$ lxc config set swarm1 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc config set swarm2 security.privileged true
thatsme@swarm-in-lxd:~$ lxc config set swarm2 security.nesting true
thatsme@swarm-in-lxd:~$ lxc config set swarm2 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc config set swarm3 security.privileged true
thatsme@swarm-in-lxd:~$ lxc config set swarm3 security.nesting true
thatsme@swarm-in-lxd:~$ lxc config set swarm3 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ 
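As a quick sanity check (an optional step, not part of the original session), the keys that were just set could be inspected with lxc config show; this is only a sketch:
thatsme@swarm-in-lxd:~$ lxc config show swarm1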

Another key point: create a .dockerenv file in the root directory of every container, to tell Docker that it is living inside a container environment (note: whether LXC/LXD or Docker, they are all containers).
Also, rather than dropping into each container, the commands are run directly with lxc exec; the command to execute goes after the -- separator:
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- touch /.dockerenv
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- touch /.dockerenv
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- touch /.dockerenv
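To double-check that the marker file really exists in each container, one could simply list it; this is a hypothetical verification step that was not part of the original session:
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- ls -l /.dockerenv
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- ls -l /.dockerenv
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- ls -l /.dockerenv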

Then install Docker, again handled with lxc exec; the yum output is too long, so it is omitted:
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- curl https://download.docker.com/linux/centos/docker-ce.repo --output /etc/yum.repos.d/docker-ce.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2424  100  2424    0     0   5539      0 --:--:-- --:--:-- --:--:--  5559
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- curl https://download.docker.com/linux/centos/docker-ce.repo --output /etc/yum.repos.d/docker-ce.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2424  100  2424    0     0   5753      0 --:--:-- --:--:-- --:--:--  5771
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- curl https://download.docker.com/linux/centos/docker-ce.repo --output /etc/yum.repos.d/docker-ce.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2424  100  2424    0     0  14447      0 --:--:-- --:--:-- --:--:-- 14690
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- yum install -y docker-ce
(yum output omitted...)
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- service docker start
Redirecting to /bin/systemctl start docker.service
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- yum install -y docker-ce
(yum output omitted...)
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm2 -- service docker start
Redirecting to /bin/systemctl start docker.service
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- yum install -y docker-ce
(yum output omitted...)
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc exec swarm3 -- service docker start
Redirecting to /bin/systemctl start docker.service
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc list
+--------+---------+----------------------+-----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+-----------------------------------------------+------------+-----------+
| swarm1 | RUNNING | 172.17.0.1 (docker0) | fd42:aa42:2723:a7ba:216:3eff:fe95:2bd (eth0)  | PERSISTENT | 0         |
|        |         | 10.55.87.247 (eth0)  |                                               |            |           |
+--------+---------+----------------------+-----------------------------------------------+------------+-----------+
| swarm2 | RUNNING | 172.17.0.1 (docker0) | fd42:aa42:2723:a7ba:216:3eff:feaa:17e8 (eth0) | PERSISTENT | 0         |
|        |         | 10.55.87.205 (eth0)  |                                               |            |           |
+--------+---------+----------------------+-----------------------------------------------+------------+-----------+
| swarm3 | RUNNING | 172.17.0.1 (docker0) | fd42:aa42:2723:a7ba:216:3eff:fead:376b (eth0) | PERSISTENT | 0         |
|        |         | 10.55.87.197 (eth0)  |                                               |            |           |
+--------+---------+----------------------+-----------------------------------------------+------------+-----------+
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:          1.6Gi       454Mi       109Mi        25Mi       1.1Gi       1.0Gi
Swap:            0B          0B          0B
thatsme@swarm-in-lxd:~$ 

Incidentally, running three nodes this way uses only about 450 MB of memory.

Next come the per-node steps.
Here swarm1 is initialized as the Swarm manager node. Because the network interface carries more than one IPv6 address, just specify the IPv4 address directly:
thatsme@swarm-in-lxd:~$ lxc shell swarm1
[root@swarm1 ~]# docker swarm init --advertise-addr eth0
Error response from daemon: interface eth0 has more than one IPv6 address (fd42:aa42:2723:a7ba:216:3eff:fe95:2bd and fe80::216:3eff:fe95:2bd)
[root@swarm1 ~]# 
[root@swarm1 ~]# ip -4 a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: docker0:  mtu 1500 qdisc noqueue state DOWN group default 
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: eth0@if5:  mtu 1500 qdisc noqueue state UP group default qlen 1000 link-netnsid 0
    inet 10.55.87.247/24 brd 10.55.87.255 scope global dynamic eth0
       valid_lft 3497sec preferred_lft 3497sec
[root@swarm1 ~]# 
[root@swarm1 ~]# docker swarm init --advertise-addr 10.55.87.247
Swarm initialized: current node (u76luy3vcb93otpatn9aquspv) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2uj9nenza513453xnjb1id18plygjqde22muprwnosttf6fx8k-0vwj3v4wqojltj3jmjo33artd 10.55.87.247:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@swarm1 ~]# 
[root@swarm1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
u76luy3vcb93otpatn9aquspv *   swarm1              Ready               Active              Leader              19.03.2
[root@swarm1 ~]# 

Then go into swarm2 and swarm3 and join swarm1's cluster, copying the join command that was printed when swarm1 finished initializing:
thatsme@swarm-in-lxd:~$ lxc shell swarm2
[root@swarm2 ~]# docker swarm join --token SWMTKN-1-2uj9nenza513453xnjb1id18plygjqde22muprwnosttf6fx8k-0vwj3v4wqojltj3jmjo33artd 10.55.87.247:2377
This node joined a swarm as a worker.
[root@swarm2 ~]# 
thatsme@swarm-in-lxd:~$ lxc shell swarm3
[root@swarm3 ~]# docker swarm join --token SWMTKN-1-2uj9nenza513453xnjb1id18plygjqde22muprwnosttf6fx8k-0vwj3v4wqojltj3jmjo33artd 10.55.87.247:2377
This node joined a swarm as a worker.
[root@swarm3 ~]# 
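Incidentally, if that join command was not copied down, it can be reprinted from the manager at any time; this extra step was not part of the original session:
[root@swarm1 ~]# docker swarm join-token worker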

Checking the status at this point shows the change in the LXD listing as well as the Swarm node status:
thatsme@swarm-in-lxd:~$ lxc list
+--------+---------+------------------------------+-----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |             IPV4             |                     IPV6                      |    TYPE    | SNAPSHOTS |
+--------+---------+------------------------------+-----------------------------------------------+------------+-----------+
| swarm1 | RUNNING | 172.18.0.1 (docker_gwbridge) | fd42:aa42:2723:a7ba:216:3eff:fe95:2bd (eth0)  | PERSISTENT | 0         |
|        |         | 172.17.0.1 (docker0)         |                                               |            |           |
|        |         | 10.55.87.247 (eth0)          |                                               |            |           |
+--------+---------+------------------------------+-----------------------------------------------+------------+-----------+
| swarm2 | RUNNING | 172.18.0.1 (docker_gwbridge) | fd42:aa42:2723:a7ba:216:3eff:feaa:17e8 (eth0) | PERSISTENT | 0         |
|        |         | 172.17.0.1 (docker0)         |                                               |            |           |
|        |         | 10.55.87.205 (eth0)          |                                               |            |           |
+--------+---------+------------------------------+-----------------------------------------------+------------+-----------+
| swarm3 | RUNNING | 172.18.0.1 (docker_gwbridge) | fd42:aa42:2723:a7ba:216:3eff:fead:376b (eth0) | PERSISTENT | 0         |
|        |         | 172.17.0.1 (docker0)         |                                               |            |           |
|        |         | 10.55.87.197 (eth0)          |                                               |            |           |
+--------+---------+------------------------------+-----------------------------------------------+------------+-----------+
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ lxc shell swarm1
[root@swarm1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
u76luy3vcb93otpatn9aquspv *   swarm1              Ready               Active              Leader              19.03.2
44tgut6jyhhtyecwxqeyqszr4     swarm2              Ready               Active                                  19.03.2
wre3nl1dqbn9vptlnn7syliu8     swarm3              Ready               Active                                  19.03.2
[root@swarm1 ~]# 

Now a service can be created. Borrowing a bit from the Docker Swarm tutorial, Nginx is chosen for the test, because poking port 80 is easy:
[root@swarm1 ~]# docker service create --replicas 3 --name nginxes -p 80:80 nginx:alpine
fj0nf2q8y62jourrob83jixfy
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 
[root@swarm1 ~]# 
[root@swarm1 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
fj0nf2q8y62j        nginxes             replicated          3/3                 nginx:alpine        *:80->80/tcp
[root@swarm1 ~]# 
[root@swarm1 ~]# docker service inspect --pretty nginxes

ID:             fj0nf2q8y62jourrob83jixfy
Name:           nginxes
Service Mode:   Replicated
 Replicas:      3
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:alpine@sha256:99be6ae8d32943b676031b3513782ad55c8540c1d040b1f7b8c335c67a241b06
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress 

[root@swarm1 ~]# 
[root@swarm1 ~]# docker service ps nginxes
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
dg5846odml40        nginxes.1           nginx:alpine        swarm2              Running             Running 4 minutes ago                        
y1ipb44s45n1         \_ nginxes.1       nginx:alpine        swarm2              Shutdown            Complete 4 minutes ago                       
lbu3xpwllg10        nginxes.2           nginx:alpine        swarm3              Running             Running 4 minutes ago                        
qdr3cyon4wsp         \_ nginxes.2       nginx:alpine        swarm3              Shutdown            Complete 4 minutes ago                       
y99kcpv72gj6        nginxes.3           nginx:alpine        swarm1              Running             Running 4 minutes ago                        
vejbfppwv7oo         \_ nginxes.3       nginx:alpine        swarm1              Shutdown            Complete 4 minutes ago                       
[root@swarm1 ~]# 
[root@swarm1 ~]# docker container ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS               NAMES
9dff4ec83cea        nginx:alpine        "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes               80/tcp              nginxes.3.y99kcpv72gj6off3m510nkb5k
cdf643919c12        nginx:alpine        "nginx -g 'daemon of…"   20 minutes ago      Exited (0) 5 minutes ago                       nginxes.3.vejbfppwv7oowlv18ok96l6qt
[root@swarm1 ~]# 
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.247
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:28 UTC
Nmap scan report for 10.55.87.247
Host is up (0.00013s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
80/tcp   open  http
2377/tcp open  swarm
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 1.63 seconds
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.205
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:29 UTC
Nmap scan report for 10.55.87.205
Host is up (0.00012s latency).
Not shown: 65533 closed ports
PORT     STATE SERVICE
80/tcp   open  http
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 1.63 seconds
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.197
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:29 UTC
Nmap scan report for 10.55.87.197
Host is up (0.00012s latency).
Not shown: 65533 closed ports
PORT     STATE SERVICE
80/tcp   open  http
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 1.69 seconds
thatsme@swarm-in-lxd:~$ 

Then use curl to confirm that the page can be fetched from each of the three nodes:
thatsme@swarm-in-lxd:~$ curl 10.55.87.247:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ curl 10.55.87.205:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ curl 10.55.87.197:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
thatsme@swarm-in-lxd:~$ 

Next, scale the service down by one. To keep things lazy and simple, run it directly with lxc exec, and this time just poke the ports with telnet instead:
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- docker service scale nginxes=2
nginxes scaled to 2
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ telnet 10.55.87.247 80
Trying 10.55.87.247...
Connected to 10.55.87.247.
Escape character is '^]'.
^CConnection closed by foreign host.
thatsme@swarm-in-lxd:~$ telnet 10.55.87.205 80
Trying 10.55.87.205...
Connected to 10.55.87.205.
Escape character is '^]'.
^CConnection closed by foreign host.
thatsme@swarm-in-lxd:~$ telnet 10.55.87.197 80
Trying 10.55.87.197...
Connected to 10.55.87.197.
Escape character is '^]'.
^CConnection closed by foreign host.
thatsme@swarm-in-lxd:~$ 

Notice that even though the service has been scaled down to 2 replicas, all three nodes still accept connections on port 80; as I understand it, this is Docker Swarm's routing mesh at work.
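To see on which nodes the two remaining replicas actually ended up, the task list can be inspected from the manager again; the command below, with a filter that shows only running tasks, is an optional check that was not part of the original session:
thatsme@swarm-in-lxd:~$ lxc exec swarm1 -- docker service ps --filter desired-state=running nginxes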

That wraps up the normal test. What follows is the troubleshooting record.





As mentioned above, failures occur if enough kernel modules are not loaded. This part records how those anomalies were tracked down.
In the steps above, the complete set of required modules was loaded, which is what makes Docker Swarm work properly.
The reference material loaded only the following modules, missing the two modules xt_conntrack and ip_vs:
thatsme@swarm-in-lxd:~$ lxc config set swarm1 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter"
thatsme@swarm-in-lxd:~$ lxc config set swarm2 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter"
thatsme@swarm-in-lxd:~$ lxc config set swarm3 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter"

With Docker started but Docker Swarm not yet set up, a node loaded only with the reference material's modules is perfectly usable as a single Docker engine, even though the container's OS messages still complain a little. The published port is also visible from the outside.
[root@swarm1 ~]# grep -i error /var/log/messages 
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.568403014Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.582750371Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.584814807Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585483853Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585919646Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585937944Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.727918108Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.729247965Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
[root@swarm1 ~]# 
[root@swarm1 ~]# docker run -dit --rm --name testweb -p 8080:80 nginx:alpine
Unable to find image 'nginx:alpine' locally
alpine: Pulling from library/nginx
9d48c3bd43c5: Pull complete 
b6dac14ba0a9: Pull complete 
Digest: sha256:99be6ae8d32943b676031b3513782ad55c8540c1d040b1f7b8c335c67a241b06
Status: Downloaded newer image for nginx:alpine
ab029894c44f991ecbed3598c11035687c25e344bf4fa33cacbfcef4bc33d49a
[root@swarm1 ~]# 
[root@swarm1 ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
ab029894c44f        nginx:alpine        "nginx -g 'daemon of…"   56 seconds ago      Up 53 seconds       0.0.0.0:8080->80/tcp   testweb
[root@swarm1 ~]#
[root@swarm1 ~]# curl 10.55.87.247:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@swarm1 ~]# exit
logout
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.247
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:10 UTC
Nmap scan report for 10.55.87.247
Host is up (0.00011s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
2377/tcp open  swarm
7946/tcp open  unknown
8080/tcp open  http-proxy

Nmap done: 1 IP address (1 host up) scanned in 1.56 seconds
thatsme@swarm-in-lxd:~$ 

For convenience, I stopped the test container above.
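The stop itself does not appear in the log; it would look roughly like the line below, and since the container was started with --rm, stopping it also removes it:
[root@swarm1 ~]# docker container stop testweb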

Once Docker Swarm has been set up, some messages start to appear in the OS log:
[root@swarm1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
u76luy3vcb93otpatn9aquspv *   swarm1              Ready               Active              Leader              19.03.2
44tgut6jyhhtyecwxqeyqszr4     swarm2              Ready               Active                                  19.03.2
wre3nl1dqbn9vptlnn7syliu8     swarm3              Ready               Active                                  19.03.2
[root@swarm1 ~]# 
[root@swarm1 ~]# grep -i error /var/log/messages  
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.568403014Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.582750371Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.584814807Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585483853Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585919646Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585937944Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.727918108Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.729247965Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
Sep  6 06:31:48 swarm1 dockerd: time="2019-09-06T06:31:48.105908163Z" level=error msg="Error initializing swarm: interface eth0 has more than one IPv6 address (fd42:aa42:2723:a7ba:216:3eff:fe95:2bd and fe80::216:3eff:fe95:2bd)"
Sep  6 06:49:26 swarm1 dockerd: time="2019-09-06T06:49:26.595619170Z" level=error msg="Error initializing swarm: interface eth0 has more than one IPv6 address (fd42:aa42:2723:a7ba:216:3eff:fe95:2bd and fe80::216:3eff:fe95:2bd)"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850644223Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh1" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh1: no such file or directory"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850673399Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh2" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh2: no such file or directory"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850706507Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh3" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh3: no such file or directory"
Sep  6 06:50:44 swarm1 dockerd: time="2019-09-06T06:50:44.954406416Z" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
Sep  6 06:50:45 swarm1 dockerd: time="2019-09-06T06:50:45Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 06:50:45 swarm1 dockerd: time="2019-09-06T06:50:45Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
[root@swarm1 ~]# 

Now, just as above, an Nginx service is started on the Swarm, but... the service is nowhere to be reached...
[root@swarm1 ~]# docker service create --replicas 3 --name nginxes -p 80:80 nginx:alpine
fj0nf2q8y62jourrob83jixfy
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 
[root@swarm1 ~]# 
[root@swarm1 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
fj0nf2q8y62j        nginxes             replicated          3/3                 nginx:alpine        *:80->80/tcp
[root@swarm1 ~]# 
[root@swarm1 ~]# curl 10.55.87.247:80  
curl: (7) Failed connect to 10.55.87.247:80; Connection refused
[root@swarm1 ~]# exit
logout
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.247
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:20 UTC
Nmap scan report for 10.55.87.247
Host is up (0.00011s latency).
Not shown: 65533 closed ports
PORT     STATE SERVICE
2377/tcp open  swarm
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 1.68 seconds
thatsme@swarm-in-lxd:~$ 

A look at the container's OS message log reveals the problem:
[root@swarm1 ~]# grep -i error /var/log/messages 
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.568403014Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.582750371Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.584814807Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585483853Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "": exit status 1"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585919646Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Sep  6 01:45:52 swarm1 containerd: time="2019-09-06T01:45:52.585937944Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.727918108Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 01:45:52 swarm1 dockerd: time="2019-09-06T01:45:52.729247965Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
Sep  6 06:31:48 swarm1 dockerd: time="2019-09-06T06:31:48.105908163Z" level=error msg="Error initializing swarm: interface eth0 has more than one IPv6 address (fd42:aa42:2723:a7ba:216:3eff:fe95:2bd and fe80::216:3eff:fe95:2bd)"
Sep  6 06:49:26 swarm1 dockerd: time="2019-09-06T06:49:26.595619170Z" level=error msg="Error initializing swarm: interface eth0 has more than one IPv6 address (fd42:aa42:2723:a7ba:216:3eff:fe95:2bd and fe80::216:3eff:fe95:2bd)"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850644223Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh1" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh1: no such file or directory"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850673399Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh2" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh2: no such file or directory"
Sep  6 06:50:43 swarm1 dockerd: time="2019-09-06T06:50:43.850706507Z" level=error msg="error reading the kernel parameter net.ipv4.neigh.default.gc_thresh3" error="open /proc/sys/net/ipv4/neigh/default/gc_thresh3: no such file or directory"
Sep  6 06:50:44 swarm1 dockerd: time="2019-09-06T06:50:44.954406416Z" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
Sep  6 06:50:45 swarm1 dockerd: time="2019-09-06T06:50:45Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 06:50:45 swarm1 dockerd: time="2019-09-06T06:50:45Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
Sep  6 07:03:36 swarm1 dockerd: time="2019-09-06T07:03:36.511633260Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Sep  6 07:03:36 swarm1 dockerd: time="2019-09-06T07:03:36.512532442Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
Sep  6 07:10:05 swarm1 dockerd: time="2019-09-06T07:10:05.767511732Z" level=warning msg="memberlist: Failed to send error: write tcp 10.55.87.247:7946->10.55.87.1:44672: write: broken pipe from=10.55.87.1:44672"
Sep  6 07:12:23 swarm1 dockerd: time="2019-09-06T07:12:23Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 07:12:23 swarm1 dockerd: time="2019-09-06T07:12:23Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
Sep  6 07:12:24 swarm1 dockerd: time="2019-09-06T07:12:24Z" level=warning msg="Running modprobe nf_nat failed with message: ``, error: exit status 1"
Sep  6 07:12:24 swarm1 dockerd: time="2019-09-06T07:12:24Z" level=warning msg="Running modprobe xt_conntrack failed with message: ``, error: exit status 1"
Sep  6 07:12:24 swarm1 dockerd: time="2019-09-06T07:12:24.756573482Z" level=warning msg="Running modprobe ip_vs failed with message: ``, error: exit status 1"
Sep  6 07:12:24 swarm1 dockerd: time="2019-09-06T07:12:24.760614049Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
[root@swarm1 ~]# 
[root@swarm1 ~]# grep -i loadbalancing /var/log/messages 
Sep  6 07:12:24 swarm1 dockerd: time="2019-09-06T07:12:24.760614049Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
[root@swarm1 ~]# 

The log above says that because some modules are missing, Swarm's load balancing (the routing mesh, if I have the term right; corrections welcome) cannot work.
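One way to cross-check this (an extra step, not from the original session) is to look for those modules on the LXD host, since LXD containers share the host kernel:
thatsme@swarm-in-lxd:~$ lsmod | grep -E 'ip_vs|xt_conntrack'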

So, exactly as in the correct steps above, set the proper list of modules to load and then restart these containers:
thatsme@swarm-in-lxd:~$ lxc config set swarm1 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ lxc config set swarm2 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ lxc config set swarm3 linux.kernel_modules "aufs,zfs,overlay,nf_nat,br_netfilter,xt_conntrack,ip_vs"
thatsme@swarm-in-lxd:~$ lxc restart swarm1
Remapping container filesystem
thatsme@swarm-in-lxd:~$ lxc restart swarm2
Remapping container filesystem
thatsme@swarm-in-lxd:~$ lxc restart swarm3
Remapping container filesystem
thatsme@swarm-in-lxd:~$ 
thatsme@swarm-in-lxd:~$ nmap -p- 10.55.87.247
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-06 07:28 UTC
Nmap scan report for 10.55.87.247
Host is up (0.00013s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
80/tcp   open  http
2377/tcp open  swarm
7946/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 1.63 seconds
thatsme@swarm-in-lxd:~$ 

Troubleshooting complete.
