Here I take a more DIY approach, mainly following the Medium post "Setup Minikube inside LXC Container to create as many one-node Kubernetes clusters as you want." Unlike that article, I do not package an LXD image but configure the container directly. Two similar tutorials, "Run Minikube in LXD · GitHub" and "Kubernetes Lxd - Awesome Open Source", can be cross-referenced.
A record of the versions that tested successfully
As before, Ubuntu 19 Minimal is used to host the LXD 3 environment; see my earlier notes for that setup.
The VM spec is 2 cores and 4 GB of RAM, which is the minimum Minikube asks for. The environment actually used in these notes, though, is an old 2-core, 6 GB machine.
Minikube condenses a Kubernetes master and worker onto the same node; once set up, only a single node is running.
Because the container needs to load Linux-kernel-related parameter files, an Ubuntu LXC image matching the host OS is used instead of a CentOS image, which makes the device mapping (mapping the relevant host directories into the container) straightforward.
Then, following this config https://github.com/cdelaitre/lxc-minikube/blob/master/lxc-profile-minikube, edit the extra LXC container settings; without the device entries the setup will fail.
Note that the config contains the path of the Linux kernel boot image (config) file, which must be adjusted to match the currently running kernel version.
omniware@omniware-VGN-Z46TD-B:~$ lxc launch images:ubuntu/18.04 minikube
Creating minikube
Starting minikube
omniware@omniware-VGN-Z46TD-B:~$ lxc config set minikube security.privileged true
omniware@omniware-VGN-Z46TD-B:~$ lxc config set minikube security.nesting true
omniware@omniware-VGN-Z46TD-B:~$ lxc config set minikube linux.kernel_modules "aufs,zfs,ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,overlay,nf_nat,br_netfilter"
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ uname -a
Linux omniware-VGN-Z46TD-B 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
omniware@omniware-VGN-Z46TD-B:~$ ls /boot/
config-4.15.0-88-generic      initrd.img-4.15.0-88-generic  vmlinuz-4.15.0-88-generic
config-4.15.0-91-generic      initrd.img-4.15.0-91-generic  vmlinuz-4.15.0-91-generic
efi                           System.map-4.15.0-88-generic
grub                          System.map-4.15.0-91-generic
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ lxc config set minikube raw.lxc "lxc.apparmor.profile=unconfined"
omniware@omniware-VGN-Z46TD-B:~$ lxc config show minikube
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200329_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200329_07:42"
  linux.kernel_modules: aufs,zfs,ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,overlay,nf_nat,br_netfilter
  raw.lxc: lxc.apparmor.profile=unconfined
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 4686d5cd130b5c0ecf2f29bd37afdb0cf16fa6f1f45edfa4ab04bbcf3a8fc68d
  volatile.eth0.hwaddr: 00:16:3e:63:39:d0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ lxc config edit minikube
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ lxc config show minikube
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200329_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200329_07:42"
  linux.kernel_modules: aufs,zfs,ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,overlay,nf_nat,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 4686d5cd130b5c0ecf2f29bd37afdb0cf16fa6f1f45edfa4ab04bbcf3a8fc68d
  volatile.eth0.hwaddr: 00:16:3e:63:39:d0
  volatile.idmap.base: "0"
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  aadisable:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
  bootenable:
    path: /boot/config-4.15.0-91-generic
    source: /boot/config-4.15.0-91-generic
    type: disk
  modenable:
    path: /lib/modules
    source: /lib/modules
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: Minikube
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ lxc restart minikube
omniware@omniware-VGN-Z46TD-B:~$
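If you prefer not to edit the devices section interactively, the same entries can be added from the shell with lxc config device add. The following is only a sketch of an equivalent sequence (these commands were not part of the original run); the bootenable device uses $(uname -r) so the path keeps tracking the kernel actually booted on the host:

# Hypothetical non-interactive equivalent of the devices section shown above
lxc config device add minikube aadisable  disk source=/dev/null path=/sys/module/apparmor/parameters/enabled
lxc config device add minikube aadisable2 disk source=/sys/module/nf_conntrack/parameters/hashsize path=/sys/module/nf_conntrack/parameters/hashsize
lxc config device add minikube aadisable3 disk source=/dev/kmsg path=/dev/kmsg
lxc config device add minikube bootenable disk source=/boot/config-$(uname -r) path=/boot/config-$(uname -r)
lxc config device add minikube modenable  disk source=/lib/modules path=/lib/modules
# The multi-line raw.lxc block (mount.auto / cgroup.devices.allow / cap.drop) still has to be set
# via "lxc config edit minikube", then restart the container
lxc restart minikube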
Enter the container and start installing Minikube.
First install the plain Docker engine. Note that in the earlier Docker Swarm setup, touch /.dockerfile was used to mark the environment as a container; that step is not needed here.
Incidentally, the conntrack package is only required from Minikube 1.8 onward.
omniware@omniware-VGN-Z46TD-B:~$ lxc shell minikube
mesg: ttyname failed: No such device
root@minikube:~# apt-get install -y apt-transport-https ca-certificates conntrack curl gnupg-agent openssh-server software-properties-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
... (output omitted) ...
root@minikube:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
OK
root@minikube:~#
root@minikube:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Get:1 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:4 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [9,594 B]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Fetched 251 kB in 1s (330 kB/s)
Reading package lists... Done
root@minikube:~# apt-get install -y docker-ce docker-ce-cli containerd.io
... (output omitted) ...
root@minikube:~#
The LXC container state at this point:
omniware@omniware-VGN-Z46TD-B:~$ lxc list minikube
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME   |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| minikube | RUNNING | 172.17.0.1 (docker0)  | fd42:b925:969f:cf73:216:3eff:fe63:39d0 (eth0) | PERSISTENT | 0         |
|          |         | 10.236.247.236 (eth0) |                                               |            |           |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
omniware@omniware-VGN-Z46TD-B:~$
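Seeing docker0 come up is a good sign; before moving on it may be worth a quick sanity check that nested Docker really works inside the container. This check was not part of the original session, just a minimal sketch:

# Run inside the container (lxc shell minikube)
docker run --rm hello-world          # pulls the tiny hello-world image and prints its greeting
docker info | grep -i 'storage driver'   # confirm the daemon started and picked a storage driver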
Next, get Minikube and kubectl: instead of installing the Kubernetes-related packages from deb packages, the Minikube binary is used directly. After downloading, place the binaries in the system default /usr/local/bin/ path.
root@minikube:~# curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 46.3M  100 46.3M    0     0  70.2M      0 --:--:-- --:--:-- --:--:-- 70.2M
root@minikube:~#
root@minikube:~# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.5M  100 44.5M    0     0  93.5M      0 --:--:-- --:--:-- --:--:-- 93.3M
root@minikube:~# chmod +x ./minikube ./kubectl
root@minikube:~# mv ./minikube ./kubectl /usr/local/bin/
root@minikube:~#
root@minikube:~# minikube version
minikube version: v1.9.0
commit: 48fefd43444d2f8852f527c78f0141b377b1e42a
root@minikube:~#
root@minikube:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
root@minikube:~#
Then initialize Minikube:
omniware@omniware-VGN-Z46TD-B:~$ lxc shell minikube
mesg: ttyname failed: No such device
root@minikube:~# minikube start --apiserver-name=lxd-minikube --vm-driver=none
😄  minikube v1.9.0 on Ubuntu 18.04
✨  Using the none driver based on user configuration
🤹  Running on localhost (CPUs=2, Memory=5891MB, Disk=293449MB) ...
ℹ️  OS release is Ubuntu 18.04.4 LTS
❗  Node may be unable to resolve external DNS records
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 37.96 MiB / 37.96 MiB [---------------] 100.00% 4.08 MiB p/s 10s
    > kubectl: 41.98 MiB / 41.98 MiB [---------------] 100.00% 3.56 MiB p/s 12s
    > kubelet: 108.01 MiB / 108.01 MiB [-------------] 100.00% 6.37 MiB p/s 18s
🌟  Enabling addons: default-storageclass, storage-provisioner
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get https://lxd-minikube:8443/apis/storage.k8s.io/v1/storageclasses: dial tcp: lookup lxd-minikube: Temporary failure in name resolution]
🤹  Configuring local host environment ...
❗  The 'none' driver provides limited isolation and may reduce system security and reliability.
❗  For more information, see:
    👉  https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❗  kubectl and minikube configuration will be stored in /root
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube
💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
E0329 16:37:08.542591    2573 kubeadm.go:346] Overriding stale ClientConfig host https://lxd-minikube:8443 with https://10.236.247.236:8443
🏄  Done! kubectl is now configured to use "minikube"
root@minikube:~#
Although the initialization completed, DNS is not working properly at this point (the API server name lxd-minikube cannot be resolved), so minikube has to be restarted.
Note that there are two management layers here: minikube and kubectl; only kubectl maps to the actual Kubernetes operations.
root@minikube:~# kubectl get nodes
Unable to connect to the server: dial tcp: lookup lxd-minikube on 127.0.0.53:53: server misbehaving
root@minikube:~#
root@minikube:~# minikube stop
✋  Stopping "minikube" in none ...
🛑  Node "" stopped.
root@minikube:~# minikube start
😄  minikube v1.9.0 on Ubuntu 18.04
✨  Using the none driver based on existing profile
🔄  Retarting existing none bare metal machine for "minikube" ...
ℹ️  OS release is Ubuntu 18.04.4 LTS
❗  Node may be unable to resolve external DNS records
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🌟  Enabling addons: default-storageclass, storage-provisioner
🤹  Configuring local host environment ...
❗  The 'none' driver provides limited isolation and may reduce system security and reliability.
❗  For more information, see:
    👉  https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❗  kubectl and minikube configuration will be stored in /root
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube
💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🏄  Done! kubectl is now configured to use "minikube"
root@minikube:~#
root@minikube:~# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m41s   v1.18.0
root@minikube:~#
root@minikube:~# kubectl cluster-info
Kubernetes master is running at https://10.236.247.236:8443
KubeDNS is running at https://10.236.247.236:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@minikube:~#
root@minikube:~# kubectl get namespace
NAME              STATUS   AGE
default           Active   15h
kube-node-lease   Active   15h
kube-public       Active   15h
kube-system       Active   15h
root@minikube:~#
root@minikube:~# kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
k8snginx   1/1     1            1           98s
root@minikube:~#
root@minikube:~# kubectl get pods
No resources found in default namespace.
root@minikube:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2nwmd           1/1     Running   1          15h
kube-system   coredns-66bff467f8-54rxr           1/1     Running   1          15h
kube-system   etcd-minikube                      1/1     Running   1          15h
kube-system   kube-apiserver-minikube            1/1     Running   1          15h
kube-system   kube-controller-manager-minikube   1/1     Running   1          15h
kube-system   kube-proxy-t4vb6                   1/1     Running   1          15h
kube-system   kube-scheduler-minikube            1/1     Running   1          15h
kube-system   storage-provisioner                1/1     Running   1          15h
root@minikube:~#
root@minikube:~# minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
root@minikube:~#
root@minikube:~# minikube service list
|-------------|------------|--------------|-----|
|  NAMESPACE  |    NAME    | TARGET PORT  | URL |
|-------------|------------|--------------|-----|
| default     | kubernetes | No node port |     |
| kube-system | kube-dns   | No node port |     |
|-------------|------------|--------------|-----|
root@minikube:~#
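If you also want to confirm that the cluster DNS (CoreDNS / kube-dns shown above) is healthy after the restart, one common way is to resolve a service name from a throwaway pod. This check was not part of the original run; a minimal sketch:

# Spin up a one-off busybox pod and resolve the kubernetes service name via cluster DNS
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# Expected: the name resolves to the cluster IP of the kubernetes service (10.96.0.1 here)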
Try launching a pod; at this point you can only interact with it through the pod's own URL (its pod IP and port).
root@minikube:~# kubectl run k8snginx --image=nginx:alpine --port=80
pod/k8snginx created
root@minikube:~#
root@minikube:~# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
k8snginx   1/1     Running   0          12s
root@minikube:~# kubectl get pods k8snginx -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
k8snginx   1/1     Running   0          12m   172.17.0.4   minikube   <none>           <none>
root@minikube:~#
root@minikube:~# curl 172.17.0.4:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@minikube:~#
root@minikube:~# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h
root@minikube:~#
root@minikube:~# curl 127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
root@minikube:~#
Expose the pod as a service; this only publishes the service inside the Kubernetes cluster, and it is still not reachable from outside (i.e., via the real host IP).
root@minikube:~# kubectl expose pod k8snginx --port=80
service/k8snginx exposed
root@minikube:~#
root@minikube:~# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
k8snginx     ClusterIP   10.103.195.43   <none>        80/TCP    54s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   15h
root@minikube:~#
root@minikube:~# kubectl get service -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
k8snginx     ClusterIP   10.109.25.118   <none>        80/TCP    47s   run=k8snginx
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   16h   <none>
root@minikube:~#
root@minikube:~# minikube service k8snginx --url
😿  service default/k8snginx has no node port
root@minikube:~#
root@minikube:~# ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link-netnsid 0
    inet 10.236.247.236/24 brd 10.236.247.255 scope global dynamic eth0
       valid_lft 2190sec preferred_lft 2190sec
root@minikube:~#
root@minikube:~#
root@minikube:~# curl 127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
root@minikube:~# curl 10.103.195.43:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@minikube:~#
Now have the service forwarded directly through the node (i.e., the LXD container itself) with a NodePort:
root@minikube:~# kubectl delete service k8snginx
service "k8snginx" deleted
root@minikube:~#
root@minikube:~# kubectl expose pod k8snginx --type="NodePort" --port=80
service/k8snginx exposed
root@minikube:~#
root@minikube:~# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
k8snginx   1/1     Running   0          37m   172.17.0.4   minikube   <none>           <none>
root@minikube:~#
root@minikube:~# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
k8snginx     NodePort    10.109.173.247   <none>        80:31590/TCP   20s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        16h
root@minikube:~#
root@minikube:~# kubectl get service -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
k8snginx     NodePort    10.109.173.247   <none>        80:31590/TCP   57s   run=k8snginx
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        16h   <none>
root@minikube:~#
root@minikube:~# curl 10.236.247.236:31590
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@minikube:~#
root@minikube:~# curl 127.0.0.1:31590
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@minikube:~#
omniware@omniware-VGN-Z46TD-B:~$ lxc list minikube
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| minikube | RUNNING | 172.17.0.1 (docker0) | fd42:b925:969f:cf73:216:3eff:fe63:39d0 (eth0) | PERSISTENT | 0 |
| | | 10.236.247.236 (eth0) | | | |
+----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
omniware@omniware-VGN-Z46TD-B:~$
omniware@omniware-VGN-Z46TD-B:~$ nmap -p- 10.236.247.236
Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-30 16:47 CST
Nmap scan report for 10.236.247.236
Host is up (0.00027s latency).
Not shown: 65526 closed ports
PORT STATE SERVICE
22/tcp open ssh
2379/tcp open etcd-client
2380/tcp open etcd-server
8443/tcp open https-alt
10250/tcp open unknown
10251/tcp open unknown
10252/tcp open apollo-relay
10256/tcp open unknown
31590/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 3.07 seconds
omniware@omniware-VGN-Z46TD-B:~$
At this point we can take a look at how the corresponding YAML could be written:
root@minikube:~# kubectl get service k8snginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-30T08:39:15Z"
  labels:
    run: k8snginx
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:run: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:run: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-03-30T08:39:15Z"
  name: k8snginx
  namespace: default
  resourceVersion: "125081"
  selfLink: /api/v1/namespaces/default/services/k8snginx
  uid: 139d14fb-04b0-4041-ae4f-ead1385db6d2
spec:
  clusterIP: 10.111.128.150
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31817
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: k8snginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
root@minikube:~#
This YAML output reflects the more common way of deploying a Kubernetes service declaratively, and it can be simplified before being reused.
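As a rough idea of what "simplified" could mean, the same service can be recreated from just the essential fields and piped into kubectl apply. This is only a sketch based on the fields shown above, not a file from the original notes (the nodePort is left out so Kubernetes picks one):

# Minimal NodePort service equivalent to the generated one above
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: k8snginx
  labels:
    run: k8snginx
spec:
  type: NodePort
  selector:
    run: k8snginx
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
EOF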
Finally, take a look at the memory consumption; Minikube is actually fairly resource-hungry.
omniware@omniware-VGN-Z46TD-B:~$ lxc info minikube
Name: minikube
Remote: unix://
Architecture: x86_64
Created: 2020/03/29 15:50 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 25486
Ips:
  docker0:     inet   172.17.0.1
  docker0:     inet6  fe80::42:74ff:fe46:1fb4
  eth0:        inet   10.236.247.236  vethQGEYV9
  eth0:        inet6  fd42:b925:969f:cf73:216:3eff:fe63:39d0  vethQGEYV9
  eth0:        inet6  fe80::216:3eff:fe63:39d0  vethQGEYV9
  lo:          inet   127.0.0.1
  lo:          inet6  ::1
  veth0ec60fa: inet6  fe80::5c89:bff:fedf:10fc
  veth4b04c7b: inet6  fe80::df:10ff:fee7:356d
  vethc14feb7: inet6  fe80::14b3:36ff:fe0d:fdf2
Resources:
  Processes: 365
  CPU usage:
    CPU usage (in seconds): 10639
  Memory usage:
    Memory (current): 1.86GB
    Memory (peak): 1.94GB
  Network usage:
    docker0:
      Bytes received: 16.97MB
      Bytes sent: 92.46MB
      Packets received: 246588
      Packets sent: 262759
    eth0:
      Bytes received: 709.51MB
      Bytes sent: 21.80MB
      Packets received: 601359
      Packets sent: 347443
    lo:
      Bytes received: 1.64GB
      Bytes sent: 1.64GB
      Packets received: 8705578
      Packets sent: 8705578
    veth0ec60fa:
      Bytes received: 10.21MB
      Bytes sent: 46.22MB
      Packets received: 123274
      Packets sent: 131240
    veth4b04c7b:
      Bytes received: 3.89kB
      Bytes sent: 3.11kB
      Packets received: 22
      Packets sent: 43
    vethc14feb7:
      Bytes received: 10.20MB
      Bytes sent: 46.24MB
      Packets received: 123198
      Packets sent: 131499
omniware@omniware-VGN-Z46TD-B:~$
The simplest deployment looks fine, so this setup should be usable without much trouble.
That said, getting Kubernetes up and running inside LXC/LXD is really not that easy.
References
I went through far too many pages along the way, most of them generic Kubernetes tutorials; only the directly relevant ones are listed below.
Setting up Minikube in an LXD environment
Run Minikube in LXD · GitHub
corneliusweig/kubernetes-lxd: A step-by-step guide to get kubernetes running inside an LXC container
Setup Minikube inside LXC Container to create as many one-node Kubernetes clusters as we want.
Installing to a local machine | Charmed Kubernetes documentation | Ubuntu
Kubernetes Lxd - Awesome Open Source
Tutorial Part 1: Kubernetes up and running on LXC/LXD - ITNEXT Medium : kubernetes
Tutorial Part 2: Kubernetes up and running on LXC/LXD - ITNEXT Medium : kubernetes
Run kubernetes inside LXC container - kvaps - Medium
将kubernetes跑在本地LXD容器中(by quqi99) - 技术并艺术着 - CSDN博客
Running Kubernetes inside LXD | Ubuntu
Kubernetes cluster on LXD/LXC – Syed Sayem
Minikube in lxc - bat9r GitHub Gist
CDK installation with LXD on Ubuntu 18.04 (Bionic)— Cheat Sheet
Cluster-in-a-box: How to deploy one or more Kubernetes clusters to a single box. | Hacker Noon
Is there any plan to support MiniKube with LXD as VM driver (not container engin... | Hacker News
Tutorial Part 1: Kubernetes up and running on LXC/LXD - Medium : kubernetes
Install LXD for Linux using the Snap Store | Snapcraft
Minikube basic tutorials
Kubernetes 元件介紹與 minikube 安裝教學 - Soul & Shell Blog
Kubernetes 與 minikube 入門教學 - TechBridge 技術共筆部落格
18.04 - How to install Kubernetes Locally via Minikube - Ask Ubuntu
Installing Kubernetes with Minikube - Kubernetes
Install Minikube - Kubernetes
史帝芬心得筆記: 安裝 Minikube
Kubernetes--minikube安裝 使用CentOS 7 - Junior Note
Minikube Installation Guide for CentOS - xtivia
How to Use Minikube to Create Kubernetes Clusters and Deploy Applications - Container Journal
Getting started with Kubernetes and Docker with minikube - Medium
Learning Kubernetes: Getting Started with Minikube – Sweetcode.io
[Day 2] Minikube 安裝與配置 | Kubernetes 30天學習筆記系列 - iT 邦幫忙::一起幫忙解決難題,拯救 IT 人的一天
CentOS 7.5 安裝 Minikube ~ 不自量力 の Weithenn
使用 Minikube 来部署本地 kubernetes 多节点集群 - ShenHengheng's Blog (https://k2r2bai.com/2019/01/22/kubernetes/deploy/minikube-multi-node/)