
Deploying an Akash Provider with kubeadm, containerd, and gvisor

2021-11-17 | Akash | Source: 区块链网络 (Blockchain Network)


This article walks you through deploying an Akash Provider with kubeadm.

Article update history:

July 12, 2021: Initially published for Akash 0.12.0.

October 30, 2021: Updated for Akash 0.14.0, with a multi-master/worker node setup. Also added HA support through a very simple round-robin DNS A record scheme.

1. Introduction

This article walks you through the configuration and setup steps required to run an Akash Provider on your own Linux distribution (I used x86_64 Ubuntu Focal). The steps also cover registering and activating the Akash Provider.

We will be using containerd, so you don't need to install Docker!

I did not use kubespray as the official documentation suggests, because I wanted more control over every component in the system and didn't want to install Docker.

Translator's note:

Q: What are containerd and docker? What's the difference between them?

A: Both are runtime components responsible for managing the lifecycle of images and containers.

As Kubernetes container runtimes, their call chains differ:

kubelet -> dockershim (inside the kubelet process) -> dockerd -> containerd

kubelet -> CRI plugin (inside the containerd process) -> containerd

For a more detailed description of the differences, see this article: https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci

2. Preparation

2.1 Set the hostname

Set a meaningful hostname:


hostnamectl set-hostname akash-single.domainXYZ.com

If you are going the recommended way, with 3 master (control-plane) nodes and N worker nodes, you can set the hostnames like below.

Strongly recommended: use hostnames like these if you are deploying multiple master and worker nodes.

# master nodes (control plane)
akash-master-01.domainXYZ.com
akash-master-02.domainXYZ.com
akash-master-03.domainXYZ.com
# worker nodes
akash-worker-01.domainXYZ.com
...
akash-worker-NN.domainXYZ.com

In the examples below I am using the actual address of my Akash Provider, *.ingress.nixaid.com. In your setup, replace it with your own domain name.

2.2 Enable netfilter and kernel IP forwarding (routing)

Note: kube-proxy requires net.bridge.bridge-nf-call-iptables to be enabled.


cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

2.3 Disable swap

It is recommended to disable swap and remove the swap file.

swapon -s
swapoff -a
sed -i '/swap/d' /etc/fstab
rm /swapfile

Translator's note:

Q: What is a swap partition? Why disable it?

A: A swap partition uses part of the disk as memory. It can temporarily relieve memory pressure, and Linux enables swap by default.

Personal take: while swap can paper over a temporary memory shortage, the disk I/O it introduces hurts application performance and stability, so it is not a long-term solution. For quality of service, providers should disable swap; customers who run out of memory can temporarily request more.

The current Kubernetes versions do not support swap. After a long discussion, the K8s community does intend to support it, but only as an experimental feature so far.

The K8s community discussion on enabling swap: https://github.com/kubernetes/kubernetes/issues/53533

2.4 Install containerd

wget https://github.com/containerd/containerd/releases/download/v1.5.7/containerd-1.5.7-linux-amd64.tar.gz
tar xvf containerd-1.5.7-linux-amd64.tar.gz -C /usr/local/

wget -O /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/v1.5.7/containerd.service

mkdir /etc/containerd

systemctl daemon-reload
systemctl start containerd
systemctl enable containerd

2.5 Install the CNI plugins

Container Network Interface (CNI): required by most pod networks.

cd
mkdir -p /etc/cni/net.d /opt/cni/bin
CNI_ARCH=amd64
CNI_VERSION=1.0.1
CNI_ARCHIVE=cni-plugins-linux-${CNI_ARCH}-v${CNI_VERSION}.tgz
wget https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/${CNI_ARCHIVE}
tar -xzf $CNI_ARCHIVE -C /opt/cni/bin

Translator's note:

CNI is the unified standard for container networking. It lets container management platforms (k8s, mesos, etc.) configure networking for containers through the same interface, using all kinds of network plugins (flannel, calico, weave, etc.).

2.6 Install crictl

Kubelet Container Runtime Interface (CRI): required by kubeadm and kubelet.

INSTALL_DIR=/usr/local/bin
mkdir -p $INSTALL_DIR
CRICTL_VERSION="v1.22.0"
CRICTL_ARCHIVE="crictl-${CRICTL_VERSION}-linux-amd64.tar.gz"
wget "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/${CRICTL_ARCHIVE}"
tar -xzf $CRICTL_ARCHIVE -C $INSTALL_DIR
chown -Rh root:root $INSTALL_DIR

Update the contents of /etc/crictl.yaml:


cat > /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
#debug: true
pull-image-on-create: true
disable-pull-on-run: false
EOF

2.7 Install runc

runc is the default OCF runtime used by non-Akash deployments (i.e. the standard Kubernetes containers such as the kube, etcd, and calico pods).

apt install -y runc

Translator's note:

runc: a CLI tool that creates and runs containers according to the OCI standard. Put simply, creating, running, and destroying a container ultimately comes down to calling runc.

OCF: Open Container Format, the container format part of the OCI (Open Container Initiative) standard.

2.8 (Worker nodes only) Install gVisor (runsc) and runc

gVisor (runsc) is an application kernel for containers that provides efficient defense in depth anywhere. This gist compares several container runtimes (Kata, gVisor, runc, rkt, etc.): https://gist.github.com/miguelmota/8082507590d55c400c5dc520a43e14a1

apt -y install software-properties-common
curl -fsSL https://gvisor.dev/archive.key | apt-key add -
add-apt-repository "deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main"
apt update
apt install -y runsc

2.9 Configure containerd to use gVisor

Since Kubernetes will be using containerd (you will see this later when we bootstrap it with kubeadm), we need to configure it to use the gVisor runtime.

On master nodes that keep the NoSchedule taint, drop the "runsc" entry (the last two lines of the config below).

Update /etc/containerd/config.toml:

cat > /etc/containerd/config.toml <<'EOF'
# version MUST be present, otherwise containerd won't pick the runsc!
version = 2

# disabled_plugins = ["cri"]

[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF

And restart the containerd service:

systemctl restart containerd

gVisor (runsc) does not yet work with systemd-cgroup or cgroup v2 (note that the kubeadm config below sets cgroupDriver: cgroupfs). If you want to follow along, these are the two open issues: systemd-cgroup support #193, and cgroup v2 support in runc #3481.

3. Install Kubernetes

Translator's note:

Q: What is Kubernetes?

A: Kubernetes, also known as K8s, is an open-source system for automatically deploying, scaling, and managing containerized applications.

3.1 Install the latest stable kubeadm, kubelet, and kubectl, and add the kubelet systemd service

INSTALL_DIR=/usr/local/bin
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
cd $INSTALL_DIR
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

RELEASE_VERSION="v0.9.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${INSTALL_DIR}:g" | tee /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${INSTALL_DIR}:g" | tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

cd
systemctl enable kubelet

3.2 Bootstrap the Kubernetes cluster with kubeadm

Feel free to adjust the pod and service subnets and other control-plane configuration as needed. For more, see: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/

Make sure the kubernetesVersion field is set to the version of the binaries you downloaded: https://dl.k8s.io/release/stable.txt

You only need to run the kubeadm init command on ONE master node! Later you will join the other master (control-plane) nodes and worker nodes with kubeadm join.

If you plan to scale out your master nodes, uncomment controlPlaneEndpoint for a multi-master deployment. Set controlPlaneEndpoint to the same value as --cluster-public-hostname; that hostname should resolve to the public IP of the Kubernetes cluster.

Pro tip: you can register the same DNS A record multiple times, pointing at multiple Akash master nodes, and then set controlPlaneEndpoint to that record; this way you get balanced round-robin DNS!
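For illustration, such a round-robin record simply returns several A records for one name (a hypothetical example using documentation-range IPs; compare the real wildcard example in section 9.1):

$ dig +noall +answer akash-master-lb.domainXYZ.com
akash-master-lb.domainXYZ.com. 300 IN A 192.0.2.10
akash-master-lb.domainXYZ.com. 300 IN A 192.0.2.11
akash-master-lb.domainXYZ.com. 300 IN A 192.0.2.12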


cat > kubeadm-config.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock  # --cri-socket=unix:///run/containerd/containerd.sock
  ## kubeletExtraArgs:
  ##   root-dir: /mnt/data/kubelet
  imagePullPolicy: "Always"
localAPIEndpoint:
  advertiseAddress: "0.0.0.0"
  bindPort: 6443
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: "stable"
#controlPlaneEndpoint: "akash-master-lb.domainXYZ.com:6443"
networking:
  podSubnet: "10.233.64.0/18"  # --pod-network-cidr, taken from kubespray
  serviceSubnet: "10.233.0.0/18"  # --service-cidr, taken from kubespray
EOF

Download the necessary dependencies on your master (control-plane) node, run the preflight checks, and pre-pull the images:

apt -y install ethtool socat conntrack
kubeadm init phase preflight --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml

Now, if you are ready to initialize the single-node setup (where your only master node will also run pods):

kubeadm init --config kubeadm-config.yaml

If you plan to run a multi-master deployment, make sure to add --upload-certs to the kubeadm init command, like this:

kubeadm init --config kubeadm-config.yaml --upload-certs

Alternatively, you can run kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml first, and then kubeadm token create --config kubeadm-config.yaml --print-join-command to get the kubeadm join command!
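Spelled out as commands (the same pair is used again in section 9 when scaling out):

kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml
kubeadm token create --config kubeadm-config.yaml --print-join-command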

A single-master deployment does not need the upload-certs step.

If you see "Your Kubernetes control-plane has initialized successfully!", everything went well and your Kubernetes control-plane node is now in service!

kubeadm also prints the kubeadm join command with a --token. Keep that command safe: you will need it to join more nodes (worker nodes, data nodes) depending on the kind of architecture you are after.

With a multi-master deployment, you will see the kubeadm join command with the extra --control-plane --certificate-key arguments. Make sure to use them when joining more master nodes to the cluster!

3.3 Check your node

To talk to the Kubernetes cluster with kubectl, you can either set the KUBECONFIG variable or create a symlink pointing at ~/.kube/config. Keep your /etc/kubernetes/admin.conf safe: it is your Kubernetes admin key and allows full control over your K8s cluster.

The Akash Provider service will use this config; you will see how later.

(Multi-master deployments) Newly joined master nodes automatically receive the admin.conf file from the original master node. More backups for you!

mkdir ~/.kube
ln -sv /etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes -o wide

3.4 Install the Calico network

Without a network plugin, Kubernetes is unusable:

$ kubectl describe node akash-single.domainXYZ.com | grep -w Ready
Ready False Wed, 28 Jul 2021 09:47:09 +0000 Wed, 28 Jul 2021 09:46:52 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

cd
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

Translator's note:

Q: What is Calico?

A: Calico is an open-source networking and network security solution for containers, virtual machines, and hosts. It can be used on PaaS or IaaS platforms such as Kubernetes, OpenShift, Docker EE, and OpenStack.

3.5 (Optional) Allow the master nodes to schedule pods

By default, for security reasons, your K8s cluster will not schedule pods on the control-plane (master) nodes. Either remove the taints on the master nodes with the kubectl taint nodes command so pods can be scheduled on them, or join worker nodes that will run calico with the kubeadm join command (but first make sure you have done the preparation steps on them: installing the CNI plugins, installing crictl, and configuring Kubernetes to use gVisor).

If you are running a single-master deployment, remove the taints on your master node:

$ kubectl describe node akash-single.domainXYZ.com | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Translator's note:

A taint is set on a node and keeps pods away from that node, unless a pod specifies a matching toleration.

Example 1 (add): kubectl taint nodes nodeName key1=value1:NoSchedule

Example 2 (remove): kubectl taint nodes nodeName key1-

NoSchedule: pods must not be scheduled there.

3.6 Check your nodes and pods

$ kubectl get nodes -o wide --show-labels
NAME                         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME    LABELS
akash-single.domainXYZ.com   Ready    control-plane,master   4m24s   v1.22.1   149.202.82.160   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.4.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=akash-single.domainXYZ.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=

$ kubectl describe node akash-single.domainXYZ.com | grep -w Ready
Ready True Wed, 28 Jul 2021 09:51:09 +0000 Wed, 28 Jul 2021 09:51:09 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled

$ kubectl get pods -A
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-kkszw              1/1     Running   0          3m33s
kube-system   calico-node-ghgz8                                     1/1     Running   0          3m33s
kube-system   coredns-558bd4d5db-2shqz                              1/1     Running   0          4m7s
kube-system   coredns-558bd4d5db-t9r75                              1/1     Running   0          4m7s
kube-system   etcd-akash-single.domainXYZ.com                       1/1     Running   0          4m26s
kube-system   kube-apiserver-akash-single.domainXYZ.com             1/1     Running   0          4m24s
kube-system   kube-controller-manager-akash-single.domainXYZ.com    1/1     Running   0          4m23s
kube-system   kube-proxy-72ntn                                      1/1     Running   0          4m7s
kube-system   kube-scheduler-akash-single.domainXYZ.com             1/1     Running   0          4m21s

3.7 (Optional) Install NodeLocal DNSCache

If you are running an akash version with this patch, you do not have to install NodeLocal DNSCache.

Patch: https://github.com/arno01/akash/commit/5c81676bb8ad9780571ff8e4f41e54565eea31fd

PR: https://github.com/ovrclk/akash/pull/1440 Issue: https://github.com/ovrclk/akash/issues/1339#issuecomment-889293170

Use NodeLocal DNSCache for better performance: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/

Installing the NodeLocal DNSCache service is simple:

kubedns=$(kubectl get svc kube-dns -n kube-system -o 'jsonpath={.spec.clusterIP}')
domain="cluster.local"
localdns="169.254.25.10"

wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.22.1/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g;s/__PILLAR__DNS__DOMAIN__/$domain/g;s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml

kubectl create -f nodelocaldns.yaml

Modify clusterDNS in the /var/lib/kubelet/config.yaml file to use 169.254.25.10 (NodeLocal DNSCache) instead of the default 10.233.0.10, and restart the kubelet service:
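A one-liner for that edit (the same substitution this guide uses again in section 9 when scaling out; adjust the addresses if yours differ):

sed -i 's/10.233.0.10/169.254.25.10/g' /var/lib/kubelet/config.yaml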

systemctl restart kubelet

To make sure you are using NodeLocal DNSCache, create a pod and check that its nameserver is 169.254.25.10:

$ cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 169.254.25.10
options ndots:5
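If you don't want to write a manifest just for this check, a throwaway pod works too (a minimal sketch; busybox:1.28 is an arbitrary small image):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- cat /etc/resolv.conf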

3.8 (Optional) IPVS mode

Note: cross-service communication (container X to service Y within the same pod) does not work in IPVS mode because of this line in the "akash-deployment-restrictions" network policy: https://github.com/ovrclk/akash/blob/7c39ea403/provider/cluster/kube/builder.go#L599. There may be another way to make it work, though: you could try a kubespray deployment with the kube_proxy_mode toggle enabled and see whether it works that way.

https://www.linkedin.com/pulse/iptables-vs-ipvs-kubernetes-vivek-grover/ https://forum.akash.network/t/akash-provider-support-ipvs-kube-proxy-mode/720

If you ever want to run kube-proxy in IPVS mode (instead of the default IPTABLES mode), repeat the steps from the "Install NodeLocal DNSCache" section above, except that you modify the nodelocaldns.yaml file with the following command instead:

sed-i\"s/__PILLAR__LOCAL__DNS__/$localdns/g;s/__PILLAR__DNS__DOMAIN__/$domain/g;s/,__PILLAR__DNS__SERVER__//g;s/__PILLAR__CLUSTER__DNS__/$kubedns/g\"nodelocaldns.yaml

Switch kube-proxy to IPVS mode by setting mode: to ipvs:

kubectl edit configmap kube-proxy -n kube-system

And restart kube-proxy:

kubectl -n kube-system delete pod -l k8s-app=kube-proxy
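If you prefer a non-interactive change over kubectl edit, a sed pipe is one option (a sketch that assumes the configmap still carries the default mode: "" value):

kubectl -n kube-system get configmap kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -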

Translator's note:

kube-proxy has three proxy modes: userspace, iptables, and ipvs.

IPVS (IP Virtual Server) is built on top of Netfilter and is part of the Linux kernel, providing transport-layer load balancing.

3.9 Configure Kubernetes to use gVisor

Set up the gvisor (runsc) Kubernetes RuntimeClass. Deployments created by the Akash Provider will use it by default.

cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF

3.10 Check that gVisor and K8s DNS are working as expected

cat > dnstest.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  runtimeClassName: gvisor
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

$ kubectl apply -f dnstest.yaml

$ kubectl exec -i -t dnsutils -- sh
/ # dmesg
[ 0.000000] Starting gVisor...
[ 0.459332] Reticulating splines...
[ 0.868906] Synthesizing system calls...
[ 1.330219] Adversarially training Redcode AI...
[ 1.465972] Waiting for children...
[ 1.887919] Generating random numbers by fair dice roll...
[ 2.302806] Accelerating teletypewriter to 9600 baud...
[ 2.729885] Checking naughty and nice process list...
[ 2.999002] Granting licence to kill(2)...
[ 3.116179] Checking naughty and nice process list...
[ 3.451080] Creating process schedule...
[ 3.658232] Ready!
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope global dynamic
    inet6 ::1/128 scope global dynamic
2: eth0: <UP,LOWER_UP> mtu 1480
    link/ether 9e:f1:a0:ee:8a:55 brd ff:ff:ff:ff:ff:ff
    inet 10.233.85.133/32 scope global dynamic
    inet6 fe80::9cf1:a0ff:feee:8a55/64 scope global dynamic
/ # ip r
127.0.0.0/8 dev lo
::1 dev lo
169.254.1.1 dev eth0
fe80::/64 dev eth0
default via 169.254.1.1 dev eth0
/ # netstat -nr
Kernel IP routing table
Destination   Gateway       Genmask           Flags  MSS Window  irtt Iface
169.254.1.1   0.0.0.0       255.255.255.255   U      0 0         0    eth0
0.0.0.0       169.254.1.1   0.0.0.0           UG     0 0         0    eth0
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.233.0.10
options ndots:5
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=42 time=5.671 ms
^C
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 5.671/5.671/5.671 ms
/ # ping google.com
PING google.com (172.217.13.174): 56 data bytes
64 bytes from 172.217.13.174: seq=0 ttl=42 time=85.075 ms
^C
--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 85.075/85.075/85.075 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.233.0.10
Address:   10.233.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.233.0.1

/ # exit

$ kubectl delete -f dnstest.yaml

If you see "Starting gVisor...", Kubernetes is able to run containers using gVisor (runsc).

If you are using NodeLocal DNSCache, you will see the nameserver 169.254.25.10 instead of 10.233.0.10.

Once you apply network-policy-default-ns-deny.yaml, the network test will stop working (i.e. ping 8.8.8.8 will fail); this is expected.

3.11 (Optional) Encrypt etcd

etcd is a consistent and highly available key-value store used as Kubernetes' backing store for all cluster data. Kubernetes stores all of its data in etcd: its configuration data, state, and metadata. Kubernetes is a distributed system, so it needs a distributed data store like etcd. etcd lets any node in the Kubernetes cluster read and write data.

Warning: storing the raw encryption key in the EncryptionConfig only moderately improves your security posture compared to no encryption at all. Use the kms provider for better security.

$ mkdir /etc/kubernetes/encrypt

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/kubernetes/encrypt/config.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Update your /etc/kubernetes/manifests/kube-apiserver.yaml as follows, so kube-apiserver knows where to read the key from:

$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
$ diff -Nur kube-apiserver.yaml.orig /etc/kubernetes/manifests/kube-apiserver.yaml
--- kube-apiserver.yaml.orig    2021-07-28 10:05:38.198391788 +0000
+++ /etc/kubernetes/manifests/kube-apiserver.yaml    2021-07-28 10:13:51.975308872 +0000
@@ -41,6 +41,7 @@
     - --service-cluster-ip-range=10.233.0.0/18
     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
+    - --encryption-provider-config=/etc/kubernetes/encrypt/config.yaml
     image: k8s.gcr.io/kube-apiserver:v1.22.1
     imagePullPolicy: IfNotPresent
     livenessProbe:
@@ -95,6 +96,9 @@
     - mountPath: /usr/share/ca-certificates
       name: usr-share-ca-certificates
       readOnly: true
+    - mountPath: /etc/kubernetes/encrypt
+      name: k8s-encrypt
+      readOnly: true
   hostNetwork: true
   priorityClassName: system-node-critical
   volumes:
@@ -122,4 +126,8 @@
       path: /usr/share/ca-certificates
       type: DirectoryOrCreate
     name: usr-share-ca-certificates
+  - hostPath:
+      path: /etc/kubernetes/encrypt
+      type: DirectoryOrCreate
+    name: k8s-encrypt
 status: {}

When you save the /etc/kubernetes/manifests/kube-apiserver.yaml file, kube-apiserver restarts automatically. (This can take a minute or two; be patient.)

$ crictl ps | grep apiserver
10e6f4b409a4b    106ff58d43082    36 seconds ago    Running    kube-apiserver    0    754932bb659c5

Don't forget to do the same on all of your Kubernetes nodes!

Encrypt all existing secrets using the key you just added:

kubectl get secrets -A -o json | kubectl replace -f -
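To verify the encryption took effect, one option (following the upstream encrypt-data guide; this sketch assumes etcdctl is installed and kubeadm's default PKI paths) is to read a secret straight from etcd and look for the k8s:enc:aescbc:v1: prefix:

kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key get /registry/secrets/default/secret1 | hexdump -C | head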

3.12 (Optional) IPv6 support

If you want to enable IPv6 support in your Kubernetes cluster, see:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/dual-stack-support/

4. Configure Kubernetes for the Akash Provider service

If you are updating your Akash provider from 0.12 to 0.14, make sure to follow these steps: https://github.com/ovrclk/akash/blob/9e1a7aa5ccc894e89d84d38485b458627a287bae/script/provider_migrate_to_hostname_operator.md

mkdir akash-provider
cd akash-provider

wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/pkg/apis/akash.network/v1/crd.yaml
kubectl apply -f ./crd.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/pkg/apis/akash.network/v1/provider_hosts_crd.yaml
kubectl apply -f ./provider_hosts_crd.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/_docs/kustomize/networking/network-policy-default-ns-deny.yaml
kubectl apply -f ./network-policy-default-ns-deny.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/_run/ingress-nginx-class.yaml
kubectl apply -f ./ingress-nginx-class.yaml

wget https://raw.githubusercontent.com/ovrclk/akash/mainnet/main/_run/ingress-nginx.yaml
kubectl apply -f ./ingress-nginx.yaml

# NOTE: in this example the Kubernetes node is called "akash-single.domainXYZ.com" and it's going to be the ingress node too.
# In the perfect environment that would not be the master (control-plane) node, but rather the worker nodes!

kubectl label nodes akash-single.domainXYZ.com akash.network/role=ingress

# Check the label got applied:

kubectl get nodes -o wide --show-labels
NAME                         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME    LABELS
akash-single.domainXYZ.com   Ready    control-plane,master   10m   v1.22.1   149.202.82.160   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.4.8   akash.network/role=ingress,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=akash-single.domainXYZ.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=

git clone --depth 1 -b mainnet/main https://github.com/ovrclk/akash.git
cd akash
kubectl apply -f _docs/kustomize/networking/namespace.yaml
kubectl kustomize _docs/kustomize/akash-services/ | kubectl apply -f -

cat >> _docs/kustomize/akash-hostname-operator/kustomization.yaml <<'EOF'
images:
  - name: ghcr.io/ovrclk/akash:stable
    newName: ghcr.io/ovrclk/akash
    newTag: 0.14.0
EOF

kubectl kustomize _docs/kustomize/akash-hostname-operator | kubectl apply -f -

4.1 Get a wildcard DNS record

In my case I am going to use <anything>.ingress.nixaid.com, which will resolve to the IPs of my Kubernetes nodes. Ideally only the worker nodes!

A *.ingress.nixaid.com record resolves to 149.202.82.160

And akash-provider.nixaid.com is going to resolve to the IP of the host where I am going to run the Akash Provider service. (The Akash Provider service listens on port 8443/tcp.)

Pro tip: you can register the same wildcard DNS A record multiple times, pointing at multiple Akash worker nodes, and it will get balanced through round-robin DNS (see the dig example in section 9.1)!

5. Create the Akash Provider on the Akash blockchain

Now that we have our Kubernetes cluster configured, up, and running, it is time to get the Akash Provider running.

Note: you don't have to run the Akash Provider service directly on the Kubernetes cluster. You can run it anywhere. It only needs to be able to reach your Kubernetes cluster over the Internet.

5.1 Create the akash user

We will run the akash provider under the akash user.

useradd akash -m -U -s /usr/sbin/nologin
mkdir /home/akash/.kube
cp /etc/kubernetes/admin.conf /home/akash/.kube/config
chown -Rh akash:akash /home/akash/.kube

5.2 Install the Akash client

su -s /bin/bash - akash

wget https://github.com/ovrclk/akash/releases/download/v0.14.0/akash_0.14.0_linux_amd64.zip
unzip akash_0.14.0_linux_amd64.zip
mv /home/akash/akash_0.14.0_linux_amd64/akash /usr/bin/
chown root:root /usr/bin/akash

5.3 Configure the Akash client

su -s /bin/bash - akash

mkdir ~/.akash

export KUBECONFIG=/home/akash/.kube/config
export PROVIDER_ADDRESS=akash-provider.nixaid.com
export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | grep -Ev 'forbole|skynetvalidators|162.55.94.246' | shuf -n 1)"
export AKASH_CHAIN_ID="$(curl -s "$AKASH_NET/chain-id.txt")"
export AKASH_KEYRING_BACKEND=file
export AKASH_PROVIDER_KEY=default
export AKASH_FROM=$AKASH_PROVIDER_KEY

Check the variables:

$ set | grep ^AKASH
AKASH_CHAIN_ID=akashnet-2
AKASH_FROM=default
AKASH_KEYRING_BACKEND=file
AKASH_NET=https://raw.githubusercontent.com/ovrclk/net/master/mainnet
AKASH_NODE=http://135.181.181.120:28957
AKASH_PROVIDER_KEY=default

Now create the default key:

$ akash keys add $AKASH_PROVIDER_KEY --keyring-backend=$AKASH_KEYRING_BACKEND

Enter keyring passphrase:
Re-enter keyring passphrase:

- name: default
  type: local
  address: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
...

Make sure to keep the mnemonic seed somewhere safe, as it is the only way to recover the account and its funds!

If you want to restore your key from the mnemonic seed, add the --recover argument to the akash keys add ... command.
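For example, with the same variables as above:

akash keys add $AKASH_PROVIDER_KEY --keyring-backend=$AKASH_KEYRING_BACKEND --recover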

5.4 Configure the Akash provider

$ cat provider.yaml
host: https://akash-provider.nixaid.com:8443
attributes:
  - key: region
    value: europe  ## change this to your region!
  - key: host
    value: akash  ## feel free to change this to whatever you like
  - key: organization  # optional
    value: whatever-your-Org-is  ## change this to your org.
  - key: tier  # optional
    value: community

5.5 Fund your Akash provider's wallet

You will need about 10 AKT (Akash Token) to get started.

Your wallet must have sufficient funds, since bidding on orders on the blockchain requires a 5 AKT deposit. The deposit is returned in full after the bid is won or lost.

Buy AKT at one of the exchanges mentioned here: https://akash.network/token/

Query the wallet balance:

# Put here your address which you've got when created one with the "akash keys add" command.
export AKASH_ACCOUNT_ADDRESS=akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0

$ akash \
  --node "$AKASH_NODE" \
  query bank balances "$AKASH_ACCOUNT_ADDRESS"

Conversion: 1 AKT = 1000000 uakt (AKT × 10^6)
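For instance, the 5 AKT bid deposit expressed in uakt (a quick shell check):

echo $((5 * 1000000))uakt   # prints 5000000uakt, the value passed to --bid-deposit later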

5.6 Register your provider on the Akash network

$ akash tx provider create provider.yaml \
  --from $AKASH_PROVIDER_KEY \
  --keyring-backend=$AKASH_KEYRING_BACKEND \
  --node=$AKASH_NODE \
  --chain-id=$AKASH_CHAIN_ID \
  --gas-prices="0.025uakt" \
  --gas="****" \
  --gas-adjustment=1.15

If you want to change provider.yaml later, run the akash tx provider update command with the same arguments after updating the file.

After registering your provider on the Akash network, you should be able to see your host:

$ akash \
  --node "$AKASH_NODE" \
  query provider list -o json | jq -r '.providers[] | [.attributes[].value, .host_uri, .owner] | @csv' | sort -d
"australia-east-akash-provider","https://provider.akashprovider.com","akash1ykxzzu332txz8zsfew7z77wgsdyde75wgugntn"
"equinix-metal-ams1","akash","mn2-0","https://provider.ams1p0.mainnet.akashian.io:8443","akash1ccktptfkvdc67msasmesuy5m7gpc76z75kukpz"
"equinix-metal-ewr1","akash","mn2-0","https://provider.ewr1p0.mainnet.akashian.io:8443","akash1f6gmtjpx4r8qda9nxjwq26fp5mcjyqmaq5m6j7"
"equinix-metal-sjc1","akash","mn2-0","https://provider.sjc1p0.mainnet.akashian.io:8443","akash10cl5rm0cqnpj45knzakpa4cnvn5amzwp4lhcal"
"equinix-metal-sjc1","akash","mn2-0","https://provider.sjc1p1.mainnet.akashian.io:8443","akash1cvpefa7pw8qy0u4euv497r66mvgyrg30zv0wu0"
"europe","nixaid","https://akash-provider.nixaid.com:8443","akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0"
"us-west-demo-akhil","dehazelabs","https://73.157.111.139:8443","akash1rt2qk45a75tjxzathkuuz6sq90jthekehnz45z"
"us-west-demo-caleb","https://provider.akashian.io","akash1rdyul52yc42vd8vhguc0t9ryug9ftf2zut8jxa"
"us-west-demo-daniel","https://daniel1q84.iptime.org","akash14jpkk4n5daspcjdzsrylgw38lj9xug2nznqnu2"
"us-west","https://ssymbiotik.ekipi365.com","akash1j862g3efcw5xcvn0402uwygrwlzfg5r02w9jw5"

5.7 Create the provider certificate

You must issue a transaction to the blockchain to create a certificate associated with your provider:

akash tx cert create server $PROVIDER_ADDRESS \
  --chain-id $AKASH_CHAIN_ID \
  --keyring-backend $AKASH_KEYRING_BACKEND \
  --from $AKASH_PROVIDER_KEY \
  --node=$AKASH_NODE \
  --gas-prices="0.025uakt" --gas="****" --gas-adjustment=1.15

6. Start the Akash Provider

The Akash Provider needs the Kubernetes admin config. Earlier we already copied it to /home/akash/.kube/config.

Create a start-provider.sh file to start the Akash Provider. But before that, create the key-pass.txt file with the password you set when creating the provider key:

echo "Your-passWoRd" | tee /home/akash/key-pass.txt

Make sure --cluster-public-hostname is set to the hostname that resolves to the public IP of the Kubernetes cluster. As you have seen, controlPlaneEndpoint is set to the same hostname.

cat > /home/akash/start-provider.sh <<'EOF'
#!/usr/bin/env bash

export AKASH_NET="https://raw.githubusercontent.com/ovrclk/net/master/mainnet"
export AKASH_NODE="$(curl -s "$AKASH_NET/rpc-nodes.txt" | grep -Ev 'forbole|skynetvalidators|162.55.94.246' | shuf -n 1)"

cd /home/akash
( sleep 2s; cat key-pass.txt; cat key-pass.txt ) | \
  /home/akash/bin/akash provider run \
  --chain-id akashnet-2 \
  --node $AKASH_NODE \
  --keyring-backend=file \
  --from default \
  --fees 1000uakt \
  --kubeconfig /home/akash/.kube/config \
  --cluster-k8s true \
  --deployment-ingress-domain ingress.nixaid.com \
  --deployment-ingress-static-hosts true \
  --bid-price-strategy scale \
  --bid-price-cpu-scale 0.0011 \
  --bid-price-memory-scale 0.0002 \
  --bid-price-storage-scale 0.0011 \
  --bid-price-endpoint-scale 0 \
  --bid-deposit 5000000uakt \
  --balance-check-period 24h \
  --minimum-balance 5000000 \
  --cluster-node-port-quantity 1000 \
  --cluster-public-hostname akash-master-lb.domainXYZ.com \
  --bid-timeout 10m0s \
  --log_level warn
EOF

Make sure it is executable:

chmod +x /home/akash/start-provider.sh

Create the akash-provider.service systemd unit so the Akash provider starts automatically:

cat > /etc/systemd/system/akash-provider.service <<'EOF'
[Unit]
Description=Akash Provider
After=network.target

[Service]
User=akash
Group=akash
ExecStart=/home/akash/start-provider.sh
KillSignal=SIGINT
Restart=on-failure
RestartSec=15
StartLimitInterval=200
StartLimitBurst=10
#LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the Akash provider:

systemctl daemon-reload
systemctl start akash-provider
systemctl enable akash-provider

Check the logs:

journalctl -u akash-provider --since '5 min ago' -f

The Akash node resources are detected as follows:

D[2021-06-29|11:33:34.190] node resources module=provider-cluster cmp=service cmp=inventory-service node-id=akash-single.domainXYZ.com available-cpu="units:<val:\"7050\"> attributes:<key:\"arch\" value:\"amd64\">" available-memory="quantity:<val:\"32896909312\">" available-storage="quantity:<val:\"47409223602\">"

CPU: 7050 / 1000 = 7 CPUs (the server actually has 8 CPUs; it reserves 1 CPU for whatever else the provider node runs, which is a smart thing to do)

Available memory: 32896909312 / (1024^3) = 30.63 Gi (the server has 32 Gi of RAM)

Available storage: 47409223602 / (1024^3) = 44.15 Gi (this one is a bit odd, since I only have 32 Gi available at the rootfs)
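A quick sanity check of those conversions (the same arithmetic as above, via bc):

echo '7050 / 1000' | bc -l           # ≈ 7 CPUs
echo '32896909312 / 1024^3' | bc -l  # ≈ 30.63 Gi
echo '47409223602 / 1024^3' | bc -l  # ≈ 44.15 Gi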

7. Deploying on our own Akash provider

To configure the Akash client on your client machine, follow the first 4 steps of https://nixaid.com/solana-on-akashnet/ or https://docs.akash.network/guides/deploy

Now that we have our own Akash Provider running, let's try to deploy something on it. I will deploy the echoserver service, which can return interesting information to the client once queried over its HTTP/HTTPS port.

$ cat echoserver.yml
---
version: "2.0"

services:
  echoserver:
    image: gcr.io/google_containers/echoserver:1.10
    expose:
      - port: 8080
        as: 80
        to:
          - global: true
        #accept:
        #  - my.host123.com

profiles:
  compute:
    echoserver:
      resources:
        cpu:
          units: 0.1
        memory:
          size: 128Mi
        storage:
          size: 128Mi
  placement:
    akash:
      #attributes:
      #  host: nixaid
      #signedBy:
      #  anyOf:
      #    - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63"  ## AKASH
      pricing:
        echoserver:
          denom: uakt
          amount: 100

deployment:
  echoserver:
    akash:
      profile: echoserver
      count: 1

Note that I have commented out the signedBy directive. Clients use it to make sure they deploy only on trusted (audited) providers; with it commented out, you can deploy on any Akash provider you want, without requiring signed attributes.

If you do want to use the signedBy directive, you can get your Akash Provider's attributes signed with the akash tx audit attr create command.

akash tx deployment create echoserver.yml \
  --from default \
  --node $AKASH_NODE \
  --chain-id $AKASH_CHAIN_ID \
  --gas-prices="0.025uakt" --gas="****" --gas-adjustment=1.15

Now that the deployment is published on the Akash network, let's look at our Akash Provider.

From the Akash provider's point of view, a successful reservation looks like this:

The "Reservation fulfilled" message is what we are looking for.

Jun 30 00:00:46 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:46.122] syncing sequence cmp=client/broadcaster local=31 remote=31
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.837] order detected module=bidengine-service order=order/akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.867] group fetched module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.867] requesting reservation module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: D[2021-06-30|00:00:53.868] reservation requested module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 resources="group_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829 gseq:1> state:open group_spec:<name:\"akash\" requirements:<signed_by:<>> resources:<resources:<cpu:<units:<val:\"100\">> memory:<quantity:<val:\"134217728\">> storage:<quantity:<val:\"134217728\">> endpoints:<>> count:1 price:<denom:\"uakt\" amount:\"2000\">>> created_at:1585832"
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.868] Reservation fulfilled module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: D[2021-06-30|00:00:53.868] submitting fulfillment module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 price=357uakt
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.932] broadcast response cmp=client/broadcaster response="Response:\n TxHash: BDE0FE6CD12DB3B137482A0E93D4099D7C9F6A5ABAC597E17F6E94706B84CC9A\n Raw Log: []\n Logs: []" err=null
Jun 30 00:00:53 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:53.932] bid complete module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:00:56 akash1 start-provider.sh[1029866]: I[2021-06-30|00:00:56.121] syncing sequence cmp=client/broadcaster local=32 remote=31

Now that the Akash provider's reservation is made, we can see it as a bid on the client side:

$ akash query market bid list --owner=$AKASH_ACCOUNT_ADDRESS --node $AKASH_NODE --dseq $AKASH_DSEQ
...
- bid:
    bid_id:
      dseq: "1585829"
      gseq: 1
      oseq: 1
      owner: akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h
      provider: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    created_at: "1585836"
    price:
      amount: "357"
      denom: uakt
    state: open
  escrow_account:
    balance:
      amount: "50000000"
      denom: uakt
    id:
      scope: bid
      xid: akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    owner: akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
    settled_at: "1585836"
    state: open
    transferred:
      amount: "0"
      denom: uakt
...

Now let's create the lease (accept the bid offered by the Akash Provider):

akash tx market lease create \
  --chain-id $AKASH_CHAIN_ID \
  --node $AKASH_NODE \
  --owner $AKASH_ACCOUNT_ADDRESS \
  --dseq $AKASH_DSEQ \
  --gseq $AKASH_GSEQ \
  --oseq $AKASH_OSEQ \
  --provider $AKASH_PROVIDER \
  --from default \
  --gas-prices="0.025uakt" --gas="****" --gas-adjustment=1.15

Now we can see "lease won" on the provider's side:

Jun 30 00:03:42 akash1 start-provider.sh[1029866]: D[2021-06-30|00:03:42.479] ignoring group module=bidengine-order order=akash15yd3qszmqausvzpj7n0y0e4pft2cu9rt5gccda/1346631/1/1 group=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: I[2021-06-30|00:03:42.479] lease won module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1 lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] shutting down module=bidengine-order order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] lease won module=provider-manifest lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: I[2021-06-30|00:03:42.480] new lease module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: D[2021-06-30|00:03:42.480] emit received events skipped module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 data=<nil> leases=1 manifests=0
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: I[2021-06-30|00:03:42.520] data received module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 version=77fd690d5e5ec8c320a902da09a59b48dc9abd0259d84f9789fee371941320e7
Jun 30 00:03:42 akash1 start-provider.sh[1029866]: D[2021-06-30|00:03:42.520] emit received events skipped module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 data="deployment:<deployment_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829> state:active version:\"w\375i\r^^\310\303\251\002\332\t\245\233H\334\232\275\002Y\330O\227\211\376\343q\224\023\347\" created_at:1585832> groups:<group_id:<owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" dseq:1585829 gseq:1> state:open group_spec:<name:\"akash\" requirements:<signed_by:<>> resources:<resources:<cpu:<units:<val:\"100\">> memory:<quantity:<val:\"134217728\">> storage:<quantity:<val:\"134217728\">> endpoints:<>> count:1 price:<denom:\"uakt\" amount:\"2000\">>> created_at:1585832> escrow_account:<id:<scope:\"deployment\" xid:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829\"> owner:\"akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h\" state:open balance:<denom:\"uakt\" amount:\"5000000\"> transferred:<denom:\"uakt\" amount:\"0\"> settled_at:1585859>" leases=1 manifests=0

Send the manifest to finally deploy the echoserver service on your Akash Provider!

akash provider send-manifest echoserver.yml \
  --node $AKASH_NODE \
  --dseq $AKASH_DSEQ \
  --provider $AKASH_PROVIDER \
  --from default

The provider has received the deployment's manifest ("manifest received"), and the kube-builder module has "created service" under the c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76 namespace:

Jun 30 00:06:16 akash1 start-provider.sh[1029866]: I[2021-06-30|00:06:16.122] syncing sequence cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:21 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:21.413] inventory fetched module=provider-cluster cmp=service cmp=inventory-service nodes=1
Jun 30 00:06:21 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:21.413] node resources module=provider-cluster cmp=service cmp=inventory-service node-id=akash-single.domainXYZ.com available-cpu="units:<val:\"7050\"> attributes:<key:\"arch\" value:\"amd64\">" available-memory="quantity:<val:\"32896909312\">" available-storage="quantity:<val:\"47409223602\">"
Jun 30 00:06:26 akash1 start-provider.sh[1029866]: I[2021-06-30|00:06:26.122] syncing sequence cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:35 akash1 start-provider.sh[1029866]: I[2021-06-30|00:06:35.852] manifest received module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:06:35 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:35.852] requests valid module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 num-requests=1
Jun 30 00:06:35 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:35.853] publishing manifest received module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 num-leases=1
Jun 30 00:06:35 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:35.853] publishing manifest received for lease module=manifest-manager deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829 lease_id=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:06:35 akash1 start-provider.sh[1029866]: I[2021-06-30|00:06:35.853] manifest received module=provider-cluster cmp=service
Jun 30 00:06:36 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:36.023] provider/cluster/kube/builder: created service module=kube-builder service="&Service{ObjectMeta:{echoserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[akash.network:true akash.network/manifest-service:echoserver akash.network/namespace:c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76] map[] [] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:0-80,Protocol:TCP,Port:80,TargetPort:{0 8080},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{akash.network: true,akash.network/manifest-service: echoserver,akash.network/namespace: c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76,},ClusterIP:,Type:ClusterIP,ExternalIPs:[],SessionAffinity:,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}"
Jun 30 00:06:36 akash1 start-provider.sh[1029866]: I[2021-06-30|00:06:36.121] syncing sequence cmp=client/broadcaster local=32 remote=32
Jun 30 00:06:36 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:36.157] provider/cluster/kube/builder: created rules module=kube-builder rules="[{Host:623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com IngressRuleValue:{HTTP:&HTTPIngressRuleValue{Paths:[]HTTPIngressPath{HTTPIngressPath{Path:/,Backend:IngressBackend{Resource:nil,Service:&IngressServiceBackend{Name:echoserver,Port:ServiceBackendPort{Name:,Number:80,},},},PathType:*Prefix,},},}}}]"
Jun 30 00:06:36 akash1 start-provider.sh[1029866]: D[2021-06-30|00:06:36.222] deploy complete module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash

Let's check the lease status from the client side:

akash provider lease-status \
  --node $AKASH_NODE \
  --dseq $AKASH_DSEQ \
  --provider $AKASH_PROVIDER \
  --from default

{
  "services": {
    "echoserver": {
      "name": "echoserver",
      "available": 1,
      "total": 1,
      "uris": [
        "623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com"
      ],
      "observed_generation": 1,
      "replicas": 1,
      "updated_replicas": 1,
      "ready_replicas": 1,
      "available_replicas": 1
    }
  },
  "forwarded_ports": {}
}

We got it! Let's query it:

$ curl 623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com


Hostname: echoserver-5c6f84887-6kh9p

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=10.233.85.136
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_uri=http://623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com:8080/

Request Headers:
    accept=*/*
    host=623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com
    user-agent=curl/7.68.0
    x-forwarded-for=CLIENT_IP_REDACTED
    x-forwarded-host=623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com
    x-forwarded-port=80
    x-forwarded-proto=http
    x-real-ip=CLIENT_IP_REDACTED
    x-request-id=8cdbcd7d0c4f42440669f7396e206cae
    x-scheme=http

Request Body:
    -no body in request-

Our deployment on our own Akash provider is working as expected!

Let's see what our Akash Provider deployment actually looks like from the Kubernetes point of view:

$ kubectl get all -A -l akash.network=true
NAMESPACE                                       NAME                             READY   STATUS    RESTARTS   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   pod/echoserver-5c6f84887-6kh9p   1/1     Running   0          2m37s

NAMESPACE                                       NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   service/echoserver   ClusterIP   10.233.47.15   <none>        80/TCP    2m37s

NAMESPACE                                       NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   deployment.apps/echoserver   1/1     1            1           2m38s

NAMESPACE                                       NAME                                    DESIRED   CURRENT   READY   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   replicaset.apps/echoserver-5c6f84887    1         1         1       2m37s

$ kubectl get ing -A
NAMESPACE                                       NAME         CLASS    HOSTS                                           ADDRESS     PORTS   AGE
c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   echoserver   <none>   623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com   localhost   80      8m47s

$ kubectl -n c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76 describe ing echoserver
Name:             echoserver
Namespace:        c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76
Address:          localhost
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                                           Path  Backends
  ----                                           ----  --------
  623n1u4k2hbiv6f1kuiscparqk.ingress.nixaid.com
                                                 /   echoserver:80 (10.233.85.137:8080)
Annotations:  <none>
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    8m9s (x2 over 9m5s)  nginx-ingress-controller  Scheduled for sync


$ crictl pods
POD ID          CREATED         STATE   NAME                         NAMESPACE                                       ATTEMPT   RUNTIME
4c22dba05a2c0   5 minutes ago   Ready   echoserver-5c6f84887-6kh9p   c9mdnf8o961odir96rdcflt9id95rq2a2qesidpjuqd76   0         runsc
...

The client can also read their deployment's logs:

akash \
  --node "$AKASH_NODE" \
  provider lease-logs \
  --dseq "$AKASH_DSEQ" \
  --gseq "$AKASH_GSEQ" \
  --oseq "$AKASH_OSEQ" \
  --provider "$AKASH_PROVIDER" \
  --from default \
  --follow

[echoserver-5c6f84887-6kh9p] Generating self-signed cert
[echoserver-5c6f84887-6kh9p] Generating a 2048 bit RSA private key
[echoserver-5c6f84887-6kh9p] ..............................+++
[echoserver-5c6f84887-6kh9p] ...............................................................................................................................................+++
[echoserver-5c6f84887-6kh9p] writing new private key to '/certs/privateKey.key'
[echoserver-5c6f84887-6kh9p] -----
[echoserver-5c6f84887-6kh9p] Starting nginx
[echoserver-5c6f84887-6kh9p] 10.233.85.136 - - [30/Jun/2021:00:08:00 +0000] "GET / HTTP/1.1" 200 744 "-" "curl/7.68.0"
[echoserver-5c6f84887-6kh9p] 10.233.85.136 - - [30/Jun/2021:00:27:10 +0000] "GET / HTTP/1.1" 200 744 "-" "curl/7.68.0"

When you are done testing, it's time to close the deployment:

akash tx deployment close \
  --node $AKASH_NODE \
  --chain-id $AKASH_CHAIN_ID \
  --dseq $AKASH_DSEQ \
  --owner $AKASH_ACCOUNT_ADDRESS \
  --from default \
  --gas-prices="0.025uakt" --gas="****" --gas-adjustment=1.15

The provider sees it as expected: "deployment closed", "teardown request", and so on:

Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.828] deployment closed module=provider-manifest deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.828] manager done module=provider-manifest deployment=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.829] teardown request module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.830] shutting down module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-monitor
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.830] shutdown complete module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-monitor
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.837] teardown complete module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.837] waiting on dm.wg module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] waiting on withdrawal module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] shutting down module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-withdrawal
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] shutdown complete module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash cmp=deployment-withdrawal
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] shutdown complete module=provider-cluster cmp=service cmp=deployment-manager lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0 manifest-group=akash
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] manager done module=provider-cluster cmp=service lease=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1/akash1nxq8gmsw2vlz3m68qvyvcf3kh6q269ajvqw6y0
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: D[2021-06-30|00:28:44.838] unreserving capacity module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] attempting to removing reservation module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] removing reservation module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:44 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:44.838] unreserve capacity complete module=provider-cluster cmp=service cmp=inventory-service order=akash1h24fljt7p0nh82cq0za0uhsct3sfwsfu9w3c9h/1585829/1/1
Jun 30 00:28:46 akash1 start-provider.sh[1029866]: I[2021-06-30|00:28:46.122] syncing sequence cmp=client/broadcaster local=36 remote=36

8. Tearing down the cluster

Just in case you ever want to tear down the Kubernetes cluster:

systemctl disable akash-provider
systemctl stop akash-provider

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets

### kubectl delete node <node name>
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t nat -X && iptables -t mangle -F && iptables -t mangle -X && iptables -t raw -F && iptables -t raw -X && iptables -X
ip6tables -F && ip6tables -t nat -F && ip6tables -t nat -X && ip6tables -t mangle -F && ip6tables -t mangle -X && ip6tables -t raw -F && ip6tables -t raw -X && ip6tables -X
ipvsadm -C
conntrack -F

## if Weave Net was used:
weave reset (if you used it) ((or "ip link delete weave"))

## if Calico was used:
ip link
ip link delete cali*
ip link delete vxlan.calico

modprobe -r ipip

Some troubleshooting scenarios:

## if you are getting the following during "crictl rmp -a" (deleting all pods using crictl):
removing the pod sandbox "f89d5f4987fbf80790e82eab1f5634480af814afdc82db8bca92dc5ed4b57120": rpc error: code = Unknown desc = sandbox network namespace "/var/run/netns/cni-65fbbdd0-8af6-8c2a-0698-6ef8155ca441" is not fully closed

ip netns ls
ip -all netns delete

ps -ef | grep -E 'runc|runsc|shim'
ip r
pidof runsc-sandbox | xargs -r kill
pidof /usr/bin/containerd-shim-runc-v2 | xargs -r kill -9
find /run/containerd/io.containerd.runtime.v2.task/ -ls

rm -rf /etc/cni/net.d

systemctl restart containerd
### systemctl restart docker

9. Scaling your Akash provider horizontally

If you want more room for new deployments, you can scale your Akash Provider out.

To do so, get a new bare-metal or *** host and repeat all the steps up to (but not including) "3.2 Bootstrap the Kubernetes cluster with kubeadm". Then run the following commands on the new master (control-plane) or worker node:

apt update
apt -y dist-upgrade
apt autoremove

apt -y install ethtool socat conntrack

mkdir -p /etc/kubernetes/manifests

## If you are using NodeLocal DNSCache
sed -i -s 's/10.233.0.10/169.254.25.10/g' /var/lib/kubelet/config.yaml

Generate a token on your existing master node (control plane). You will need it to join the new master/worker nodes.

If you are adding a new master node, make sure to run the upload-certs phase below:

This avoids manually copying /etc/kubernetes/pki from your existing master node to the new one.

kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml

Generate the token used to join the new master or worker node to the kubernetes cluster:

kubeadm token create --config kubeadm-config.yaml --print-join-command

To join any number of master (control-plane) nodes, run:

kubeadm join akash-master-lb.domainXYZ.com:6443 --token REDACTED.REDACTED --discovery-token-ca-cert-hash sha256:REDACTED --control-plane --certificate-key REDACTED

To join any number of worker nodes, run:

kubeadm join akash-master-lb.domainXYZ.com:6443 --token REDACTED.REDACTED --discovery-token-ca-cert-hash sha256:REDACTED

9.1 Scale the ingress

Now that you have more than one worker node, you can scale ingress-nginx-controller for better service availability. To do so, you only need to run the following commands.

Label all the worker nodes with akash.network/role=ingress:

kubectl label nodes akash-worker-<##>.nixaid.com akash.network/role=ingress

Scale ingress-nginx-controller to the number of worker nodes you have:

kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas=<number of worker nodes>
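To confirm the replicas actually spread across the workers, a quick check:

kubectl -n ingress-nginx get pods -o wide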

Now register new wildcard DNS A records for *.ingress.nixaid.com, pointing at the IPs of the worker nodes where ingress-nginx-controller is running.

Example:

$ dig +noall +answer anything.ingress.nixaid.com
anything.ingress.nixaid.com. 1707 IN A 167.86.73.47
anything.ingress.nixaid.com. 1707 IN A 185.211.5.95

10. Known issues and workarounds

Akash Provider is leaking earnings: https://github.com/ovrclk/akash/issues/1363

Provider does not pull a new image when the tag stays the same: https://github.com/ovrclk/akash/issues/1354

Container hangs (zombie process) when the deployment manifest has an error: https://github.com/ovrclk/akash/issues/1353

[netpol] akash-deployment-restrictions blocks pods from reaching kube-dns over 53/udp and 53/tcp on the pod subnet: https://github.com/ovrclk/akash/issues/1339

11. References

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver
https://storage.googleapis.com/kubernetes-release/release/stable.txt
https://gvisor.dev/docs/user_guide/containerd/quick_start/
https://github.com/containernetworking/cni#how-do-i-use-cni
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
https://kubernetes.io/docs/concepts/overview/components/
https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
https://docs.akash.network/operator/provider
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down

Akash links

Chinese links:

Twitter: https://twitter.com/AkashCommunity
QQ group: http://t.cn/A6IayTx5 (group number: 754793800)
Weibo: https://weibo.com/akashchina
Bihu: https://bihu.com/people/1117023356
Biba: https://akt.bihu.com/
Discord: https://discord.gg/RZNyGWyg
Telegram: https://t.me/akashchinatalk
Yuque: https://www.yuque.com/akashnetwork/index
Akash website: https://akash.network/?lang=zh-hans

English links:

Twitter: https://twitter.com/akashnet_
Facebook: https://www.facebook.com/akashnw/
LinkedIn: https://www.linkedin.com/company/akash-network/
Telegram: https://t.me/AkashNW
Github: https://github.com/ovrclk



----

Compiled/translated by: Akash
