Offline Installation of KubeSphere v3.3.0

Environment Preparation

Environment details:

  • Operating system: Rocky Linux release 9.0 (Blue Onyx)
  • Private image registry: Harbor, served over plain HTTP at http://192.168.1.3
  • Sample host IP: 192.168.1.2
  • Sample host specs: 8 CPU cores, 16 GB memory, 100 GB disk

Image Preparation

Download the following images and import them into the local private Harbor registry. Create the corresponding project for each image namespace first; for example, kubesphere/openpitrix-jobs:v3.2.1 requires a kubesphere project with its access level set to public. A sync sketch follows the list.

kubesphere/openpitrix-jobs:v3.2.1
kubesphere/kube-apiserver:v1.23.9
kubesphere/kube-controller-manager:v1.23.9
kubesphere/kube-proxy:v1.23.9
kubesphere/kube-scheduler:v1.23.9
openebs/provisioner-localpv:3.3.0
openebs/linux-utils:3.3.0
kubesphere/ks-installer:v3.3.0
calico/kube-controllers:v3.23.2
calico/cni:v3.23.2
calico/pod2daemon-flexvol:v3.23.2
calico/node:v3.23.2
kubesphere/ks-controller-manager:v3.3.0
kubesphere/ks-apiserver:v3.3.0
kubesphere/ks-console:v3.3.0
kubesphere/ks-jenkins:v3.3.0-2.319.1
kubesphere/fluent-bit:v1.8.11
kubesphere/s2ioperator:v3.2.1
argoproj/argocd:v2.3.3
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
prom/prometheus:v2.34.0
kubesphere/fluentbit-operator:v0.13.0
argoproj/argocd-applicationset:v0.4.1
kubesphere/kube-events-ruler:v0.4.0
kubesphere/kube-events-operator:v0.4.0
kubesphere/kube-events-exporter:v0.4.0
kubesphere/elasticsearch-oss:6.8.22
kubesphere/kube-state-metrics:v2.3.0
prom/node-exporter:v1.3.1
library/redis:6.2.6-alpine
dexidp/dex:v2.30.2
library/alpine:3.14
kubesphere/kubectl:v1.22.0
kubesphere/notification-manager:v1.4.0
jaegertracing/jaeger-operator:1.27
coredns/coredns:1.8.6
jaegertracing/jaeger-collector:1.27
jaegertracing/jaeger-query:1.27
jaegertracing/jaeger-agent:1.27
kubesphere/notification-tenant-sidecar:v3.2.0
kubesphere/notification-manager-operator:v1.4.0
kubesphere/pause:3.6
prom/alertmanager:v0.23.0
istio/pilot:1.11.1
kubesphere/kube-auditing-operator:v0.2.0
kubesphere/kube-auditing-webhook:v0.2.0
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/kiali-operator:v1.38.1
kubesphere/kiali:v1.38
kubesphere/metrics-server:v0.4.2
jimmidyson/configmap-reload:v0.5.0
csiplugin/snapshot-controller:v4.0.0
kubesphere/kube-rbac-proxy:v0.8.0
library/docker:19.03
kubesphere/log-sidecar-injector:1.1
osixia/openldap:1.3.0
kubesphere/k8s-dns-node-cache:1.15.12
minio/mc:RELEASE.2019-08-07T23-14-43Z
minio/minio:RELEASE.2019-08-07T01-59-21Z
mirrorgooglecontainers/defaultbackend-amd64:1.4
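A minimal sync sketch, assuming an internet-connected host with Docker and the list above saved to a file (images.txt is a hypothetical name); the Harbor projects must already exist. Because Harbor is served over plain HTTP, the Docker daemon on the sync host must also list harbor.wl.io under insecure-registries in /etc/docker/daemon.json:

$ REGISTRY=harbor.wl.io    # resolves to 192.168.1.3 via the hosts entry configured later
$ docker login ${REGISTRY}
$ while read -r img; do
    docker pull "${img}"                       # pull from the public registry
    docker tag "${img}" "${REGISTRY}/${img}"   # retag under the private Harbor
    docker push "${REGISTRY}/${img}"           # push into the matching project
  done < images.txt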

Artifact Preparation

  1. Download the required offline installation artifacts

kubekey-v2.3.0-rc.1-linux-amd64.tar.gz

The file and directory layout is as follows:

/work
├── kubekey
│   ├── cni
│   │   └── v0.9.1
│   │       └── amd64
│   │           └── cni-plugins-linux-amd64-v0.9.1.tgz
│   ├── crictl
│   │   └── v1.24.0
│   │       └── amd64
│   │           └── crictl-v1.24.0-linux-amd64.tar.gz
│   ├── docker
│   │   └── 20.10.8
│   │       └── amd64
│   │           └── docker-20.10.8.tgz
│   ├── etcd
│   │   └── v3.4.13
│   │       └── amd64
│   │           └── etcd-v3.4.13-linux-amd64.tar.gz
│   ├── helm
│   │   └── v3.9.0
│   │       └── amd64
│   │           └── helm-v3.9.0-linux-amd64.tar.gz
│   └── kube
│       └── v1.23.9
│           └── amd64
│               ├── kubeadm
│               ├── kubectl
│               └── kubelet
└── kubekey-v2.3.0-rc.1-linux-amd64.tar.gz
  2. Extract the required artifacts
$ cd /work/kubekey/helm/v3.9.0/amd64 && tar -zxf helm-v3.9.0-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64* && cd -
$ cd /work && tar zxvf kubekey-v2.3.0-rc.1-linux-amd64.tar.gz
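Optionally confirm the extracted kk binary runs (version is a standard KubeKey subcommand):

$ cd /work && ./kk version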

Configure the Local Package Repository

  1. Attach the OS installation DVD

  2. Mount it locally

$ mount -o loop /dev/cdrom /media
  3. Configure the local repo file (quote the heredoc delimiter so the shell does not expand $releasever; dnf substitutes it when reading the repo file)
$ rm -rf /etc/yum.repos.d/*
$ tee /etc/yum.repos.d/media.repo <<'EOF'
[media-baseos]
name=Rocky Linux $releasever - Media - BaseOS
baseurl=file:///media/BaseOS
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9

[media-appstream]
name=Rocky Linux $releasever - Media - AppStream
baseurl=file:///media/AppStream
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
EOF
  4. Build the metadata cache
$ dnf makecache
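dnf repolist should now show the two media repositories configured above:

$ dnf repolist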

Installation and Deployment

  1. Install dependencies
$ dnf install conntrack socat chrony ipvsadm -y
  2. Generate the initial configuration file (written to config-sample.yaml in the current directory)
$ ./kk create config --with-kubesphere v3.3.0
  3. Adjust the configuration

The sample values below have been sanitized and are for illustration only. The changes are:

  • Configure the hosts and role groups (hosts, roleGroups)
  • Configure the private registry (privateRegistry, insecureRegistries)
  • Comment out controlPlaneEndpoint
  • Enable the following components (a ClusterConfiguration sketch follows the YAML below):
    • alerting
    • auditing
    • devops
    • events
    • logging
    • metrics_server
    • openpitrix
    • servicemesh
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
  #controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    # domain: lb.kubesphere.local
    # address: ""
    # port: 6443
  kubernetes:
    version: v1.23.9
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "harbor.wl.io"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: ["harbor.wl.io"]
  addons: []
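The file generated with --with-kubesphere also contains a second YAML document, a ClusterConfiguration, omitted above for brevity. Enabling a component means setting its enabled flag to true there; a partial sketch following the KubeSphere v3.3.0 field layout (note that for openpitrix the flag sits under store):

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  devops:
    enabled: true
  events:
    enabled: true
  logging:
    enabled: true
  metrics_server:
    enabled: true
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true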
  4. Configure Harbor host resolution

Because Harbor uses a made-up domain name, custom name resolution must be configured:

$ echo "192.168.1.3 harbor.wl.io" >> /etc/hosts
  5. Create the DNS configuration file
$ touch /etc/resolv.conf

Otherwise, sandbox creation fails during cluster initialization with an error like:

Sep 15 08:48:39 node1 kubelet[35254]: E0915 08:48:39.708357   35254 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-node1_kube-system(868ca46a733b98e2a3523d80b3c75243)\" with CreatePodSandboxError: \"Failed to generate sandbox config for pod \\\"kube-scheduler-node1_kube-system(868ca46a733b98e2a3523d80b3c75243)\\\": open /etc/resolv.conf: no such file or directory\"" pod="kube-system/kube-scheduler-node1" podUID=868ca46a733b98e2a3523d80b3c75243
  6. Initialize the cluster
$ ./kk create cluster --with-kubesphere v3.3.0 -f config-sample.yaml -y
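Installation takes a while. Progress can be followed from the ks-installer logs; this is the label selector KubeSphere's documentation uses to find the installer pod:

$ kubectl logs -n kubesphere-system \
    $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f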
  7. Install shell completion
$ dnf install -y bash-completion
$ source /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> ~/.bashrc
  8. Configure internal DNS (optional)

Set the DNS server:

$ nmcli connection modify ens192 ipv4.dns "10.10.1.254"
$ nmcli connection up ens192
  • ens192: the network interface name
  • 10.10.1.254: the DNS server address
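To confirm the setting took effect (nmcli -g prints just the requested field):

$ nmcli -g ipv4.dns connection show ens192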

  9. Configure CoreDNS resolution

Add a custom host entry:

$ kubectl edit configmap coredns -n kube-system

Before:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-15T00:48:59Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "232"
  uid: 4a4a69f2-b151-4323-b5b2-ae9d2867e58f

After:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
          192.168.1.3 harbor.wl.io
          fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-09-15T00:48:59Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "232"
  uid: 4a4a69f2-b151-4323-b5b2-ae9d2867e58f

Reload:

$ kubectl rollout restart deploy coredns -n kube-system
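To confirm the hosts block is picked up inside the cluster, run a throwaway pod; the image reference assumes the mirrored alpine image in the private registry, whose busybox provides nslookup:

$ kubectl run dns-test --rm -it --restart=Never \
    --image=harbor.wl.io/library/alpine:3.14 -- nslookup harbor.wl.io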
  10. Modify nodelocaldns
$ kubectl edit cm -n kube-system nodelocaldns

Before:

apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":"cluster.local:53 {\n    errors\n    cache {\n        success 9984 30\n        denial 9984 5\n    }\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n    health 169.254.25.10:9254\n}\nin-addr.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\nip6.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\n.:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . /etc/resolv.conf\n    prometheus :9253\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"nodelocaldns","namespace":"kube-system"}}
  creationTimestamp: "2022-09-15T00:49:03Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: nodelocaldns
  namespace: kube-system
  resourceVersion: "368"
  uid: adb09cd0-b5c1-4939-98bf-b48bfb5418ce

After:

apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":"cluster.local:53 {\n    errors\n    cache {\n        success 9984 30\n        denial 9984 5\n    }\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n    health 169.254.25.10:9254\n}\nin-addr.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\nip6.arpa:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . 10.233.0.3 {\n        force_tcp\n    }\n    prometheus :9253\n}\n.:53 {\n    errors\n    cache 30\n    reload\n    loop\n    bind 169.254.25.10\n    forward . /etc/resolv.conf\n    prometheus :9253\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"nodelocaldns","namespace":"kube-system"}}
  creationTimestamp: "2022-09-15T00:49:03Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: nodelocaldns
  namespace: kube-system
  resourceVersion: "6905"
  uid: adb09cd0-b5c1-4939-98bf-b48bfb5418ce

That is, only the .:53 {} block changes: it now forwards to the cluster DNS at 10.233.0.3 instead of /etc/resolv.conf, so queries through the node-local cache also see the custom hosts entry added to CoreDNS.

Reload:

$ kubectl rollout restart ds nodelocaldns -n kube-system
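To verify from a node, query the node-local cache address directly (169.254.25.10 is the bind address in the config above; nslookup requires bind-utils on Rocky Linux):

$ nslookup harbor.wl.io 169.254.25.10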