Quickly Deploy an Elasticsearch Cluster + Kibana with ECK
Deploying ECK [2.12]
Installation Notes
Elastic Cloud on Kubernetes (ECK) is an Elasticsearch operator, but it is much more than that. ECK is built on the Kubernetes Operator pattern and must be installed in your Kubernetes cluster.
With Elastic Cloud on Kubernetes (ECK) you extend Kubernetes' basic orchestration capabilities to easily deploy, secure, and upgrade Elasticsearch clusters, and more.
Built on the Operator pattern, Elastic Cloud on Kubernetes is Elastic's recommended way to deploy Elasticsearch, Kibana, and APM Server on Kubernetes. ECK has a dedicated Helm chart, which can be found in the ECK repository (see the documentation).
ECK covers a large amount of day-to-day Elasticsearch operations work:
- Manage and monitor multiple clusters
- Upgrade easily to new versions
- Scale cluster capacity up or down
- Change cluster configuration
- Dynamically scale local storage (including Elastic Local Volume, a local storage driver)
- Take backups
Supported versions
- Kubernetes 1.25-1.29
- OpenShift 4.11-4.14
- Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS)
- Helm: 3.2.0+
- Elasticsearch, Kibana, APM Server: 6.8+, 7.1+, 8+
- Enterprise Search: 7.7+, 8+
- Beats: 7.0+, 8+
- Elastic Agent: 7.10+ (standalone), 7.14+ (Fleet), 8+
- Elastic Maps Server: 7.11+, 8+
- Logstash: 8.7+
Starting with ECK 1.3.0, a Helm chart is available to install ECK. It is published in the Elastic Helm repository, which can be added to your list of Helm repositories by running:
helm repo add elastic https://helm.elastic.co
helm repo update
# List all available versions of the chart
helm search repo elastic/eck-operator --versions
The minimum supported Helm version is 3.2.0.
Restricted installation
The ECK operator runs in the elastic-system namespace by default. It is recommended that you choose a dedicated namespace for your workloads, rather than the elastic-system or default namespace.
Installing the CRDs
This mode avoids installing any cluster-scoped resources and restricts the operator to managing a predefined set of namespaces.
Because CRDs are global resources, they still have to be installed by an administrator. This can be done with:
# Create the namespace
kubectl create ns apm
# Install a specific chart version
helm install --create-namespace -n apm elastic-operator-crds elastic/eck-operator-crds --version 2.12.1
The operator itself can then be installed by any user who has full access to the set of namespaces they wish to manage.
Installing the operator
The following example installs the operator into the apm namespace and configures it to manage only that namespace:
# Download the specified chart version
helm pull elastic/eck-operator --version 2.12.1
tar zxvf eck-operator-2.12.1.tgz
helm upgrade --install elastic-operator elastic/eck-operator \
-n apm --create-namespace \
--values="eck-operator/values.yaml" \
--set=installCRDs=false \
--set=managedNamespaces='{apm,}' \
--set=createClusterScopedResources=false \
--set=webhook.enabled=false \
--set=config.validateStorageClass=false
The eck-operator chart contains several pre-defined profiles to help you install the operator in different configurations. These profiles can be found at the root of the chart directory, prefixed with profile-. For example, the restricted configuration shown in the preceding extract is defined in the profile-restricted.yaml file.
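As a minimal sketch (assuming the chart was extracted locally with helm pull as shown above), the same restricted setup can be expressed by passing the profile file directly instead of listing the individual --set flags:
helm upgrade --install elastic-operator elastic/eck-operator \
  -n apm --create-namespace \
  --values="eck-operator/profile-restricted.yaml" \
  --set=installCRDs=false \
  --set=managedNamespaces='{apm}'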
Viewing the available configuration options
You can view all configurable values by running:
helm show values elastic/eck-operator -n apm
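To use those defaults as the starting point for your own values file (the output file name here is arbitrary):
helm show values elastic/eck-operator --version 2.12.1 > eck-operator-values.yaml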
Verifying the service
Verify that the installation succeeded:
[root@node1 ~]# kubectl get pods -n apm
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 5m29s
Monitor the operator logs:
kubectl -n apm logs -f statefulset.apps/elastic-operator
At this point a number of CRD objects have been installed; the controllers for these CRDs run inside the elastic-operator-0 Pod shown above:
$ kubectl get crd | grep elastic
agents.agent.k8s.elastic.co 2024-05-08T03:26:15Z
apmservers.apm.k8s.elastic.co 2024-05-08T03:26:15Z
beats.beat.k8s.elastic.co 2024-05-08T03:26:15Z
elasticmapsservers.maps.k8s.elastic.co 2024-05-08T03:26:15Z
elasticsearchautoscalers.autoscaling.k8s.elastic.co 2024-05-08T03:26:15Z
elasticsearches.elasticsearch.k8s.elastic.co 2024-05-08T03:26:15Z
enterprisesearches.enterprisesearch.k8s.elastic.co 2024-05-08T03:26:15Z
kibanas.kibana.k8s.elastic.co 2024-05-08T03:26:15Z
logstashes.logstash.k8s.elastic.co 2024-05-08T03:26:15Z
stackconfigpolicies.stackconfigpolicy.k8s.elastic.co 2024-05-08T03:26:15Z
We can then use these CRD objects to create a very simple, single Elasticsearch cluster.
Creating ES Storage
Creating a StorageClass
Creating a Huawei Cloud SFS Turbo StorageClass
Create the file sfsturbo-es-sc.yaml:
---
apiVersion: storage.k8s.io/v1
allowVolumeExpansion: true
kind: StorageClass
metadata:
name: sfsturbo-es-sc
mountOptions:
- vers=3
- nolock
- timeo=600
- hard
parameters:
csi.storage.k8s.io/csi-driver-name: sfsturbo.csi.everest.io
csi.storage.k8s.io/fstype: nfs
everest.io/archive-on-delete: "true"
  everest.io/share-access-to: 4f9789b0-xxxx-xxxx-xxxx-cxxxx75dxxxx # In subpath mode, set this to the ID of the VPC that the SFS Turbo resource belongs to.
everest.io/share-export-location: 3967e677-xxxx-xxxx-xxxx-xxxxxxx8xxxx.sfsturbo.internal:/APM/Elasticsearch
everest.io/share-source: sfs-turbo
  everest.io/volume-as: subpath # Must be set to "subpath" to use subpath mode.
  everest.io/volume-id: 3967e677-xxxx-xxxx-xxxx-xxxx3xxxxxxx # Volume ID of the SFS Turbo resource
provisioner: everest-csi-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Retain
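Apply the StorageClass created above and confirm it exists:
kubectl apply -f sfsturbo-es-sc.yaml
kubectl get sc sfsturbo-es-sc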
Creating an NFS StorageClass
1. Install and configure NFS
# Install the NFS client on all nodes
# The Kubernetes nodes in this article run RockyLinux 9.2
yum install -y nfs-utils
2. Create RBAC for the NFS provisioner
Create the file nfs-rbac.yaml:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["get"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["nfs-provisioner"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
name: nfs-provisioner
# replace with namespace where provisioner is deployed
namespace: default
roleRef:
kind: ClusterRole
name: nfs-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-provisioner
subjects:
- kind: ServiceAccount
name: nfs-provisioner
# replace with namespace where provisioner is deployed
namespace: default
roleRef:
kind: Role
name: leader-locking-nfs-provisioner
apiGroup: rbac.authorization.k8s.io
kubectl apply -f nfs-rbac.yaml
3. Create the NFS provisioner
Create the file nfs-provisioner.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner
labels:
app: nfs-provisioner
spec:
ports:
- name: nfs
port: 2049
- name: nfs-udp
port: 2049
protocol: UDP
- name: nlockmgr
port: 32803
- name: nlockmgr-udp
port: 32803
protocol: UDP
- name: mountd
port: 20048
- name: mountd-udp
port: 20048
protocol: UDP
- name: rquotad
port: 875
- name: rquotad-udp
port: 875
protocol: UDP
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
- name: statd
port: 662
- name: statd-udp
port: 662
protocol: UDP
selector:
app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: nfs-provisioner
spec:
selector:
matchLabels:
app: nfs-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-provisioner
spec:
serviceAccount: nfs-provisioner
containers:
- name: nfs-provisioner
# image: registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8
image: k8s.dockerproxy.com/sig-storage/nfs-provisioner:v4.0.8
ports:
- name: nfs
containerPort: 2049
- name: nfs-udp
containerPort: 2049
protocol: UDP
- name: nlockmgr
containerPort: 32803
- name: nlockmgr-udp
containerPort: 32803
protocol: UDP
- name: mountd
containerPort: 20048
- name: mountd-udp
containerPort: 20048
protocol: UDP
- name: rquotad
containerPort: 875
- name: rquotad-udp
containerPort: 875
protocol: UDP
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
- name: statd
containerPort: 662
- name: statd-udp
containerPort: 662
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=tiga.cc/nfs"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume
mountPath: /export
volumes:
- name: export-volume
hostPath:
path: /data/nfs
Create the nfs-provisioner:
kubectl apply -f nfs-provisioner.yaml
Check the nfs-provisioner status:
kubectl get pods --selector='app=nfs-provisioner'
Output:
NAME READY STATUS RESTARTS AGE
nfs-provisioner-7d997c56c5-jhl2x 1/1 Running 0 15h
4. Create the StorageClass
Create the file nfs-class.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: tiga-nfs
provisioner: tiga.cc/nfs
mountOptions:
- vers=4.1
Create the NFS storage class:
kubectl apply -f nfs-class.yaml
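As a quick sanity check that dynamic provisioning works, you can create a small test PVC against the tiga-nfs class and confirm it becomes Bound (the claim name here is arbitrary):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: tiga-nfs
EOF
# The claim should report STATUS Bound once the provisioner has created a volume under its /export directory
kubectl get pvc nfs-test-claim
# Remove the test claim afterwards
kubectl delete pvc nfs-test-claim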
Creating the PVC bindings
Create the PVC bindings manually in advance, to prevent the storage volumes from being changed during deployment updates.
Manually create the following PVCs:
- elasticsearch-data-es-quickstart-es-default-0
- elasticsearch-data-es-quickstart-es-default-1
- elasticsearch-data-es-quickstart-es-default-2
The PVC manifests are as follows:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: elasticsearch-data-es-quickstart-es-default-0
namespace: apm
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
storageClassName: sfsturbo-es-sc
volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: elasticsearch-data-es-quickstart-es-default-1
namespace: apm
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
storageClassName: sfsturbo-es-sc
volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: elasticsearch-data-es-quickstart-es-default-2
namespace: apm
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50Gi
storageClassName: sfsturbo-es-sc
volumeMode: Filesystem
Create the PVCs manually:
kubectl apply -f pvc.yaml
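If the storage class and PVC manifests above were applied as shown, the three claims should appear as Bound against the sfsturbo-es-sc class:
kubectl get pvc -n apm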
Deploying the Elasticsearch Cluster [7.17.3]
If your Kubernetes cluster does not have any node with at least 2 GiB of free memory, the Pods will be stuck in the Pending state. See Manage compute resources for more information on resource requirements and how to configure them.
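A quick way to get a rough view of the allocatable memory on your nodes (plain kubectl, no metrics-server required):
kubectl get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory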
API reference: https://www.elastic.co/guide/en/cloud-on-k8s/1.0/k8s-elasticsearch-k8s-elastic-co-v1.html
Use the CRD objects to create the Elasticsearch cluster.
The manifest declares an Elasticsearch resource object at version 7.17.3.
The following example disables TLS/SSL on the HTTP layer, so that HTTP is available on an unencrypted port:
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: es-quickstart
namespace: "apm"
spec:
version: 7.17.3
updateStrategy:
changeBudget:
maxSurge: 1
maxUnavailable: 0
# https://www.elastic.co/guide/en/cloud-on-k8s/2.12/k8s-elasticsearch-specification.html
nodeSets:
- name: default
count: 3
config:
node.roles: ["master", "data", "ingest"]
# On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following:
#node.master: true
#node.data: true
#node.ingest: true
node.store.allow_mmap: false
node.attr.attr_name: attr_value
      # Enable CORS support (defaults to false)
http.cors.enabled: true
      # Origins allowed for cross-origin requests; the regular expression below allows all origins
http.cors.allow-origin: /.*/
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
      # How long a node waits after asking its peers again before it considers the request failed. Defaults to 3s.
discovery.request_peers_timeout: 30s
      # Start recovery as soon as this many data nodes have joined the cluster
gateway.recover_after_data_nodes: 2
      # How long to wait before starting recovery anyway if the expected number of data nodes has not joined. Defaults to 5m.
gateway.recover_after_time: 3m
      # Expected number of data nodes in the cluster. Recovery of local shards starts once this many data nodes have joined. Defaults to 0.
gateway.expected_data_nodes: 3
      # Disable TLS on the HTTP layer so that HTTP is available on an unencrypted port.
xpack.security.http.ssl.enabled: false
      # Allow indices to be created automatically
action.auto_create_index: true
      # Maximum number of open shards allowed per data node
cluster.max_shards_per_node: 10000
podTemplate:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
env:
          # Protocol used by the readiness probe
- name: READINESS_PROBE_PROTOCOL
value: "http"
resources:
requests:
            # Override ECK's default memory request (2Gi) for the Elasticsearch container
memory: 1Gi
cpu: 1
limits:
memory: 8Gi
cpu: 4
volumeClaimTemplates:
- metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path elsewhere.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
        storageClassName: sfsturbo-es-sc
Save the manifest above as elastic.yaml, then deploy the Elasticsearch application:
kubectl apply -f elastic.yaml
If the deployment does not succeed, check the operator logs:
kubectl logs --tail=30 -f pod/elastic-operator-0 -n apm
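You can also watch the cluster resource itself and, if Pods get stuck, inspect their events (unbound PVCs and insufficient memory are the usual suspects):
# Watch until HEALTH turns green and PHASE becomes Ready
kubectl get elasticsearch es-quickstart -n apm -w
# Inspect one of the Pods if it stays Pending
kubectl describe pod es-quickstart-es-default-0 -n apm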
Verifying the service
Check the ES cluster information:
[root@node1 ~]# kubectl get elasticsearch -n apm
NAME HEALTH NODES VERSION PHASE AGE
es-quickstart green 3 7.17.3 Ready 6m54s
[root@node1 ~]# kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=es-quickstart' -n apm
NAME READY STATUS RESTARTS AGE
es-quickstart-es-default-0 1/1 Running 0 7m44s
es-quickstart-es-default-1 1/1 Running 0 7m44s
es-quickstart-es-default-2 1/1 Running 0 7m44s
[root@node1 ~]# kubectl get secret -n apm
NAME TYPE DATA AGE
default-secret kubernetes.io/dockerconfigjson 1 7h32m
es-quickstart-es-default-es-config Opaque 1 52m
es-quickstart-es-default-es-transport-certs Opaque 7 52m
es-quickstart-es-elastic-user Opaque 1 52m
es-quickstart-es-http-ca-internal Opaque 2 52m
es-quickstart-es-http-certs-internal Opaque 3 52m
es-quickstart-es-http-certs-public Opaque 2 52m
es-quickstart-es-internal-users Opaque 4 52m
es-quickstart-es-remote-ca Opaque 1 52m
es-quickstart-es-transport-ca-internal Opaque 2 52m
es-quickstart-es-transport-certs-public Opaque 1 52m
es-quickstart-es-xpack-file-realm Opaque 4 52m
paas.elb cfe/secure-opaque 1 7h32m
sh.helm.release.v1.elastic-operator-crds.v1 helm.sh/release.v1 1 6h35m
sh.helm.release.v1.elastic-operator.v1 helm.sh/release.v1 1 6h24m
sh.helm.release.v1.elastic-operator.v2 helm.sh/release.v1 1 6h22m
Check the PVCs:
[root@node1 ~]# kubectl get pvc -n apm
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-es-quickstart-es-default-0 Bound pvc-1ac4866b-8b09-4a65-ac66-a979197588b6 50Gi RWO sfsturbo-es-sc 75m
elasticsearch-data-es-quickstart-es-default-1 Bound pvc-8bfc5118-2eba-403d-a705-4d3d179dbe79 50Gi RWO sfsturbo-es-sc 75m
elasticsearch-data-es-quickstart-es-default-2 Bound pvc-7f4b715b-a8da-4a03-80e7-9ad202d5882c 50Gi RWO sfsturbo-es-sc 75m
Requesting Elasticsearch access
[root@node1 ~]# kubectl get service es-quickstart-es-http -n apm
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-quickstart-es-http ClusterIP 10.247.80.98 <none> 9200/TCP 11m
# Get the password of the elastic user (adjust for the user you actually use)
PASSWORD=$(kubectl get secret es-quickstart-es-elastic-user -n apm -o go-template='{{ index .data "elastic" | base64decode }}')
Verify the cluster health:
kubectl exec es-quickstart-es-default-0 -n apm -- curl -s -u "elastic:$PASSWORD" -k "https://es-quickstart-es-http:9200/_cat/health"
Output:
Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)
1715163850 10:24:10 es-quickstart green 3 3 2 1 0 0 0 0 - 100.0%
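For a closer look at the individual nodes and their roles, query the _cat/nodes API the same way (this assumes $PASSWORD still holds the elastic user's password, as set above):
kubectl exec es-quickstart-es-default-0 -n apm -- curl -s -u "elastic:$PASSWORD" -k "https://es-quickstart-es-http:9200/_cat/nodes?v"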
Deploying Kibana [7.17.3]
Deploy Kibana, specifying the namespace and the image version:
cat << EOF > kibana.yaml
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
namespace: apm
spec:
version: 7.17.3
count: 1
http:
tls:
selfSignedCertificate:
disabled: true
config:
    # Localize the UI into Chinese
i18n.locale: "zh-CN"
  # elasticsearchRef is a reference to an Elasticsearch cluster running in the same Kubernetes cluster.
elasticsearchRef:
    # Name of the existing Kubernetes object for the Elastic resource managed by ECK.
name: es-quickstart
EOF
Alternatively, configure Kibana to connect to the ES cluster over plain HTTP:
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
namespace: apm
spec:
version: 7.17.3
count: 1
http:
tls:
selfSignedCertificate:
disabled: true
config:
    # Localize the UI into Chinese
i18n.locale: "zh-CN"
server.publicBaseUrl: "http://kibana.qshtest.com"
elasticsearch.hosts:
- http://es-quickstart-es-http.apm.svc:9200
elasticsearch.username: elastic
elasticsearch.password: "q8yg6903qOa7BNmo7199yjs2"
elasticsearch.requestHeadersWhitelist:
- authorization
podTemplate:
spec:
containers:
- name: kibana
env:
- name: NODE_OPTIONS
value: "--max-old-space-size=4096"
resources:
requests:
memory: 1Gi
cpu: 0.5
limits:
memory: 4Gi
cpu: 2
nodeSelector:
role: apm
Deploy the application:
kubectl apply -f kibana.yaml
Specifying a namespace in elasticsearchRef is optional if the Elasticsearch cluster and Kibana run in the same namespace. An additional serviceName attribute can be specified to target a custom Kubernetes service. See Traffic Splitting for details.
Monitoring the logs
Once the operator logs no longer report errors for the Kibana resource, the deployment has succeeded.
Monitor Kibana health and creation progress.
As with Elasticsearch, you can retrieve details about the Kibana instance:
kubectl get kibana -n apm
And the associated Pods:
kubectl get pod -n apm --selector='kibana.k8s.elastic.co/name=quickstart'
Accessing Kibana
A ClusterIP Service is automatically created for Kibana:
kubectl get service quickstart-kb-http -n apm
Use kubectl port-forward to access Kibana from your local workstation:
kubectl port-forward service/quickstart-kb-http 5601 -n apm
Open https://localhost:5601 in your browser. Your browser will show a warning because the self-signed certificate configured by default is not verified by a known certificate authority and is not trusted by your browser. You can acknowledge the warning for the purposes of this quickstart, but it is strongly recommended that you configure valid certificates for any production deployment. Note that the Kibana manifest above disables the self-signed certificate, in which case Kibana is served over plain HTTP at http://localhost:5601 instead.
Log in as the elastic user. The password can be obtained with the following command:
kubectl get secret es-quickstart-es-elastic-user -n apm -o=jsonpath='{.data.elastic}' | base64 --decode; echo
API Debugging
ES7 cluster status APIs
Get the cluster health
GET /_cluster/health
Use a GET request to retrieve the cluster health. For example:
kubectl exec es-quickstart-es-default-0 -n apm -- curl -s -u "elastic:$PASSWORD" -k "https://es-quickstart-es-http:9200/_cluster/health?pretty"
Get index information
Use a GET request to retrieve information about the indices in the cluster. For example:
GET /_cat/indices?v
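Run it with curl the same way as the health check above (again assuming $PASSWORD holds the elastic user's password):
kubectl exec es-quickstart-es-default-0 -n apm -- curl -s -u "elastic:$PASSWORD" -k "https://es-quickstart-es-http:9200/_cat/indices?v"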
More API documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/rest-apis.html
Installing the elasticsearch-head plugin
Download the es-head plugin:
mkdir files
# Download the source archive
wget -O files/elasticsearch-head-master.zip https://github.com/mobz/elasticsearch-head/archive/refs/heads/master.zip
Write the Dockerfile:
FROM node:alpine
WORKDIR /opt/
COPY files/elasticsearch-head-master.zip .
# Install unzip to extract the source archive
RUN apk -U add zip unzip && \
    rm -rf /var/cache/apk/*
RUN unzip elasticsearch-head-master.zip \
    && rm -rf elasticsearch-head-master.zip
WORKDIR /opt/elasticsearch-head-master
# Install grunt-cli plus the project dependencies (including grunt) so that "npm run start" works
RUN npm install grunt-cli && npm install
EXPOSE 9100
# "npm run start" runs "grunt server", which serves the UI on port 9100
CMD [ "/bin/sh", "-c", "npm run start" ]
Build the image:
docker build -t elasticsearch-head:latest .
sudo docker tag elasticsearch-head:latest swr.cn-north-4.myhuaweicloud.com/ops-tools/elasticsearch-head:latest
sudo docker push swr.cn-north-4.myhuaweicloud.com/ops-tools/elasticsearch-head:latest
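Optionally, smoke-test the image locally before pushing it (this assumes Docker on your workstation can run the image):
docker run --rm -p 9100:9100 elasticsearch-head:latest
# Then open http://localhost:9100 in a browser; stop the container with Ctrl+C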
Write the Deployment:
cat << EOF > elasticsearch-head.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch-head
namespace: apm
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-head
template:
metadata:
labels:
app: elasticsearch-head
spec:
containers:
- name: elasticsearch-head
image: swr.cn-north-4.myhuaweicloud.com/ops-tools/elasticsearch-head:latest
imagePullSecrets:
- name: default-registry-secret
---
apiVersion: v1 # Resource version
kind: Service # Resource type
metadata: # Metadata
  name: elasticsearch-head # Resource name
  namespace: apm # Namespace
spec: # Spec
  selector: # Label selector that decides which Pods this Service fronts
    app: elasticsearch-head
  type: NodePort # Service type
  ports: # Port configuration
  - protocol: TCP
    name: elasticsearch-head
    port: 9100 # Service port
    targetPort: 9100 # Pod port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-es-head
namespace: apm
spec:
ingressClassName: nginx-ingress
rules:
- host: es-head.qsh.cn
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: elasticsearch-head
port:
number: 9100
EOF
Deploy the application:
kubectl apply -f elasticsearch-head.yaml
After deployment, open http://es-head.qsh.cn in a browser.
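Note that elasticsearch-head runs entirely in the browser, so the Elasticsearch HTTP endpoint you point it at must be reachable from your workstation (for example via kubectl port-forward or a dedicated Ingress). Since security is enabled, elasticsearch-head's README describes passing basic-auth credentials as auth_user / auth_password URL query parameters; a sketch of that flow (substitute the elastic password retrieved earlier), which relies on the CORS settings configured in the Elasticsearch manifest above:
# Forward the Elasticsearch HTTP service to the local workstation
kubectl port-forward service/es-quickstart-es-http 9200 -n apm
# Then open es-head with credentials in the URL and connect it to http://localhost:9200
# http://es-head.qsh.cn/?auth_user=elastic&auth_password=<elastic-password>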
Original article: https://blog.csdn.net/qianghong000/article/details/144459229